Center of Excellence in Information Systems Assurance Research and Education


Detailed Bios and Abstracts


Dr. Bhavani Thuraisingham is the Program Director for Cyber Trust and Data and Applications Security at the National Science Foundation and has been on IPA to NSF from the MITRE Corporation since October 2001. She is part of a team at NSF setting directions for cyber security and data mining for counter-terrorism. She has been with MITRE since January 1989, where she was the department head in Data and Information Management in the Information Technology Division and later chief scientist in data management in MITRE's Information Technology Directorate. She has conducted research in secure databases for over eighteen years and is the recipient of the IEEE Computer Society's 1997 Technical Achievement Award and, recently, the IEEE's 2003 Fellow Award for her work in database security. She is also a 2003 Fellow of the American Association for the Advancement of Science. Dr. Thuraisingham has published over 200 refereed conference papers and over 60 journal articles in secure data management and information technology. She serves (or has served) on the editorial boards of journals including IEEE Transactions on Knowledge and Data Engineering, ACM Transactions on Information and Systems Security, the Journal of Computer Security, and Computer Standards and Interfaces. She is the inventor of three patents for MITRE on database inference control, has written six books on data management and data mining for technical managers, and is currently writing a textbook on database and application security based on her work over the past eighteen years. Her research interests are in the secure semantic web, sensor information security, and data mining for counter-terrorism.


Knowledge management is about corporations sharing resources and expertise, as well as building intellectual capital, to increase their competitiveness. While knowledge management practices have been around for decades, it is only with the advent of the web that knowledge management has emerged as a technology area. Corporations with Intranets promote knowledge management, as employees can learn about various advances in technology, obtain corporate information, and find the expertise within the corporation. Furthermore, when experts leave the corporation through retirement or otherwise, it is important to capture their knowledge and practices so that the corporation does not lose the valuable information acquired through many years of hard work.

One of the challenges in knowledge management is maintaining security. Knowledge management encompasses many technologies, including data mining, multimedia, collaboration, and the web. Therefore, security for web data management, multimedia systems, and collaboration systems all contributes toward securing knowledge management practices. In addition, one needs to protect the corporation’s assets, such as its intellectual property. Trade secrets have to be kept highly confidential so that competitors do not have access to them. This means one needs to enforce some form of access control, such as role-based access control, credential mechanisms, and encryption.
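As an illustration, here is a minimal sketch of the kind of role-based access check mentioned above; the roles, resources, and permissions are hypothetical examples, not drawn from the talk:

```python
# Minimal role-based access control (RBAC) sketch for corporate
# knowledge assets. All names below are illustrative placeholders.

ROLE_PERMISSIONS = {
    "engineer": {"design-docs": {"read"}},
    "manager": {"design-docs": {"read", "write"}, "trade-secrets": {"read"}},
}

USER_ROLES = {
    "alice": {"engineer"},
    "bob": {"manager"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Grant access only if some role assigned to the user permits
    the requested action on the requested resource."""
    for role in USER_ROLES.get(user, set()):
        if action in ROLE_PERMISSIONS.get(role, {}).get(resource, set()):
            return True
    return False

print(is_allowed("alice", "design-docs", "read"))    # True
print(is_allowed("alice", "trade-secrets", "read"))  # False
```

Permissions attach to roles rather than to individual users, so the corporation can grant or revoke access to a trade secret by changing a single role assignment.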

Secure knowledge management extends knowledge management concepts, tools, and strategies with security properties. That is, to have secure knowledge management, we need secure strategies, processes, and metrics: metrics must include support for security-related information; processes must include secure operations; and strategies must include security strategies, such as secure data dissemination for an organization. Various knowledge management architectures need to identify their security-critical components. For example, when knowledge is created, the creator may specify to whom the knowledge may be transferred. Additional access control techniques may be enforced by the manager of the knowledge. Knowledge sharing and knowledge transfer operations must also enforce the access control and security policies. A secure knowledge management architecture may be built around the corporation’s Intranet. This is an area that has received little attention.

The presentation will examine knowledge management strategies, processes, metrics, architectures, and other concepts, and will discuss the security impact on the various components. We will explore the relationship between secure knowledge management and securing, say, the semantic web and related technologies. We will also discuss research directions for knowledge management.




Mr. Hun Kim leads the Strategic Initiatives efforts at the Department of Homeland Security, National Cyber Security Division.  He directs the Critical Infrastructure Cyber Security, Software Assurance Program, Training and Education, R&D and Standards, Policy and Best Practice, and Exercise Planning and Coordination groups.

Previously, Mr. Kim was the Department of the Navy Critical Infrastructure Protection Program Director.  He was responsible for developing overall strategy, policy, and guidance, as well as directing implementation of the Navy and Marine Corps Critical Infrastructure Protection effort.  He also served as the Chairman of the IT Security Professional Certification Subcommittee under the President’s Critical Infrastructure Protection Board.  





Mr. McQuay is Technical Advisor, Collaborative Simulation Technology and Applications Branch, Information Systems Division, Information Directorate, Air Force Research Laboratory, Wright Research Site. He is an internationally recognized expert in modeling and simulation and distributed collaborative environment technology. He has over 33 years of experience in research on advanced simulation technology and in the development and utilization of digital simulation models for the Air Force research, development, acquisition, and test and evaluation process. He chairs an Integrated Product Team that is developing and implementing information and simulation technologies to create a Collaborative Enterprise Environment (CEE), which allows geographically dispersed individuals to work together to share and exchange data, information, knowledge, and actions.

In 2003, Mr. McQuay was selected as an AFRL Fellow for lifetime achievements and significant advancements in modeling and simulation technologies and collaboration sciences. He has a distinguished record of contributions in modeling and simulation going back to the early days of computer-based simulation of electronic phenomena through today’s use of simulation to support operational decision makers.

Mr. McQuay received a B.S. (Mathematics) from Towson University, a Certificate in Meteorology from Texas A&M University, a Master of Applied Science (Operations Research) from Southern Methodist University, and a Master of Science in Engineering (Computer Science) from The Johns Hopkins University.


The Air Force Research Laboratory (AFRL) initiated the Secure Knowledge Management (SKM) Program under the Wright Brothers Institute to provide revolutionary and visionary technologies in information & knowledge creation and sharing.  The SKM Program is sponsoring basic and applied technology research in the areas of secure knowledge discovery, creation, management, and use for enhanced decision support.  The SKM Program is a collaborative effort with participation from government, industry, and academia.  Cooperative efforts between the Air Force and industry provide the Air Force with an opportunity to influence the direction of commercial information technology  developments and interoperability of government and commercial systems.  The SKM Program is sponsoring several basic research initiatives and is developing the Aerospace Knowledge Repository (AKR), an advanced knowledge-based application, for decision support.  The SKM effort also supports the State of Ohio initiative Wright Center of Innovation for Advanced Data Management and Analysis, which will also be discussed.




Margaret E. Grayson is the President and CEO of V-ONE Corporation, a pioneer in the development of secure remote access solutions for the Internet. V-ONE offers a suite of enterprise-class software products and hardware appliances that provide a complete virtual private network solution for V-ONE’s customers. Fortune 1000 corporations and sensitive government agencies worldwide use V-ONE’s innovative technology for both wired and wireless integrated authentication, encryption, and access control.

Before joining V-ONE, Ms. Grayson served as Vice President and then CFO for SPACEHAB, Inc., and as Chief Financial Officer for CD Radio, Inc. in Washington, DC, an early entrant in the satellite radio mobile communications market. Previously, Ms. Grayson served as a senior executive and consultant to high-technology start-up companies. She was the principal financial advisor for raising private and public financing, investor relations, structuring and negotiating joint ventures, and completing five successful acquisitions, both domestic and international.

As a member of the National Infrastructure Advisory Council (NIAC), Ms. Grayson provides advice to the President on cyber security, and is widely acknowledged as a respected activist in the security community.  Margaret has published a number of articles on security and the protection of cyberspace, and is a contributing author to Inside the Minds: Security Matters. She is a frequent speaker on topics such as corporate risk management, enterprise network security, and government law enforcement and first responder information sharing for homeland security.

Ms. Grayson holds an M.B.A from the University of South Florida and a B.S. in Accounting from the State University of New York at Buffalo. Margaret is on the Board of Directors for the Montgomery College Foundation and the Dean’s Advisory Council for the School of Management at the State University of New York at Buffalo. She has also been named to the Advisory Board for the Center of Excellence in Information Assurance at SUNY Buffalo.


The tragic events of September 11, 2001, have highlighted the critical importance of information sharing for our national defense. In fact, experts have said that one of the basic failures of September 11 was the lack of a “network of people” with visibility into information. At the same time, the advent of extranets, which can grant employees, customers, suppliers, and business partners access to internal information assets, has raised the demand to share corporate knowledge. What is needed is an effective mechanism to enable collaborative information sharing while securing information flows. This presentation will consider the human and technological factors that impact secure knowledge management, including the challenges of establishing cyber-trust, the need to effectively reach a mobile workforce, the importance of privacy, and the issues associated with governance responsibilities. Real-world examples and best practices will be discussed that lead to improved “information liquidity” for the corporate and government community in the secure sharing of sensitive data.




Alan Marwick manages the Knowledge Management Technology department of the IBM Thomas J. Watson Research Center in Yorktown Heights, New York. The department develops new technologies and applications in natural language processing, information retrieval, text analysis, and text mining. These technologies are helpful in connecting people to information and to other people who can help with some task, hence the name of the department.

Alan received a D.Phil. in Physics from the University of Sussex in Britain and joined the IBM Research Division in 1985. His earlier research was on applications of nuclear methods in materials science. Since 1992 he has worked on a number of projects related to the dissemination and use of online information. His current interests include technology transfer in industrial R&D and the development of technologies useful for unstructured information management and knowledge management.


Some building blocks, or arrangements of building blocks, recur in knowledge management applications, and can be described by using the concept of patterns. In this talk, some of the patterns that are most affected by security and privacy constraints will be identified, and the impact of those constraints on the systems that implement the patterns will be discussed.




Ravi Sandhu is Professor of Information Security and Assurance and Director of the Laboratory for Information Security Technology (www.list.gmu.edu) at George Mason University. He is a leading authority on access control, authorization, and authentication models and protocols, and is especially known for his seminal and highly influential work on role-based access control. He is a Fellow of the ACM and a Fellow of the IEEE. He has published over 150 technical papers on computer security in refereed journals, conference proceedings, and books. He founded the ACM Transactions on Information and Systems Security (TISSEC) in 1997 and served as editor-in-chief until 2004. He served as Chairman of ACM's Special Interest Group on Security, Audit and Control (SIGSAC) from 1995 to 2003, and founded and led the ACM Conference on Computer and Communications Security (CCS) and the ACM Symposium on Access Control Models and Technologies (SACMAT), building both into venues of high reputation and prestige. Most recently, he founded the IEEE Workshop on Pervasive Computing Security (PERSEC) in 2004.

Ravi’s work in role-based access control (RBAC) began with the seminal RBAC96 model, which became the basis for the NIST and ANSI standard RBAC model announced in 2004. It is expected to soon become an ISO standard. He has supervised six PhD dissertations on various extensions, enhancements, and elaborations of RBAC. While continuing to pursue RBAC research, Ravi and his team have recently developed the new notion of Usage Control (UCON) to provide foundations for the next generation of access control systems. The first PhD dissertation in this new arena appeared in 2003. UCON unifies traditional access control, digital rights management, trust management, and other similar proposals in a coherent framework based on authorizations, obligations, and conditions. Other current research projects include models for dissemination control (DCON), models and architectures for secure distributed systems using Intel’s LaGrande technology, and technical and business models for secure identity management. His research has been sponsored by numerous public and private organizations, including Intel, NSF, NSA, NRO, NIST, DARPA, ARDA, FAA, IRS, Sandia National Laboratories, the Naval Research Laboratory, Lockheed Martin, Northrop Grumman, SETA Corporation, and Verizon. He has provided high-level security consulting services to several private and government organizations. Ravi Sandhu has also served as the principal designer and security architect of an identity management appliance developed by Securivacy. This appliance was the first product in its class to achieve the coveted FIPS 140 Level 2 rating from NIST, in December 2002.

Ravi Sandhu earned his B.Tech. and M.Tech. degrees in Electrical Engineering from the Indian Institutes of Technology at Bombay and Delhi respectively, and his M.S. and PhD degrees in Computer Science from Rutgers University.


The problems of identity, authorization and trust in cyberspace have been with us for over three decades spanning several generations of computing and network technologies.  Recently, and much more so looking ahead, these problems have become qualitatively more difficult, challenging and essential.  Our basic premise is that real progress on these problems requires radical shifts in our approach.  While technology advances underpin everything else, by themselves they can only provide the machinery and mechanisms to build solutions.  We argue that fundamental advances in our conceptual framework for addressing identity, authorization and trust are necessary for successful deployment of emerging technologies.  The talk presents some recent advances towards this goal, including the speaker’s OM-AM framework of objectives, models, architectures and mechanisms and the Role-Based Access Control (RBAC) and Usage Control (UCON) models developed within this framework.  The talk explores and speculates on the relevance of OM-AM, RBAC, UCON and similar conceptual advances towards emerging technologies.
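To make the access decision models named above concrete, the following toy sketch shows a UCON-style usage decision that combines an authorization, an obligation, and a condition. The specific predicates (clearance levels, license acceptance, business hours) are illustrative assumptions, not part of Sandhu's formal model:

```python
# Toy UCON-style usage decision: access is granted only when the
# authorization, the obligation, and the condition all hold.
# All predicates below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Request:
    subject_clearance: int       # authorization attribute of the subject
    object_sensitivity: int      # authorization attribute of the object
    license_accepted: bool       # obligation: subject agreed to usage terms
    within_business_hours: bool  # condition: environmental state

def usage_decision(req: Request) -> bool:
    """Combine the three UCON decision factors."""
    authorized = req.subject_clearance >= req.object_sensitivity
    obligation_met = req.license_accepted
    condition_met = req.within_business_hours
    return authorized and obligation_met and condition_met

print(usage_decision(Request(3, 2, True, True)))   # True
print(usage_decision(Request(3, 2, False, True)))  # False: obligation unmet
```

The point of the sketch is that, unlike traditional access control, the decision depends not only on who the subject is but also on fulfilled obligations and on the environment at the time of use.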




Andrew Odlyzko is Director of the interdisciplinary Digital Technology Center and an Assistant Vice President for Research at the University of Minnesota.  Prior to assuming that position in 2001, he devoted 26 years to research and research management at Bell Labs and AT&T Labs.  He has written over 150 technical papers and has three patents.  He has managed projects in diverse areas, such as security, formal verification methods, parallel and distributed computation, and auction technology.  In recent years he has also been working on electronic publishing, electronic commerce, and economics of data networks.  All his recent papers as well as further information can be found on his home page at <http://www.dtc.umn.edu/~odlyzko>.


 Many clever technologies have failed in the marketplace.  Often the main problem was not that the technical specifications were not met, but that the initial design did not take into account the economic incentives of the various players, or the ways that people actually use technology.  In general, people (especially the bulk of the population who are not technically savvy) and formal systems coexist only uneasily, and this imposes stringent limitations on the levels of security that can be attained.  On the other hand, this provides other levels of security, not normally considered in technical discussions, that enable our society to function and take advantage of the flawed technical systems.








Shiu-Kai Chin is a Professor in the Department of Electrical Engineering and Computer Science at Syracuse University and the Program Director of Computer Engineering. He is Director of the New York State Center of Advanced Technology in Computer Applications and Software Engineering (CASE) and is one of the fellows in the Systems Assurance Institute (SAI). His research applies mathematical logic to the engineering of highly-assured systems. He and his formal methods research group have created and used engineering design procedures based on higher-order logic and other formal methods to make both hardware and software. Hardware designs that he and his group have made include VLSI circuits fabricated at MOSIS (Metal Oxide Semiconductor Implementation Service) and designs compiled onto field-programmable gate arrays (Xilinx). The software systems that have been made include an implementation of Privacy Enhanced Mail (PEM). PEM is a secure electronic mail protocol (Internet Standard RFC 1421) based on public and private key cryptography. PEM provides privacy, authentication, integrity checking, and non-repudiation.

Professor Chin received the Crouse Hinds Award for Excellence in Education from the L.C. Smith College of Engineering and Computer Science in 1994. In 1997, he was appointed Laura J. and L. Douglas Meredith Professor for Teaching Excellence - Syracuse University's highest teaching award. In 2002, he received the 2001-2002 Chancellor's Citation for Outstanding Contributions to the University's Academic Programs.

Professor Chin is a co-chair of the Tools and Technology Committee of the National Institute of Justice's Electronic Crime Partnership Initiative. He has worked closely with the Information Warfare Branch at the Information Directorate of the Air Force Research Lab in Rome, NY.

He is also on the Boards of several organizations including the Board of Trustees of WCNY Public Radio and Television, the Executive Committee of the Governance Board for the City of Syracuse’s Federal Empowerment Zone,  Community Wide Dialogue of the Central NY InterReligious Council, the Onondaga Citizen's League, the Greater Syracuse Business Development Corporation, the CNY Technology Development Organization, the Computer Forensics Research and Development Center at Utica College, and SU's College of Engineering and Computer Science Advisory Board.


Towards an Interdisciplinary Understanding of Trust and Rights.

Shiu-Kai Chin (a), Polar Humenn (a), Thumrongsak Kosiyatrakul (a), Terrell Northrup (b), Susan Older (a), Stuart Thorson (b)

(a) Department of Electrical Engineering and Computer Science
(b) Maxwell School of Citizenship and Public Affairs

Syracuse University, Syracuse, New York 13244



Key words: trust, interdisciplinary, information sharing, formal methods, social science, access control



This is an interdisciplinary research effort whose objective is to rigorously link concepts of trust in the social sciences to concepts of trust in computer science and engineering. Over the past two years, Thorson & Northrup (International Relations and Political Science), Older & Humenn (Computer Science), and Chin & Kosiyatrakul (Computer Engineering) have been collaborating on a variety of projects that explore notions of trust in complex systems, where the term “trust” is domain or discipline specific, the term “system” is broadly interpreted to include networks of both humans and computers, and complexity arises out of size, differences in culture, unpredictability, and composition. These projects include:

Basic research on concepts of trust: (1) relating concepts of trust as understood by philosophers such as Annette Baier [Bai91] and Thomas Scanlon [Sca90] to notions of trust in computer science (Thorson, Older, Chin), and (2) creating a calculus for reasoning about access control policies, delegation, roles, rights, privileges, and credentials modeled by a modal logic and implemented in a theorem prover [KOHC03, KOC01] (Older, Humenn, Chin, Kosiyatrakul).

Applied research in credentials, access control, and secure information sharing: (1) participating on OASIS XACML (1) standards groups (Humenn) and developing a language with formal semantics to evaluate credentials as they pertain to access control decisions [Hum03] (Humenn, Older, Chin), and (2) leading a nascent effort to develop and document organizational policies and practices and deploying secure information-sharing technologies to enable the Syracuse Police Department to pull information electronically from Syracuse University’s Department of Public Safety’s database related to sexual assaults (Thorson, Humenn, Chin).

What we have learned from these collaborations is that establishing and maintaining trust depends on several components. From the social sciences, the following are necessary:

Extended empathy, which our own work suggests is an important heuristic that people use to build trust in complex systems, by extending their understanding of motives and ethics to a group of people they may not know. Harré provides the example of “trusting” one’s bank ([Har99], p. 259): “I trust my bank because I believe, without perhaps ever having formulated the thought explicitly, that it is staffed by honest and competent people.”

Partially sponsored by the CASE Center at Syracuse University, a Center for Advanced Technology funded by the NY State Office of Science, Technology, and Academic Research (NYSTAR)

(1) Organization for the Advancement of Structured Information Standards eXtensible Access Control Markup Language, http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml

Ecological validity [Bru43], which provides a basis for trust because the systems and protocols used cover all the cases that could be encountered in “real life.”

Delayed accounting, which provides a basis for trust because when parties have earned a particular level of trust, resources do not need to be spent for a full accounting unless warranted by circumstances. This allows resources to be used to further the ends of the parties rather than on verifying their credibility [Bai91].

Credible promises, which provide a basis for trust because successful discharging of promises and obligations is a cornerstone of establishing and maintaining trust [Sca90].

From computer science we have learned the following are also necessary:

Precise descriptions of what authorities are recognized, the scope of each authority, how delegation of authority occurs (i.e., how representatives or delegates are chosen), how credentials are interpreted, and how access control decisions are made and accounted for. These provide a basis for trust because access control decisions can be analyzed precisely and designs can be verified for correctness [ABLP93, LABW92, WABL94].

Protocols whose purposes and interpretations are precisely understood, which support trust through common interpretations that avoid misunderstandings [OC02].

A means for independent verification when necessary, which supports trust by providing a means for accountability, error detection, and error correction when called for [KOHC03].

Our goal is to achieve a more precise understanding of how to rigorously relate notions of trust in political science to notions of trust in computer science. Doing so will move us towards the ability to field information systems that adequately account for the societal contexts in which systems operate. Our hypothesis is that the ability to rigorously relate societal notions of trust will enable policy makers, designers, and citizens to better understand whether a particular system of computer networks, protocols, human organizations, policies, and practices is trustworthy. We are exploring the relationship between computer science notions of trust, in the form of rights and access, and the larger meaning of rights in political science contexts. Our interest is to discover and develop the means to preserve the intended meaning of rights through several refinements, from the policy-maker’s view to the engineer’s implementation. Towards this end we are doing the following:

Developing a formal language to: (1) describe and reason about trust and protection of rights based on access control, attributes, credentials, roles, and authority, and (2) describe how decisions are made and whether decisions conform to some policy. The rigorous basis of this work depends on modal logic, Kripke models, and higher-order logic. 

Implementing our language(s) in executable forms (i.e., using declarative programming languages and theorem provers) that (1) enable verification of trust and rights by independent parties, and (2) allow other groups to build on our work more easily than would be the case in the absence of machine-interpretable theories and semantics. Having executable forms facilitates wider and faster adoption and adaptation of our results than would be the case otherwise.

Grounding our work in reality by (1) working with local governmental agencies and educational institutions to put into place policies, practices, and technologies to share sensitive information (e.g., sexual assault data, incident reports, personnel records, and emergency response plans) in ways that guarantee rights to privacy and comply with applicable university and law enforcement policies, and (2) working with standards groups on efforts such as the OASIS XACML and SAML (Security Assertion Markup Language) (2) to provide rigorous interpretations of credentials used to make access control decisions based on rights, attributes, and credentials.

Developing a mapping (initially informal) between the social-science vocabulary of rights protection and trust and the computer-science vocabulary of rights protection and trust.

Our secure information sharing experiment with the Syracuse University RAPE Center, Department of Public Safety, and the Syracuse Police Department is providing valuable experience in how human and scientific notions of trust must be accounted for in order to enable the willing sharing of sensitive information in a verifiably trustworthy fashion. We hope to present some of our results more fully in the complete paper.

(2) http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=security




Bharat Bhargava is a professor in the Department of Computer Sciences, with a courtesy appointment in the School of Electrical & Computer Engineering, at Purdue University. Professor Bhargava conducts research on security and privacy issues in distributed systems. This involves host authentication and key management, secure routing and dealing with malicious hosts, adaptability to attacks, and experimental studies. Related research is in formalizing evidence, trust, and fraud. Applications in e-commerce and transportation security are being tested in a prototype system. Based on his research in reliability, he is studying vulnerabilities in systems to assess threats to large organizations. He has developed techniques to avoid threats that can lead to operational failures. The research has direct impact on nuclear waste transport, bio-security, disaster management, and homeland security. These ideas and scientific principles are being applied to the building of peer-to-peer systems, cellular-assisted mobile ad hoc networks, and the monitoring of QoS-enabled network domains. At the 1988 IEEE Data Engineering Conference, he and John Riedl received the best paper award for their work on "A Model for Adaptable Systems for Transaction Processing." Professor Bhargava is a Fellow of the Institute of Electrical and Electronics Engineers and of the Institute of Electronics and Telecommunication Engineers. He has been awarded the charter Gold Core Member distinction by the IEEE Computer Society for his distinguished service. He received Outstanding Instructor Awards from the Purdue chapter of the ACM in 1996 and 1998. In 1999 he received the IEEE Technical Achievement Award for the major impact of his decade-long contributions to the foundations of adaptability in communication and distributed systems. In 2003, he was inducted into Purdue's Book of Great Teachers. He serves on the editorial boards of five international journals.
He serves the IEEE Computer Society on its Technical Achievement Award and Fellow committees. Professor Bhargava is the founder of the IEEE Symposium on Reliable Distributed Systems, the IEEE Conference on Digital Libraries, and the ACM Conference on Information and Knowledge Management. His research group consists of nine PhD students and four postdocs. He has several NSF-funded projects. In addition, DARPA, IBM, Motorola, and Cisco provide contracts and gift funds.

More details are at www.cs.purdue.edu/people/bb


A lack of trust, privacy, security, and reliability impedes the dissemination and sharing of private data among distributed entities. The impacted interactions range from simple transactions to the most complex collaborations. Research is needed so that users give up the minimum amount of privacy to gain the level of trust demanded by their applications. Metrics are needed for the assessment of privacy loss and trust gain. This presentation advances the science and implementation schemes for building private and trusted systems and applications. The research contributes to cooperative information systems and peer-to-peer collaborations. It integrates ideas from privacy, trust, and information theory, in databases as well as in communication. The fundamental research problems include formalizing trust and privacy, developing their metrics and tradeoffs, and preserving privacy during data dissemination. The proposed solutions can be applied to diverse systems, including ad hoc networks, peer-to-peer systems, and the Semantic Web.

The privacy research is motivated by the sensitivity of personal data and by business losses due to privacy violations. Laws, including the Privacy Act of 1974 and HIPAA of 1996, have been passed to protect privacy.

The main contributions of the research can be summarized as follows:

  • Dissemination of private data. Data dissemination assures that different individuals or organizations can share their sensitive data without compromising privacy. This removes barriers to collaboration caused by privacy concerns and facilitates data sharing [2]. A context-aware data evaporation and destruction mechanism is presented. Private data and metadata, including privacy preferences and policies, are encapsulated into objects. As a private object moves away from its owner into a less and less familiar milieu, it adapts to the new environment, self-destructs when it falls into the wrong hands, or distorts itself in an unknown context.
  • Privacy metrics. Two privacy assessment metrics have been developed and employed to study and compare different privacy-preserving mechanisms and methods. The metrics are based on the notion of the anonymity set [4] and on information-theoretic approaches. The first metric hides the subject within a set from which it cannot be distinguished, while the second uses entropy to measure the uncertainty that one has about the system. The response time and efficiency of the two metrics are examined. Various privacy-violator patterns, such as the uncovered, trapping, and illusive models [1], are being investigated through experimental studies.
  • Privacy and trust tradeoffs. Privacy and trust are in an adversarial relationship. Users interacting with businesses and institutions face a tradeoff between a loss of their privacy and the corresponding gain of trust by their partners. The tradeoff problem is formalized as follows: choose the set of unrevealed credentials so that the requirements of trust establishment are satisfied with minimal privacy loss. Probability-based and lattice-based methods for estimating privacy loss have been developed. Experiments evaluating their accuracy and effectiveness are presented in scenarios involving independent attacks and gang attacks.
  • Trusted routing in wireless networks. The safety of communication in a mobile ad hoc network depends on the proper choice of the sequence of nodes used to reach the destination. The degree of trust is used to estimate the risk of selecting a node. Trust information is propagated and routes are discovered according to specific requirements. Sending packets along trusted routes that involve only trustworthy nodes decreases the probability of malicious attacks and information leakage. Algorithms are being developed to assess the trustworthiness of a route based on information about its nodes. Experiments study the integration of security mechanisms such as authentication, encryption/decryption, and filtering to defend against malicious attacks.
  • Examples of experimental studies. Experiments on the data evaporation and destruction mechanism validate it and evaluate its cost, efficiency, and impact on the dissemination procedures. They are combined with the TERA (Trust Enhanced Role Assignment) system [3] to assess the usability of the evaluator of trust gain and privacy loss.
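The two privacy metrics described above can be illustrated with a small sketch. This is not the authors' implementation; the data and function names are hypothetical, but the two quantities, anonymity-set size and entropy of the adversary's distribution over candidate subjects, follow the definitions given in the bullet on privacy metrics.

```python
import math

def anonymity_set_size(candidates):
    """First metric: the subject hides within a set of
    indistinguishable candidates; a larger set means more privacy."""
    return len(set(candidates))

def entropy(probabilities):
    """Second metric: entropy (in bits) of the adversary's probability
    distribution over candidate subjects. It is maximal when all
    candidates are equally likely, i.e., when the adversary is
    maximally uncertain."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A subject hidden among 8 equally likely candidates:
uniform = [1 / 8] * 8
print(anonymity_set_size(range(8)))  # 8
print(entropy(uniform))              # 3.0 bits of uncertainty

# After a partial disclosure the adversary's belief skews toward one
# candidate, and the measured uncertainty drops even though the
# anonymity-set size is unchanged:
skewed = [0.65] + [0.05] * 7
print(round(entropy(skewed), 2))     # 1.92 bits
```

The example also shows why the entropy metric can be more informative than set size alone: both distributions have an anonymity set of 8, but the skewed one offers far less real protection.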
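The trusted-routing bullet can likewise be sketched in a few lines. This is a simplified illustration, not the authors' algorithm: it assumes (hypothetically) that each node carries a trust value in [0, 1] and that a route's trustworthiness is the product of its nodes' trust values, so a single untrusted node sinks the whole route.

```python
def route_trust(route, trust):
    """Aggregate trustworthiness of a route as the product of the
    per-node trust values along it."""
    t = 1.0
    for node in route:
        t *= trust[node]
    return t

def most_trusted_route(routes, trust):
    """Pick the candidate route with the highest aggregate trust."""
    return max(routes, key=lambda r: route_trust(r, trust))

# Hypothetical trust values and two discovered routes to a destination:
trust = {"A": 0.9, "B": 0.95, "C": 0.4, "D": 0.85}
routes = [["A", "C"], ["A", "B", "D"]]

# The longer route wins because it avoids the low-trust node C:
print(most_trusted_route(routes, trust))  # ['A', 'B', 'D']
```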

A prototype called PRETTY (Private and Trusted Systems) has been built for measuring trust and privacy during interactions. It determines whether a user is authorized for an operation based on the policies, her credentials, and her trust level, which is dynamically updated according to her behavior. The PRETTY system serves as a platform for experimental studies that simulate violators and users with different levels of trust.
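A toy sketch can convey the flavor of trust-based authorization in the spirit of the PRETTY prototype. Everything here is hypothetical, including the class name, the default trust level, and the linear update rule; the actual system's policies and trust computation are not described in enough detail to reproduce.

```python
class TrustAuthorizer:
    """Hypothetical sketch: a user is authorized for an operation when
    her trust level meets the policy's threshold, and the level is
    updated as her behavior is observed."""

    def __init__(self, policies):
        self.policies = policies  # operation -> required trust level
        self.trust = {}           # user -> current trust level in [0, 1]

    def authorize(self, user, operation):
        # Unknown users start at a neutral trust level of 0.5.
        return self.trust.get(user, 0.5) >= self.policies[operation]

    def observe(self, user, good_behavior, step=0.1):
        # Trust rises slowly with good behavior and falls twice as
        # fast on bad behavior, clamped to [0, 1].
        level = self.trust.get(user, 0.5)
        level = level + step if good_behavior else level - 2 * step
        self.trust[user] = min(1.0, max(0.0, level))

auth = TrustAuthorizer({"read": 0.4, "write": 0.7})
print(auth.authorize("alice", "write"))  # False: default 0.5 < 0.7
for _ in range(3):
    auth.observe("alice", good_behavior=True)
print(auth.authorize("alice", "write"))  # True: 0.8 >= 0.7
```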

During the design and development of the proposed privacy solution, we consider its adoption in emerging applications such as wireless networks and e-commerce environments. A position-distortion mechanism to protect privacy in LBRS (Location-Based Routing and Services) has been proposed. Two distortion methods, time-based and grid-based, are examined. The solution can also be applied to protect privacy in an electronic supply chain management system.
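The grid-based distortion method can be sketched as follows. This is an illustrative assumption, not the proposed mechanism's actual design: the device reports only the center of the grid cell containing its true position, so every point within the same cell becomes indistinguishable, and the cell size controls the privacy/utility tradeoff.

```python
def grid_distort(x, y, cell=100.0):
    """Grid-based position distortion (illustrative sketch): snap the
    true coordinates to the center of their grid cell. Larger cells
    give more privacy but a coarser reported position."""
    gx = int(x // cell)
    gy = int(y // cell)
    return (gx * cell + cell / 2, gy * cell + cell / 2)

# True position (237.4, 981.9), reported with two cell sizes:
print(grid_distort(237.4, 981.9))             # (250.0, 950.0)
print(grid_distort(237.4, 981.9, cell=500))   # (250.0, 750.0)
```

Any two users inside the same cell produce identical reports, which is exactly the anonymity-set idea from the privacy metrics applied to location data.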

The proposed mechanism can also be applied to security-critical environments. It can provide a secure information-sharing platform for homeland security, damage control, and disaster recovery among agencies such as the NSA and the FBI. A pervasive and trusted wireless communication system is useful for the Department of Defense: it provides a high degree of responsiveness, visibility, and accessibility of information to commanders and soldiers. The trust-based routing and intruder-identification mechanisms mitigate the threat of attacks and improve the level of security.

In addition to contributing to the development of trustworthy and privacy-preserving systems, the research has a significant impact on education. Through the CERIAS security center at Purdue, specific actions are planned to train U.S. professionals in privacy and security methods and developments. This specialized training and expertise sharing will help retain jobs in the United States and avoid outsourcing them overseas. The education and training will involve Historically Black Institutions, underserved communities, and other low-income groups. The ultimate mission is to provide general knowledge and specialized tutorials in order to enhance human resources.

The research is supported by NSF IIS, NSF ANI, DARPA, IBM, and Cisco. More information is available at www.cs.purdue.edu/people/bb.



[1] B. Bhargava, Y. Zhong, and Y. Lu, "Fraud formalization and detection," in Proceedings of DaWaK 2003.

[2] B. Bhargava, C. Farkas, L. Lilien, and F. Makedon, "Trust, privacy, and security," summary of a workshop breakout session at the NSF Information and Data Management (IDM) Workshop, 2003.

[3] Y. Zhong, Y. Lu, and B. Bhargava, "TERA: An Authorization Framework Based on Uncertain Evidence and Dynamic Trust," submitted to IEEE Transactions on Knowledge and Data Engineering.

[4] L. Sweeney, "Achieving k-anonymity privacy protection using generalization and suppression," International Journal on Uncertainty, Fuzziness, and Knowledge-Based Systems, 10(5):571-588, 2002.









Johannes Gehrke is an Assistant Professor in the Department of Computer Science at Cornell University and a Faculty Associate Director of the Cornell Theory Center. He obtained his Ph.D. in computer science from the University of Wisconsin-Madison in 1999. Johannes' research interests are in the areas of data mining, data stream processing, distributed data management for sensor networks and peer-to-peer networks, and applications of database and data mining technology to the sciences. Johannes has received a National Science Foundation CAREER Award, an Alfred P. Sloan Fellowship, an IBM Faculty Award, the Cornell College of Engineering James and Mary Tien Excellence in Teaching Award, and the Cornell University Provost's Award for Distinguished Scholarship. He is serving as Program co-Chair of the 2004 ACM SIGKDD Conference. More information can be found at http://www.cs.cornell.edu/johannes.


Consider the following problem: several parties would like to jointly perform a data mining operation over their private databases. One of the parties should learn the result of the operation, but we would like to limit any extra disclosure of the parties' private data. It is possible to implement protocols for this problem using standard secure multi-party computation techniques, which are based on computational hardness assumptions; however, with today's techniques the resulting algorithms do not scale to large databases. In this talk, I will argue for statistical privacy guarantees. Statistical privacy allows one to prove probabilistic, rather than computational, guarantees on the amount of information disclosed by a protocol. By relying on statistical privacy, we can make our algorithms highly efficient and scale them to very large databases. We will demonstrate that statistical privacy is a mathematically solid and yet flexible tool for establishing privacy guarantees in a wide variety of database- and data mining-related applications.
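A classical example of a statistical privacy guarantee is randomized response, a technique in the same spirit as the randomization-based privacy-preserving mining this talk describes (the concrete parameters and function names below are illustrative, not taken from the talk). Each party perturbs its sensitive bit before reporting it, so no single report reveals the true value with certainty, yet the miner can still invert the known perturbation to estimate the population statistic.

```python
import random

def randomized_response(true_bit, p_truth=0.75, rng=random):
    """Report the sensitive bit truthfully with probability p_truth,
    and flip it otherwise. Any individual report is plausibly
    deniable: a probabilistic, not computational, guarantee."""
    return true_bit if rng.random() < p_truth else 1 - true_bit

def estimate_fraction(reports, p_truth=0.75):
    """Invert the perturbation to recover the population fraction f:
    E[report] = p*f + (1-p)*(1-f), so f = (obs - (1-p)) / (2p - 1)."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth)) / (2 * p_truth - 1)

rng = random.Random(42)
true_bits = [1] * 3000 + [0] * 7000  # true population fraction: 0.30
reports = [randomized_response(b, rng=rng) for b in true_bits]
print(round(estimate_fraction(reports), 2))  # close to 0.30
```

This illustrates the efficiency argument in the abstract: the perturbation and the estimator are both linear-time passes over the data, with no cryptographic protocol in the loop.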


This talk draws on joint work with Rakesh Agrawal, Alexandre Evfimievski, and Ramakrishnan Srikant.












Copyright 2007 © CEISARE | Home | Site Map | Contact Info | Privacy