The Architecture of Identity:
Embedding Privacy in Market Institutions

Philip E. Agre
Department of Information Studies
University of California, Los Angeles
Los Angeles, California 90095-1520
USA

pagre@ucla.edu
http://polaris.gseis.ucla.edu/pagre/

Information, Communication and Society 2(1), 1999, pages 1-25.

Please do not quote from this version, which may differ slightly from the version that appears in print.

8200 words.

 

Abstract. As the Internet becomes integrated into the institutional world around it, attention has increasingly been drawn to the diverse ways in which information technologies mediate human relationships. And as an increasingly commercial Internet has been employed to capture personally identifiable information, privacy concerns have intensified. To analyze these matters more systematically, this article considers the ideas about human identity that have been implicit in the development of economics and computer science. The two fields have evolved along parallel tracks, starting with an assumption of perfect transparency and moving toward a more sophisticated appreciation of individuals' private informational states. Progress in the analysis and resolution of privacy problems will require that this evolution be taken seriously and continued.

 

1 Introduction

A minor industry is trying to give intellectual shape to the widespread intuition that information technology is bringing about a revolution in human affairs. Even to call it a revolution, of course, is already to have a theory; the social changes that accompany information technology are likened to violent, discontinuous changes in a political order, or to epistemological discontinuities in science. One template for understanding the changes is the industrial revolution, with its extraordinary increases in productivity. The problem with this approach, notoriously, is that little evidence supports it: the aggregate productivity increases that the analogy would predict are nowhere to be seen (Landauer 1995, Madrick 1998; cf Brynjolfsson and Hitt 1998). Another approach sees the Internet writing its decentralized architecture onto the economic and political workings of the global society. This approach, too, conflicts with the evidence, as networked computing facilitates the most dramatic period of industrial concentration in human history.

Yet for all the cold water, revolution is still somehow in the air; we are still challenged to make sense of changes that, whatever their eventual magnitude may be, are strikingly pervasive. To understand them, we need to focus on another of the implications of this talk of revolution: the idea that human relationships, and perhaps human nature itself, will be transformed. Computers increasingly mediate human relationships, this argument goes, and therefore the computers and the relationships, if they change, will necessarily change together. Revolutions, of course, have frequently been held to be impossible precisely because human nature does not change overnight. Perhaps human nature does not change at all past a certain point, and perhaps as a consequence the possibilities for computer-induced revolution are limited as well. I cannot evaluate these suggestions, but I can offer some account of what they mean.

No simple analysis of these matters will suffice. It is no good to infer changes in human life simply from the workings of machines; such an approach is inevitably reductionistic and can never do justice to the complexity of human life (Kling 1991). Machines cannot be understood as neutral tools, given the immense effort that organizations put into inscribing their own contingent selves into the machines' workings (George and King 1991). Nor can they be understood as mere epiphenomena of larger social processes, given the immense creativity that user communities put into appropriating them (Feenberg 1995).

To understand the place of information technology in society, we need to take an institutional approach. Institutions are the persistent structures of social life: social roles, legal systems, linguistic forms, technical standards, and all of the other components of the playing field upon which human relationships are conducted in a given society (Commons 1924, Goodin 1996, March and Olsen 1989, North 1990, Powell and DiMaggio 1991). Central to the institutional approach is an analysis of the relationship between institutions and people, and between institutions and information technology (King 1996, Kling and Iacono 1989). Institutions are not external to human affairs; they do not regulate our lives simply by reaching in to intervene. To the contrary, institutions go a long way toward defining us. To see this, consider Austin's (1962) analysis of speech acts. According to Austin, a given utterance does not count as a wedding vow or a jury verdict simply for arranging the correct words in the correct order. These speech acts are not felicitous unless certain institutional conditions obtain, and their result is to create new institutional facts in the world. The institutions, in other words, are constitutive of the acts themselves. The person who performs those acts does so in a particular institutional capacity, as the occupant and performer of a particular institutional role, in relationship to other individuals who are successfully occupying and performing complementary roles. Much of our lives is spent enacting institutional roles, and much of our own understanding of ourselves, and most of others' understanding of us, is bound up with those roles. That is not to say that we are puppets dangling on institutional strings. It is, however, to say that we have only a partial awareness of the depths to which our institutions define us. Nor is it to say that the institutions can live on without us. To the contrary, every institution is either reproduced or transformed at every moment through the actions of its participants. The relationship between individuals and institutions, in that sense, is reciprocal.

The institutional approach is particularly well-suited to the analysis of privacy issues. Privacy is patently an institutional matter; it pertains to the institutionally organized ability of individuals to negotiate their relationships with others. Philosophical and legal analysis has often identified privacy as a precondition for the development of a coherent self (Post 1989, Schoeman 1984), and privacy issues have come increasingly to the fore as technologies for mediating human relationships have become more pervasive (Clarke 1994).

But the connection between institutions and privacy can also be found at a deeper level: both pertain to the construction of human identity. Information technology and privacy policy as we know them today both evolved under the assumption that computerized records fully and transparently identify the people whose lives they represent. Once those records are created, we have assumed, privacy is a matter of regulating the uses to which the records are put. Recent innovations in cryptography, however, have changed this picture considerably (e.g., Chaum 1985, 1992), and it has become clear that privacy can often be protected in a more fundamental way by simply not creating individually identifiable information in the first place (Information and Privacy Commissioner and Registratiekamer 1995).

More generally, new cryptographic protocols have created a vast design space. Along one edge of this space lie the traditional technologies for creating personally identifiable records. Along the opposite edge lie technologies of anonymity, for example smart cards purchased with cash, for which it is nearly impossible to infer a user's identity. Between these two extremes lie numerous other possibilities. Digital cash (Chaum 1989), for example, makes it possible for a payer's identity to be traced, but only under specific conditions. Subsequent work (e.g., Maxemchuk 1994) has generalized these methods to the point where the identities of numerous parties to a transaction are protected, while still making it possible to reconstruct particular identities when certain complicated conditions obtain.

For these technologies to fulfill their promise, they must be integrated with the larger institutional world, including business models, regulatory systems, contractual language, and social customs. My purpose here is to prepare the conceptual ground for this integration. I will argue that privacy-enhancing technologies undermine some fundamental assumptions about both technology and economics. I will proceed in several steps, as follows. Section 2 will frame the contemporary policy debate by sketching some of the complexity hidden within the widespread notion that privacy can be regulated by means of markets. Section 3 will introduce the question of human identity more fully by drawing attention to certain features of the natural history of identity in ordinary settings and to the difficulties that surround current institutional practices of identification. Section 4 will recount the evolving place of identity in the history of computer science, focusing particularly on information technologists' long, slow transition from notions of representation based on an almost God-like objectivity to notions based on the perspectives of finite individuals. Section 5 will trace a parallel development in the history of economics, starting with the roots in physics of the neoclassical synthesis and moving toward the more complicated picture in contemporary game-theoretic models. Section 6 will draw these perspectives together by considering the construction of identity in economic institutions. Section 7 will open up the large question of the information-related dynamics by which economic institutions, and particularly their consequences for the construction of identity, are shaped. Section 8 will then conclude by sketching some new directions that these phenomena suggest for privacy policy.

2 The policy agenda

The conceptual basis of contemporary privacy policy originated in the late 1960's (Flaherty 1989). As the countries of northern Europe began building their welfare states, historical memory of the Nazi era led to concerns about centralized state databases. The "data protection" policies that arose in this environment, therefore, have been framed in political terms: data subjects are first and foremost citizens who possess rights, such as the right to correct inaccurate information about themselves in bureaucratic records (Mayer-Schoenberger 1997). These policies also bear the marks of a technological environment characterized by small numbers of large, centralized, and poorly networked mainframe computers. As both the political and technological environments have evolved, the data protection model has retained its relevance because of its abstract nature. Rather than specify an architecture, the data protection model specifies technically neutral principles that can be -- indeed, that must be -- given content in each sector, public and private, and for each information processing technology.

Nonetheless, in recent years many commentators have suggested that the data protection model is out of date. The Clinton administration in particular has been vocal in urging industry to develop market-based self-regulatory mechanisms that achieve the goals of privacy protection without the purported downsides of a formal regulatory apparatus (Bernstein 1997). Indeed, when considered abstractly, the notion of rights to personal information as commodities that can be traded like any others in the marketplace has a compelling inner logic (Laudon 1996, Stigler 1980; cf Hine and Eve 1998, Miller 1969: 1223-1225). Different people value their personal information to different degrees, and many situations will arise in which the most stringent measures for privacy protection will require systems or procedures that are inherently more expensive, given the current state of technology, than those that offer less protection. Whether society should make an infinite investment in privacy protection, or zero investment, or some investment in between these extremes, it is argued, should be left to the same market mechanisms that determine society's investments in building cars and grilling burgers.

The principal challenges to this model are ethical and economic. The ethical arguments insist that the data protection model's political framing of privacy issues is the correct one, and that privacy is indeed fundamentally a right and not something that a person should have to buy at the going rate. Economic arguments point to the special properties of information. Talking of privacy as a commodity may make sense when we compare privacy to physical things like cars and burgers, but privacy rights are in fact special kinds of commodities because of the special economic properties of information (Arrow 1984, Baker 1997). Kang (1998), for example, observes that an individual will have a hard time evaluating whether to pay a company extra to keep her personal information to itself, given that many other companies that may possess similar information will most likely equivocate about their willingness to implement similar protections. Because information can be duplicated without being destroyed, even a single uncontrolled source of personal information can take up the slack for scores of other, more responsible organizations.

The problem here does not pertain to markets as such. Rather, scenarios like Kang's point to the necessity of institutional analysis. Simply speaking of "markets" is not to say much, given the diversity of actual and potential market institutions. Indeed, one way to interpret the data protection model is precisely as a set of ground rules for a market -- ground rules that aim to ensure that markets in privacy rights function correctly. If every firm is compelled to reveal its data-handling practices then obvious problems of asymmetric information in the privacy marketplace are alleviated and consumers can make their own choices.

Nor can the analysis end here. Quite the contrary, these elementary observations open up the complex issue of the role of personal information in the design of institutions generally and market institutions in particular. Market institutions provide contexts in which people come together to transact business, and in doing so they create information whose embodiments and uses may concern the parties just as much as the goods and services that are bought and sold. In order to raise the question of markets in privacy, in other words, we should also investigate the question of privacy in markets.

The question of privacy in markets becomes particularly complicated in the context of privacy-enhancing technologies (Burkert 1997). One principle of data protection suggests that an organization conducting a transaction with an individual should only create and store the information needed for the purpose -- what Kang (1998), in the context of American privacy policy, calls "functionally necessary" information. Traditionally this concept has been construed to minimize the number of data fields that are created and stored within a particular transaction. But it can also be construed to minimize the degree to which the data subject is identified at all. Even if this principle is not legislated as an inherent property of market institutions, it should not be ruled out from the start. That is, market institutions should at least not make it technologically or administratively impossible a priori for buyers and sellers to contract for a minimal degree of identification. The great difficulty of conceptualizing this guideline in practice may provide us with the full measure of the difficulty of the topic.

3 The question of identity

Identity is central to social and institutional life, and yet the concept of identity is interpreted in quite different ways. A vast intellectual and popular discourse, for example, speaks of ethnic identities, of the role of identity in the construction of nation and community, and of the difference between the identities that are ascribed to one by others and the identities that one fashions for oneself. Identity in this sense is a public, symbolic phenomenon that is located in history, culture, and social structure.

This conception of identity contrasts strikingly with the conceptions that have historically been employed by information technologists. For information technologists, identity is epitomized by proper names. But identities in technical discourse are not the same as names: they are somehow purer, more mathematical. The identity of a human being, or for that matter a chair or a company, essentially consists in its being one single thing that is different from other, countable things. In the traditional ontology of computer science, in other words, the identity of a thing is strictly separate from, and prior to, its attributes.

To investigate the actual or potential role of information technology in real institutional life, therefore, our starting question should be something on the order of, "what does it mean to know who someone is?". The question answers itself automatically, of course, if we imagine that everybody has direct access to a realm of Platonic essences -- one for each thing in the world, and particularly each person. The evidence, however, does not favor that theory. Instead, questions of (what philosophers call) definite reference direct our attention to the social practices in which human beings are embedded, and particularly to the social networks and authority relationships through which human beings can make reference to people, places, and things that may be distant from their own experience (Donnellan 1966, Kripke 1980). Simply to know a person's name is obviously not to know who that person is, even when the name in question is unique. Yet even a person whose name is very common can be identified reliably if the speaker and hearer share a reflexive orientation to the same institutional context and social network.

We can also know who someone is without knowing their name. A person can be a regular at a restaurant, eliciting greetings and a glass of beer, simply through the recognizability of her personal appearance. Indeed, in the big picture of the natural history of identity, face recognition occupies an important place in the aforementioned spectrum between complete identification and complete anonymity. Human beings can recognize one another's faces with remarkable skill, even if they cannot as readily attach names to them, and yet they find even familiar faces hard to describe. Walking into a restaurant, whether once or a hundred times, does not permit the employees or customers of the restaurant to create a representation of one's face that can readily be indexed into a database and connected to other sources of information. Police artists can make sketches of a suspect's face from witness reports, and members of the public can often recognize a face from sketches published in the newspaper, but making such sketches requires unusual skills and is expensive and fallible. Faces, in other words, have a one-way quality that is at least broadly reminiscent of the conditional traceability of identity in cryptographic protocols such as digital cash. And social customs depend on this quality. Social relationships unfold not through instantaneous access to one another's complete dossiers, but rather through an exchange of information that is incremental and negotiated. In face-to-face contexts, the first information that one reveals is very frequently the appearance of one's face, and the traditional customs for negotiating relationships would collapse if faces could be easily communicated in a uniquely precise way to others, much less connected by automatic means with computerized files.

In an institutional setting, to "know who somebody is" is roughly speaking the ability to get hold of them. Vendors may need to collect debts. Organizations that maintain customer accounts need to be assured that a person who shows up at a customer service point, or who calls on the telephone, is the same person who opened the account. The conventional strategy for establishing identity is, obviously, to create a unique representation of each individual. Each interaction between an individual and an organization then has (at least) two steps: first find out who the person is (that is, establish and authenticate an identifier) and then transact business with him or her (that is, work on the records that are indexed using that identifier). And yet, in actual institutional practice, this strategy has been implemented in a remarkably slapdash manner. Simsion (1994), for example, observes that real systems have tended to employ identifiers such as name-plus-birthdate that are not necessarily unique, and that can even change as an individual changes social status or when errors are discovered (cf Agre 1997c). Many organizations employ identifiers such as the Social Security Number that were never intended for such purposes and are poorly suited for them. What is more, organizations that issue identification mechanisms such as driver's licenses are frequently plagued by corruption (Ellis 1998). As a result, many institutional systems of identification are riddled with opportunities for error and fraud, and organizations often attempt to strengthen their own identification mechanisms by requiring individuals to display identification from other organizations.
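
To make the conventional two-step structure concrete, consider the following minimal sketch (purely illustrative; the account store, the secret, and the function names are hypothetical, and real systems add many complications). The interaction first establishes and authenticates an identifier, and only then operates on the records indexed by it:

    # Illustrative sketch of the conventional two-step interaction:
    # (1) establish and authenticate an identifier, then
    # (2) transact business against the records indexed by it.
    # All names and fields here are hypothetical.
    accounts = {
        "cust-1042": {"secret": "s3cr3t", "balance": 120.00, "address": "..."},
    }

    def authenticate(identifier, secret):
        # Step 1: find out who the person is.
        record = accounts.get(identifier)
        return record is not None and record["secret"] == secret

    def charge(identifier, amount):
        # Step 2: work on the records indexed by that identifier.
        accounts[identifier]["balance"] -= amount

    if authenticate("cust-1042", "s3cr3t"):
        charge("cust-1042", 19.95)

Note that every subsequent operation is keyed to the same persistent identifier; the credential schemes described in section 6 are designed to avoid precisely this step.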

In short, institutional mechanisms of identification, particularly in the United States, are undergoing a slow-motion catastrophe. And with the spread of information technologies that can mediate institutional relationships across great distances, the problem is likely to become severe. Approaches to alleviating the problem can be sorted, at least for heuristic purposes, into two broad categories: centralized and decentralized. A centralized system would be a complete, functioning, and unified version of the scheme just described: one single unique identifier for each person, used consistently for every purpose. A decentralized system, by contrast, would resemble much more closely the "pseudonym" scheme devised by Chaum (1990), in which each individual is assigned a distinct identifier by every organization with whom she does business, such that these identifiers are incapable of being linked to one another without the individual's permission.
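
The difference between the two architectures can be suggested with a toy example. The sketch below is not Chaum's protocol (which uses blind signatures and involves the organizations in issuing the pseudonyms); it only illustrates the decentralized property that one person can hold a different identifier at each organization, and that the identifiers cannot be linked without a secret that only the individual holds:

    import hashlib
    import hmac

    # Toy illustration of per-organization pseudonyms. NOT Chaum's
    # scheme; a keyed hash of the organization's name merely stands in
    # for the idea that the identifiers are distinct and cannot be
    # linked to one another without the individual's cooperation.
    user_secret = b"known only to the individual"

    def pseudonym(organization):
        return hmac.new(user_secret, organization.encode(), hashlib.sha256).hexdigest()[:16]

    id_at_bank = pseudonym("First National Bank")
    id_at_grocer = pseudonym("Corner Grocery")
    # The two strings differ, and neither reveals the other.

A centralized system, by contrast, would amount to reusing a single identifier, such as the Social Security Number, across every one of these relationships.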

Although the notion of a "pseudonym" would normally conjure notions of advanced cryptography or other special trickery, in fact something like the decentralized picture follows naturally so long as the various systems are developed independently of one another. Because of this, the institutional mechanisms of identification that we actually have in 1998 fall somewhere between the centralized and decentralized extremes, and their lack of robustness can be understood precisely in terms of the haphazard and accidental nature of this compromise. Organizations are well known to suffer from "islands of automation" within themselves, and in many cases the computers in the manufacturing divisions of different firms are more compatible than are the computers of the manufacturing and finance divisions of the same firm. A minor industry has grown up to patch over these separate spheres of representation (e.g., Brackett 1994), and this industry must contend with the extraordinary inertia induced by legacy systems and the sprawling networks of skills and practices that have usually grown up around them.

The project of privacy protection, in other words, does not begin from a clean slate, or from any other ideal-typical case. To the contrary, the current situation is a historical muddle of some complexity. To move forward, we need to comprehend the current situation on several levels. To keep the remainder of my discussion within manageable bounds, I will concentrate on only a few of these levels. Specifically, I want to consider some of the intellectual inheritances that shape our understandings of these problems. I will argue that those inheritances are themselves internally complex, and that only a proper historical analysis of them will make them useful in guiding the future evolution of both institutions and technology.

4 Technical evolution

Conventional histories of computing often focus their attention on the machinery itself and on the mathematical basis of logical design and programming. Important as this level of analysis is, we cannot comprehend the institutional history of computing without also attending to the specifically institutional ideas of its inventors. Schaffer (1994), for example, recounts the role of religious ideas and political economy in Babbage's understanding of the computer. Babbage followed in a long line of engineers by understanding his own work on the analogy of God's creation of the earth (Noble 1997). The factory was a microcosm, and the engineer's job was to impose upon it a perfect rational order, arranging the machinery and activities within it without any regard for the subjectivity of the workers.

The computer epitomized for Babbage this God-like ordering principle, and this conception of the computer set in motion a certain conception -- tacit to be sure, but powerful and pervasive nonetheless -- of the nature of computational representation. The word "representation", let us note, can take on several meanings. In the law, a representation is a social action that carries certain responsibilities, some of which can be enforced legally (Shapo 1974). But a representation can also be a thing: an artifact that has been designed to refer in some systematic way to circumstances in the world. And that is the sense in which computer scientists have understood the representations that they have embodied in their machines. Indeed, although the word "representation" is employed routinely by practitioners of artificial intelligence (Agre 1997a), practicing mainstream computer professionals tend to use words such as "file" and "record" that name the representational artifacts of traditional institutional practices.

For Babbage and his descendants, however, the purpose of computerized representations was not simply to replicate the practices of bureaucracies but transparently to mirror the world (Agre 1997c). Even though the contemporary discourse of computing has secularized much of Babbage's theology, there remains the assumption, or ideal, that computers take up a God's-eye view of the world (cf. Edwards 1996, Haraway 1991, Hayek 1963). This approach to representation made a certain sense in the military and industrial settings in which computing was first developed, where power relations did subject all activities to omniscient monitoring. Privacy problems arise precisely when this model is transferred into settings where such relations of representation are taken less for granted.

This, however, is not the end of the story. Computer technology has obviously evolved a great deal since Babbage's day, and even since the days when the practices of organizational computing first took shape at IBM. Among the many lines of evolution, the ones that are relevant here are the ones that begin to recognize the perspective of individual human beings. The conception of individual human beings that is implicit in the conventional practices of computer systems design, in other words, is changing, and yet much of that change has escaped notice. I have already mentioned one area of innovation that has not escaped notice -- the emerging generation of cryptographic protocols that permit designers to underwrite a remarkably wide range of informational relationships among persons. More recently these developments have been formalized into architectural frameworks that promote the routinized design of such schemes (Blaze, Feigenbaum, Resnick, and Strauss 1997; Roscheisen and Winograd 1996; Stefik 1997).

Important as these developments are, they also depend upon a more fundamental shift that has been little recognized. Mainframe computers presupposed (and still do) that they are embedded in an institutional framework in which somebody authenticates the identities of users before issuing accounts to them. The machinery itself, of course, does not obligate the organization that owns it to establish the absolute, Platonic, once-and-for-all identity of every user, whatever that would mean. Nonetheless, the designers of timesharing operating systems have assumed that they are dealing with multiple users, and that one of their central tasks is to distinguish clearly between these users and to build barriers that prevent the users from interfering with one another (Agre 1998b). A mainframe may not exactly know who its users are, but it presupposes that somebody has done enough work to tell the users apart.

Personal computers are designed on no such assumption. The notion of distinct users is not central to the ontology upon which personal computer operating systems are designed, and those personal computers that do distinguish among users do so superficially, perhaps through a password protocol in a screen saver. Your personal computer truly does not know who you are. The Internet, for its part, was first pioneered during the mainframe era. The early Internet derived its security largely through social mechanisms -- by peer pressure within the small world of Internet users and by the institutions that selected people to be users in the first place -- and it derived its technical means of security largely from the timesharing operating systems through which its users gained access to it. Yet that all changed with the introduction of personal computers. Individuals on personal computers could gain access to the Internet without logging in to anything, and the concept of logging in to the Internet itself did not exist. The most widely used Internet software packages arose in this setting, and it is this historical circumstance that explains, in sharp contrast to the picture of traditional timesharing systems, the remarkably poor facilities that the Internet provides to enable people to create boundaries for themselves. Whereas the cultural norms and cognitive practicalities of face-to-face interaction make it possible to negotiate incremental access to oneself, Internet users find themselves inadequately equipped to defend themselves against forgery, spam, and other aggressively antisocial practices.

The fundamental point is this: whereas mainframe operating systems represent their users, personal computers do not. Lest personal computers seem too strange as a result of this difference, however, we should recognize that the mainframes are the exception. Personal computers need not identify their users because the continuity of a user's relationship to a personal computer is provided for by the brute fact of his or her physical access to the machine. A personal computer does not understand its user as "John" or "Mary", any more than a car or an electric razor does, but rather as something more like "the person who is using me". The designer may well, of course, have an elaborate story in mind about the attributes and relationships of this person (Sharrock and Anderson 1994), and this story may well have been inscribed in the device itself (Akrich 1992), but the user himself or herself is still characterized in indexical terms through a certain definite description ("the person who ..."). The concept of indexicality is derived from linguistics, where it refers to any aspect of grammar whose referent depends on the circumstances in which it is used (Hanks 1990). The crucial point, in Smith's (1996) provocative formulation, is that the laws of physics are themselves indexical: they depend not on particular places and times but on "here" and "there" and "now" and "then".

Nor is this trend toward indexicality in representation confined to the tacit workings of personal computers. Research in artificial intelligence has long presupposed that human beings and other "intelligent agents" employ representations that resemble the "view from nowhere" (cf Porter 1994) that AI people call a "world model". And yet human beings, like cats and robots, are finite; they have bodies and are located physically and epistemically in space. For this reason and others, attempts to build intelligent machinery have been compelled over time toward a more explicitly indexical understanding of representation (Agre 1997a, Lesperance and Levesque 1995). The point is not exactly that anonymity is the natural order of things. We can, after all, recognize one another's faces. What is unnatural, so to speak, is precisely the attempt to establish a God-like representational perspective that gives all things their true names.

5 Economic evolution

Although many of the details of this evolution are specific to the institutions and practices of computing, nonetheless economics has traveled a remarkably analogous road. As Mirowski (1989) has observed, the neoclassical synthesis in economics that got its start with Cournot and that emerged full-blown at mid-century (Samuelson 1947) was, at its origin, explicitly modeled in both its methodology and its mathematics upon the notion of equilibrium in the classical theory of mechanics in physics. Thus, even though neoclassical economics is commonly described as an individualistic theory of human beings, it is equally important to understand the sense in which it is nothing of the sort. Central to Cournot's methods was the notion that everybody's utilities can be maximized at the same time as so many simultaneous equations. The famous idealizations of neoclassical economics -- perfect information, zero transaction costs, and so on -- all originate as the conditions under which this generalized and universal notion of equilibrium can be made to work out. Human beings in this scheme have no privacy, indeed no internal states at all, and they have no room to engage in strategic behavior. Their market-relevant attributes are perfectly and instantaneously known to everybody, or these attributes are at least taken fully into account in everybody's judgements, and all of the attributes enter into a perpetual, holistic, instant-by-instant reshuffling of the great allocative machinery of the marketplace. Thus Epstein (1993), in his economic analysis of law, can analyze the perfectly functioning market in terms of the allocative choices that would be made by a hypothetical "single owner" of all the world's resources. Posner (1981) and Stigler (1980), likewise, have employed the neoclassical economic framework in characterizing privacy as harmful to the efficient functioning of the market.
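
The "simultaneous equations" picture can be stated compactly. In a standard textbook rendering of an exchange economy (a modern formulation, not Cournot's own notation), each consumer i chooses a bundle x_i to maximize utility subject to a budget constraint determined by an endowment omega_i, and a single price vector p clears every market at once:

    \max_{x_i} u_i(x_i) \quad \text{subject to} \quad p \cdot x_i \le p \cdot \omega_i,
    \qquad \sum_i x_i^{*}(p) = \sum_i \omega_i

Nothing in this system of equations refers to what any individual privately knows, withholds, or strategically reveals; the idealizations of perfect information exist precisely so that the whole system can be solved simultaneously.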

It is only with subsequent developments that mainstream economics begins to construct a nontrivial picture of the individual human person. Whereas economists of a collectivist orientation have labored to bring the supposedly atomized individual of neoclassical theory into some kind of social relationship with others, the actual development of neoclassical theory has been close to the opposite of this: a gradual paring away, like a sculptor chipping at stone, of the many idealizations that merge the individual into one vast collective consciousness. These developments take place largely in the context of the economics of information. It has long been understood, for example, that information has a paradoxical place in neoclassical theory: markets clear in a general way only if market-relevant information is perfect and therefore free, but the necessary information will only be produced and distributed in sufficient quantity and quality if it is a commodity like any other. Stigler (1961), for example, pioneered the analysis of search costs within a neoclassical framework, and Grossman and Stiglitz (1980) argued on formal grounds that the economy can at best alternate between information that is free and information that is adequate. Boyle (1996) has developed the point philosophically.

These difficulties, of course, do not invalidate the neoclassical model. The neoclassical idealizations provide a formally attractive starting point from which a multitude of divergences can be explored -- each with an attendant increase, often substantial, in the complexity of the model. (In this, too, the neoclassical model is analogous to artificial intelligence: both fields attempt to formalize rational behavior, and AI as well has its canonical model, the formally attractive but frequently impractical "Good Old-Fashioned AI" architecture of world models and planning from which numerous divergences have been explored.) Institutional changes are also viewed as approximating the world ever more closely to the neoclassical ideal (North 1990, Williamson 1985). In other work (Agre 1997b), I have developed the consequences of these ideas, and particularly the information-centered model of Casson (1994), for the analysis of privacy problems. Briefly put, the optimistic prediction of the model is that markets with ever-lower information costs can become ever more efficient through the ever-greater pervasiveness of contractual monitoring, but only to the extent that individual consumers are willing to part cheaply with their personal information. Whether that optimistic prediction holds true in reality is a crucial topic for research.

Rather than analyze these models any further, however, let us briefly consider the even clearer emergence of a distinct human person in subsequent developments in mainstream economics. Although first invented fifty years ago, game theory (Von Neumann and Morgenstern 1947) has lately been developed into a general tool for the analysis of problems of asymmetric information among the participants in an institutionalized relationship (Hillier 1997, Rasmusen 1994). Phlips (1988: 8-9), for example, makes the useful distinction between imperfect information, which arises within an established framework of rules when participants are uncertain about one another's past and present behavior, and incomplete information, which arises when the participants do not know some of the elements (payoffs, available strategies, number of other players) that define the game itself. Game theory still employs a holistic form of analysis derived (by Nash) from Cournot, and, as Nermuth (1982) among others has remarked, the mathematics of strategic interaction can become extraordinarily complex. Nonetheless, with this kind of analysis of asymmetric information and the varieties of strategic behavior that asymmetric information permits, one finally begins to get a sense of the private realm of the individual (Phlips 1988: 2-3). This sense of the private individual is also found in the study of incentives that govern the revelation of information and the economic institutions (prototypically the Vickrey auction) that encourage individuals to make accurate representations of their preferences (see Campbell 1995).
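
The Vickrey auction illustrates the point in miniature: because the winning bidder pays the second-highest bid rather than her own, overstating or understating one's valuation cannot improve the outcome, and truthful revelation is a dominant strategy. A small sketch (illustrative only) follows:

    # Second-price (Vickrey) sealed-bid auction: the highest bidder
    # wins but pays the second-highest bid, which makes bidding one's
    # true valuation a dominant strategy.
    def vickrey_auction(bids):
        # bids: mapping from bidder to bid amount (at least two bidders)
        ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1]
        return winner, price

    winner, price = vickrey_auction({"a": 10.0, "b": 7.0, "c": 12.5})
    # winner is "c", who pays 10.0, the second-highest bid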

6 Institutions and identity

In sketching this history, I hope to have indicated the remarkable analogies between the intellectual development of both computer science and economics. Each has started from a transparent, God's-eye conception of human beings that subsumes them into a wholly rational universal system, and moved toward an increasing appreciation that human beings are in fact finite, located, and embodied. This progression is, one likes to think, science in action, the truth coming out. If so, it is a truth that challenges some assumptions of both technical design and public policy, both of whose public articulations remain very much rooted in the older, obsolescent conceptions of their respective fields.

In order to become practically useful, however, this emerging understanding must be embedded in an institutional context. This is an intellectual challenge, given that mainstream computer science and economics have also been similar in their foreshortened understandings of the institutional organization of technical and economic practice in the real world. Economics, for example, was long content to understand markets in terms of Walras' fictional auctioneer who could be assumed to bring buyers and sellers together at agreed prices. A "market" simply consisted of some buyers, some sellers, and some goods, and economic theory could proceed with astoundingly little curiosity about the actual material workings of marketplaces, job interviews, auto dealers, and the like. Of course, as Hodgson (1988: 182) points out, an auction is itself an institution; the institutional organization of markets was always tacitly presupposed in the theory. Now, however, it is emerging much more explicitly as a topic of research. Much of the recent economic concern with institutions, as I have already noted, has been driven by a teleology whereby technological advance progressively reduces friction in the marketplace (the term, now celebrated due to its use by Gates (1995), appears to originate with Williamson (1985)) and thereby causes the economy to approximate the neoclassical ideal ever more closely. Institutions are, on this view, a passing phase, albeit an important one.

Another version of this teleology is the widespread notion of disintermediation, which holds that the principal effect of information technology on institutions of all types is the cutting-out of intermediaries (cf Halper 1998). Although wildly oversimplified, not least because the same mechanisms that make it easier to cut out intermediaries can make the intermediaries more efficient as well, this notion has nonetheless led -- at last, though better late than never -- to principled and general analyses of the role of intermediaries in actual markets (Casson 1997; Spulber 1996, 1998).

The public prominence of information technology, then, has made it considerably harder to neglect the institutional organization of markets. Picot, Ripperger, and Wolff (1996), for example, following in the tradition of Coase's (1937) analysis of the effects of the telegraph and telephone on the boundaries of the firm, have argued that information and communication technologies cause the boundaries of the firm to fade altogether, as firms develop more complex linkages among themselves and begin to adopt market-like mechanisms within themselves. Much more work is needed, however, not least conceptual work that recovers a full awareness of the many aspects of market institutions to which mainstream economics has been oblivious (Granovetter 1992).

Once the need for an analysis of institutions has been realized, it becomes possible to develop a sophisticated analysis of the nature and role of identity. Identity becomes an issue in markets, for example, when personal reputations serve as proxies for information about the quality of goods, or when contracts call for particular parties to perform their part despite having already obtained the consideration they wanted. In economic models of markets, however, the identities of the participants are almost invariably formalized in a primitive way. Buyers and sellers, for example, might be numbered from 1 to n, and all parties will be assumed to know the numbers and possess the ongoing ability to assign the right numbers to the right players. This formalization is often wrong and invariably incomplete. So long as the formalism is restricted to a single market -- meaning, roughly, the market in a single class of goods -- many privacy issues -- especially those pertaining to the secondary use of information -- do not arise. Even within attempts to model strategic behavior in a single market, individual players' actions can reveal information that provides others with an advantage in subsequent exchanges, thereby leading to a space of trade-offs between short-term optimality and long-term strategy whose Nash equilibrium can be exquisitely complex to model. Already, however, it is assumed -- unnecessarily from a technical perspective, at least in many settings -- that players can identify one another from round to round.

Market institutions can become progressively more complex as they take into account the forms of identity that market participants can construct with the aid of privacy-enhancing technologies. Consider, for example, the potential roles for cryptographic protocols that produce "credentials" (Chaum 1990). Recall that the traditional method for establishing the various properties of a customer in a transaction has been to identify the customer and then look up the records associated with that identity. Although conceptually convenient, this two-step process collects more information than is required to achieve its purpose. A more direct mechanism would employ indexical representations -- not "this is John and John is 21 so John can enter the bar", but rather "this person is 21 so he can enter the bar". And this is what credentials provide -- assuming, of course, that the necessary institutional framework is in operation.
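
A schematic contrast may be helpful. In the sketch below (a deliberate simplification; real credential systems such as Chaum's rely on blind signatures and unlinkable pseudonyms, and a shared-secret code here merely stands in for the issuer's digital signature), the bar never learns a name and never consults a database; it verifies only that a trusted issuer has certified the single relevant attribute:

    import hashlib
    import hmac

    # Toy attribute credential: an issuer certifies "over_21" and the
    # verifier checks the certification without ever identifying the
    # holder. A keyed hash stands in for a real digital signature.
    issuer_key = b"issuer signing key"

    def issue_credential(attribute):
        tag = hmac.new(issuer_key, attribute.encode(), hashlib.sha256).hexdigest()
        return (attribute, tag)

    def admit_to_bar(credential):
        attribute, tag = credential
        expected = hmac.new(issuer_key, attribute.encode(), hashlib.sha256).hexdigest()
        # The verifier learns only "this person is over 21" -- no name,
        # no birthdate, no record to index into other databases.
        return attribute == "over_21" and hmac.compare_digest(tag, expected)

    print(admit_to_bar(issue_credential("over_21")))   # True

In a shared-secret sketch like this the verifier could also mint credentials, which is why real systems use public-key signatures; the point here is only the shape of the interaction.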

More sophisticated institutional analyses of identity are possible. Poster (1990), for example, outlines a theory of databases as what Foucault calls "discourses". For Foucault, a discourse is not simply a linguistic phenomenon but includes the full ensemble of practical arrangements by which individuals are positioned in institutional roles -- teacher and student, judge and jury, doctor and patient, and so on. These arrangements increasingly involve databases, of course, and Poster argues that we miscomprehend the nature of databases if we simply look at them as patterns of bits in a computer. Instead, he points out, databases derive their usefulness from their position in larger circuits of relationships and interactions among people. This is Poster's version of the proposition, introduced at the outset, that human beings and information technologies are coevolving through the mediation of institutions and the social roles that they define. Information technologists, on this view, are insufficiently aware of the institutional arrangements that actively reconfigure the world to make the necessary capture of data possible in practice (Agre 1994). And neoclassical economists themselves take an unreasonably narrow view of institutions, even market institutions, in their failure to analyze the elaborate connections between the allocative mechanisms of the market and the identity-sustaining mechanisms of other spheres of life. It is here that the two understandings of identity -- the cultural and the technical -- begin to converge into a single complex phenomenon.

Poster's analysis also draws attention to the roles of the body in constructing identity. One such role obviously pertains to the state: among the traditional reasons to identify the parties to a contract is to make it possible to hale them into court if they violate the contract. Identity, in other words, is employed as a means of access to a person's body, and many contracts require an individual to commit to the practical arrangements that may one day make this access possible. With privacy-enhancing technologies, however, it is entirely feasible to gain access to an individual's identity, and thereby to their person, only upon a breach of the contract, and not before.

Once again, we should recognize that this kind of wrangling over the establishment of identity is not the natural way of things. An enormous variety of practical arrangements make it possible for people to conduct complex business anonymously by taking advantage of the properties of bodies. A supermarket checkout aisle, for example, provides physically for queueing and effectively traps each successive customer in a confined space while the transaction is being conducted. The point, of course, is not to imprison anybody in an extreme sense, but rather to provide for the continuity of the indexical relationship between this-customer and this-clerk while the various steps in the transaction are being executed, thereby providing, using a term from computer science, for the atomicity of the transaction. Slips of paper serve much the same purpose: sufficiently unforgeable for a given degree of risk, they provide continuity through the several steps of a transaction such as attending a movie. Even though words of computer memory were originally modeled on the entries in a paper form, many of the security problems with computers derive precisely from their unpaperlike ability to be modified without leaving any eraser marks. Here again cryptographic protocols have begun to recover something like the protections that were already a natural part of the pre-electronic world (Hettinga 1998).

7 Shaping institutions

These remarks may provide some idea of the stakes in the coevolution between market institutions and the construction of identity. Even if we can speak reasonably about privacy as a commodity to be allocated in the market, we must first speak about privacy as an attribute of market institutions. And this, in turn, requires us to speak about the market in marketplaces. Where do market institutions come from, why do they embody the relationships and rules that they do, and how do they evolve from there? The issues become particularly acute as market institutions are increasingly mediated by information technology. Despite information technology's revolutionary reputation, many of its institutional dynamics are in fact obstinately conservative in character. Arrow (1984: 142) observes, for example, that "information creates economies of scale throughout the economy, and therefore, according to well-known principles, causes a departure from the competitive economy" -- a trend, in other words, toward increasing market power on the part of particular players.

As Mansell (1996) points out, market power can manifest itself not simply in excessive prices but also in the shaping of an unlevel playing field in electronic commerce. Market mechanisms are network technologies (Katz and Shapiro 1994) in that the benefit from adopting them depends heavily on the number of others who have also adopted them, and this dynamic may lead to the locking-in of mechanisms that institutionalize bias. Among the important biases of an electronic commerce technology are those that pertain to information extraction. A technology, for example the SET payment protocol (Phillips 1997), can become established through the coalition strategies of interested players, even though the technology inherently extracts and retains far more information about purchasers and their transactions than necessary. Callon (1991) understands these phenomena more broadly as the "irreversibilization" of a heterogeneous set of elements that become interlocked with one another as a new social practice settles into a pattern.

Even before electronic commerce technologies come into the picture, market institutions are already subject to biases from the unequal position of the players, and it would seem important to have a general theory of the process by which these biases are institutionalized. Such biases cannot exist on the neoclassical model, and so the phenomena in question must be sought in reality's departures from that model. One starting-place might be the distinction, observable in almost all real markets, between specialists -- those who spend much of their lives playing their particular role in a given type of transaction -- and generalists -- those who play their own role in that type of transaction only rarely. Someone who sells used cars for a living is a specialist in that topic, whereas almost everyone else is a generalist who purchases many different types of goods including the very occasional used car.

Specialists and generalists, obviously, occupy asymmetric locations in an institution, and this asymmetry entails informational differences. By playing the same role in a long series of substantially analogous interactions, a specialist can learn by degrees how to "work the angles" of the interaction. The specialist may choose the wording in a form contract, design those aspects of the physical environment that determine what a customer can see and when, choose the order in which various items of information are revealed during negotiation, identify phrases and other selling techniques that various types of customers find most persuasive, and so on. Consultants can make a living by moving from one specialist to the next, transferring expertise through a kind of information arbitrage (Bowker 1994). Specialists also have a much greater interest than generalists in associating and organizing among themselves, in building social networks, and in sharing experiences and generally engaging in collective cognition (Agre 1998c). It follows that, even in the context of ordinary contract negotiation, individualistic theories of unequal bargaining (e.g., Cartwright 1991, Trebilcock 1993) are woefully inadequate unless they comprehend these systematic sources of inequality.

Political scientists have understood these phenomena in terms of the collective action problems posed by organized interest groups in a democracy (Olson 1965), but that is only one manifestation of a deeper problem. Something of the shape of the problem is conveyed by Commons' (1924) use of collective bargaining as a model for the analysis of all social institutions. The point of Commons' analysis becomes visible in technological contexts through the dynamics of technical standards-setting. Conflicts over open standards, for example, can be understood as an attempt by information technology customers to engage in collective bargaining with vendors. The ideology of open standards, unclear and shifting as its meanings have often been (Abbate 1995, Cargill 1994), has largely been an attempt by information technology customers to fashion a loose coalition and present a unified front to vendors. Lacking formal organization and established forums for bargaining, the two sides in this negotiation have engaged in complicated strategic behavior that cannot be properly modeled without a full sense of each player's awareness of network effects and their consequences (Agre 1998a, Grindley 1995).

8 Conclusions

The conditions that affect privacy are integral to the construction of market institutions, and should therefore not be regarded as mere products of natural market forces. The principles of data protection, in particular, may provide guidance for institutional design. To the extent that market institutions are embodied in electronic commerce technologies, any intervention that hopes to influence the direction of those technologies needs to be timed correctly; once a standard is entrenched in market institutions, it may be for all practical purposes irreversible. Finally, the promotion of privacy-enhancing technologies should aim to incorporate them into the design of institutions generally and market intermediaries in particular.

* Acknowledgements

This paper was prepared for the Telecommunications Policy Research Conference, Alexandria, Virginia, October 1998. I appreciate the comments of the anonymous referees.

* References

Janet Abbate, "Open systems" and the Internet, Society for Social Studies of Science / Society for History of Technology Conference, Charlottesville, October 1995.

Philip E. Agre, Surveillance and capture: Two models of privacy, The Information Society 10(2), 1994, pages 101-127.

Philip E. Agre, Computation and Human Experience, Cambridge: Cambridge University Press, 1997a.

Philip E. Agre, Introduction, in Philip E. Agre and Marc Rotenberg, eds, Technology and Privacy: The New Landscape, Cambridge: MIT Press, 1997b.

Philip E. Agre, Beyond the mirror world: Privacy and the representational practices of computing, in Philip E. Agre and Marc Rotenberg, eds, Technology and Privacy: The New Landscape, Cambridge: MIT Press, 1997c.

Philip E. Agre, The Internet and public discourse, First Monday 3(3), 1998a.

Philip E. Agre, Yesterday's tomorrow, Times Literary Supplement, 3 July 1998b, pages 3-4.

Philip E. Agre, Designing genres for new media, in Steven G. Jones, CyberSociety 2.0: Revisiting CMC and Community, Newbury Park, CA: Sage, 1998c.

Madeleine Akrich, The de-scription of technical objects, in Wiebe E. Bijker and John Law, eds, Shaping Technology/Building Society: Studies in Sociotechnical Change, Cambridge: MIT Press, 1992.

Kenneth J. Arrow, Collected Papers, Volume 4: The Economics of Information, Cambridge: Harvard University Press, 1984.

J.L. Austin, How to Do Things with Words, Cambridge: Harvard University Press, 1962.

C. Edwin Baker, Giving the audience what it wants, Ohio State Law Review 58(2), 1997, pages 311-417.

Nina Bernstein, Goals clash in shielding privacy, New York Times, 20 October 1997, page A16.

Matt Blaze, Joan Feigenbaum, Paul Resnick, and Martin Strauss, Managing trust in an information-labeling system, European Transactions on Telecommunications 8, 1997, pages 491-501.

Geoffrey C. Bowker, Science on the Run: Information Management and Industrial Geophysics at Schlumberger, 1920-1940, Cambridge: MIT Press, 1994.

James Boyle, Shamans, Software, and Spleens: Law and the Construction of the Information Society, Cambridge: Harvard University Press, 1996.

Michael H. Brackett, Data Sharing: Using a Common Data Architecture, New York: Wiley, 1994.

Erik Brynjolfsson and Lorin M. Hitt, Beyond the productivity paradox: Computers are the catalyst for bigger changes, Communications of the ACM 41(8), 1998, pages 49-55.

Herbert Burkert, Privacy-enhancing technologies: Typology, critique, vision, in Philip E. Agre and Marc Rotenberg, eds, Technology and Privacy: The New Landscape, Cambridge: MIT Press, 1997.

Michel Callon, Techno-economic networks and irreversibility, in John Law, ed, A Sociology of Monsters: Essays on Power, Technology and Domination, London: Routledge, 1991.

Donald E. Campbell, Incentives: Motivation and the Economics of Information, Cambridge: Cambridge University Press, 1995.

Carl Cargill, Evolution and revolution in open systems, StandardView 2(1), 1994, pages 3-13.

John Cartwright, Unequal Bargaining: A Study of Vitiating Factors in the Formation of Contracts, Oxford: Oxford University Press, 1991.

Mark Casson, Economic perspectives on business information, in Lisa Bud-Frierman, ed, Information Acumen: The Understanding and Use of Knowledge in Modern Business, London: Routledge, 1994.

Mark Casson, Information and Organization: A New Perspective on the Theory of the Firm, Oxford: Oxford University Press, 1997.

David Chaum, Security without identification: Transaction systems to make Big Brother obsolete, Communications of the ACM 28(10), 1985, pages 1030-1044.

David Chaum, Privacy protected payments: Unconditional payer and/or payee untraceability, in D. Chaum and I. Schaumuller-Bichl, eds, Proceedings of Smart Card 2000, North Holland, 1989, pages 69-93.

David Chaum, Showing credentials without identification: Transferring signatures between unconditionally unlinkable pseudonyms, in J. Seberry and J. Pieprzyk, eds, Advances in Cryptology: Auscrypt '90, Berlin: Springer-Verlag, 1990, pages 246-264.

David Chaum, Achieving electronic privacy, Scientific American 267(2), 1992, pages 96-101.

Roger Clarke, The digital persona and its application to data surveillance, The Information Society 10(2), 1994, pages 77-92.

Ronald H. Coase, The nature of the firm, Economica NS 4, 1937, pages 385-405.

John R. Commons, Legal Foundations of Capitalism, New York: Macmillan, 1924.

Keith S. Donnellan, Reference and definite descriptions, Philosophical Review 75(3), 1966, pages 281-304.

Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America, Cambridge: MIT Press, 1996.

Virginia Ellis, Thriving trade in fake drivers' licenses poses tough problem for DMV, Los Angeles Times, 5 April 1998, pages A1, A26.

Richard A. Epstein, Holdouts, externalities, and the single owner: One more salute to Ronald Coase, Journal of Law and Economics 36, 1993, pages 553-586.

Andrew Feenberg, Alternative Modernity: The Technical Turn in Philosophy and Social Theory, Berkeley: University of California Press, 1995.

David H. Flaherty, Protecting Privacy in Surveillance Societies: The Federal Republic of Germany, Sweden, France, Canada, and the United States, Chapel Hill: University of North Carolina Press, 1989.

Peter Galison, The ontology of the enemy: Norbert Wiener and the cybernetic vision, Critical Inquiry 21(1), 1994, pages 228-266.

Bill Gates, The Road Ahead, New York: Viking, 1995.

Joey F. George and John L. King, Examining the computing and decentralization debate, Communications of the ACM 34(7), 1991, pages 63-72.

Robert E. Goodin, ed, The Theory of Institutional Design, Cambridge: Cambridge University Press, 1996.

Mark Granovetter, Economic action and social structure: The problem of embeddedness, in Mark Granovetter and Richard Swedberg, eds, The Sociology of Economic Life, Boulder: Westview, 1992.

Peter Grindley, Standards Strategy and Policy: Cases and Stories, Oxford: Oxford University Press, 1995.

Sanford J. Grossman and Joseph E. Stiglitz, On the impossibility of informationally efficient markets, American Economic Review 70(3), 1980, pages 393-408.

Mark Halper, Middlemania, Business 2.0 3(7), 1998, pages 45-60.

William F. Hanks, Referential Practice: Language and Lived Space Among the Maya, Chicago: University of Chicago Press, 1990.

Donna J. Haraway, Situated knowledges: The science question in feminism and the privilege of partial perspective, in Simians, Cyborgs, and Women: The Reinvention of Nature, New York: Routledge, 1991.

Friedrich A. Hayek, Individualism and Economic Order, Chicago: University of Chicago Press, 1963.

Robert Hettinga, Call for founders: An email list (and subsequent conferences) on digital bearer settlement, electronic message to the DBS mailing list, March 1998. On file with the author.

Brian Hillier, The Economics of Asymmetric Information, New York: St. Martin's Press, 1997.

Christine Hine and Juliet Eve, Privacy in the marketplace, The Information Society 14(4), 1998, pages 253-262.

Geoffrey M. Hodgson, Economics and Institutions: A Manifesto for a Modern Institutional Economics, Cambridge, UK: Polity Press, 1988.

Information and Privacy Commissioner (Ontario) and Registratiekamer (The Netherlands), Privacy-Enhancing Technologies: The Path to Anonymity (two volumes), Information and Privacy Commissioner (Toronto) and Registratiekamer (Rijswijk), 1995.

Jerry Kang, Information privacy in cyberspace transactions, Stanford Law Review 50(4), 1998, pages 1193-1294.

Michael L. Katz and Carl Shapiro, Systems competition and network effects, Journal of Economic Perspectives 8(2), 1994, pages 93-115.

John L. King, Where is the payoff from computing?, in Rob Kling, ed, Computerization and Controversy: Value Conflicts and Social Choices, second edition, San Diego: Academic Press, 1996.

Rob Kling and Suzanne Iacono, The institutional character of computerized information systems, Office: Technology and People 5(1), 1989, pages 7-28.

Rob Kling, Computerization and social transformation, Science, Technology, and Human Values 16(3), 1991, pages 342-367.

Jack Knight, Institutions and Social Conflict, Cambridge: Cambridge University Press, 1992.

Saul Kripke, Naming and Necessity, Cambridge: Harvard University Press, 1980.

Thomas K. Landauer, The Trouble with Computers: Usefulness, Usability, and Productivity, Cambridge: MIT Press, 1995.

Kenneth C. Laudon, Markets and privacy, Communications of the ACM 39(9), 1996, pages 92-104.

Yves Lesperance and Hector J. Levesque, Indexical knowledge and robot action: A logical account, Artificial Intelligence 73(1-2), 1995, pages 69-115.

Jeff Madrick, Computers: Waiting for the revolution, New York Review of Books, 26 March 1998, pages 29-33.

Robin Mansell, Designing electronic commerce, in Robin Mansell and Roger Silverstone, eds, Communication by Design: The Politics of Information and Communication Technologies, Oxford: Oxford University Press, 1996.

James G. March and Johan P. Olsen, Rediscovering Institutions: The Organizational Basis of Politics, New York: Free Press, 1989.

Nicholas F. Maxemchuk, Electronic document distribution, AT&T Technical Journal 73(5), 1994, pages 73-80.

Viktor Mayer-Schoenberger, Generational development of data protection in Europe, in Philip E. Agre and Marc Rotenberg, eds, Technology and Privacy: The New Landscape, Cambridge: MIT Press, 1997.

Arthur R. Miller, Personal privacy in the computer age: The challenge of new technology in an information-centered society, Michigan Law Review 67(6), 1969, pages 1089-1246.

Philip Mirowski, More Heat than Light: Economics as Social Physics, Physics as Nature's Economics, Cambridge: Cambridge University Press, 1989.

Manfred Nermuth, Information Structures in Economics: Studies in the Theory of Markets with Imperfect Information, Berlin: Springer-Verlag, 1982.

David Noble, The Religion of Technology, New York: Knopf, 1997.

Douglass C. North, Institutions, Institutional Change, and Economic Performance, Cambridge: Cambridge University Press, 1990.

Mancur Olson, Jr., The Logic of Collective Action: Public Goods and the Theory of Groups, Cambridge: Harvard University Press, 1965.

David J. Phillips, Cryptography, secrets, and the structuring of trust, in Philip E. Agre and Marc Rotenberg, eds, Technology and Privacy: The New Landscape, Cambridge: MIT Press, 1997.

Louis Phlips, The Economics of Imperfect Information, Cambridge: Cambridge University Press, 1988.

Arnold Picot, Tanja Ripperger, and Birgitta Wolff, The fading boundaries of the firm: The role of information and communication technology, Journal of Institutional and Theoretical Economics 152(1), 1996, pages 65-79.

Theodore M. Porter, Information, power and the view from nowhere, in Lisa Bud-Frierman, ed, Information Acumen: The Understanding and Use of Knowledge in Modern Business, London: Routledge, 1994.

Richard A. Posner, The Economics of Justice, Cambridge: Harvard University Press, 1981.

Robert C. Post, The social foundations of privacy: Community and self in the common law tort, California Law Review 77(5), 1989, pages 957-1010.

Mark Poster, Databases as discourse, or, Electronic interpellations, in The Mode of Information: Poststructuralism and Social Context, Cambridge: Polity Press, 1990.

Walter W. Powell and Paul J. DiMaggio, eds, The New Institutionalism in Organizational Analysis, Chicago: University of Chicago Press, 1991.

Eric Rasmusen, Games and Information: An Introduction to Game Theory, second edition, Cambridge, MA: Blackwell, 1994.

Martin Roscheisen and Terry Winograd, A communication agreement framework for access/action control, Proceedings of the IEEE Symposium on Research in Security and Privacy, Oakland, 1996.

Paul A. Samuelson, Foundations of Economic Analysis, Cambridge: Harvard University Press, 1947.

Simon Schaffer, Babbage's intelligence: Calculating engines and the factory system, Critical Inquiry 21(1), 1994, pages 201-228.

Ferdinand Schoeman, ed, Philosophical Dimensions of Privacy: An Anthology, Cambridge: Cambridge University Press, 1984.

Marshall S. Shapo, A representational theory of consumer protection: Doctrine, function and legal liability for product disappointment, Virginia Law Review 60(7), 1974, pages 1109-1386.

Wes Sharrock and Bob Anderson, The user as a scenic feature of the design space, Design Studies 15(1), 1994, pages 5-18.

Graeme C. Simsion, Data Modeling Essentials: Analysis, Design, and Innovation, New York: Van Nostrand Reinhold, 1994.

Brian C. Smith, On the Origin of Objects, Cambridge: MIT Press, 1996.

Daniel F. Spulber, Market microstructure and intermediation, Journal of Economic Perspectives 10(3), 1996, pages 135-152.

Daniel F. Spulber, The Market Makers: How Leading Companies Create and Win Markets, New York: McGraw-Hill, 1998.

Mark Stefik, Trusted systems, Scientific American 276(3), 1997, pages 78-81.

George J. Stigler, The economics of information, Journal of Political Economy 69(3), 1961, pages 213-225.

George J. Stigler, An introduction to privacy in economics and politics, Journal of Legal Studies 9(4), 1980, pages 623-644.

Michael J. Trebilcock, The Limits of Freedom of Contract, Cambridge: Harvard University Press, 1993.

John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior, Princeton: Princeton University Press, 1947.

Oliver E. Williamson, The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting, New York: Free Press, 1985.