Internet Research: For and Against

Philip E. Agre
Department of Information Studies
University of California, Los Angeles
Los Angeles, California 90095-1520
USA

pagre@ucla.edu
http://polaris.gseis.ucla.edu/pagre/

This is a chapter in Mia Consalvo, Nancy Baym, Jeremy Hunsinger, Klaus Bruhn Jensen, John Logie, Monica Murero, and Leslie Regan Shade, eds, Internet Research Annual, Volume 1: Selected Papers from the Association of Internet Researchers Conferences 2000-2002, New York: Peter Lang, 2004.

4200 words.


Abstract: Given that the Internet is an engineered system like any other, why should we distinguish "Internet research" from any other study of technology? One answer is that computers are distinctive in their direct and systematic relationship to language. Another is that the Internet, through its layered architecture, is highly appropriable. Even so, the Internet does not cause a revolution or define a wholly separate "cyber" sphere. Instead, due to its distinctive qualities, it participates in somewhat distinctive ways in the ongoing political life of institutions.


When I was going to graduate school at MIT, most of the professors around me were embarrassed to be called computer scientists. Their complaint was this: why should there be a separate field of computer science, any more than there is a separate field of refrigerator science? In their view, computers were just complex physical artifacts like any others. Following Simon (1969), they argued that design principles such as modularity are not specific to software, but are properties of the universe in general. The structures that evolve are modular because modular structures are more stable than others. Computers were simply a special case of these universal laws.

This perspective on computer science differs from the view in most textbooks. In the textbooks, a computer is a device that can compute any function that any particular Turing machine can compute. The professors at MIT would have none of this. Of course the mathematics of computability was interesting, but it reflected only one corner of a much larger space of inquiry. What they found most interesting was not the mapping from single inputs to single outputs but the relationship between the structure of a computational device and the organization of the computational process that arose when the device was set running.

The physical realization of computational processes was, however, only half the story. The other half lay in the analysis of problem domains. This is a profound aspect of computer work -- and all engineering work -- that is almost invisible to outsiders. Computers are general-purpose machines in that they can be applied to problems in any sphere. A system designer might work on an accounting application in the morning and an astronomical simulation in the afternoon. As the problems in these domains are translated into computational terms, certain patterns recur, and engineers abstract these patterns into layers of settled technique.

Here is an example. I once consulted with a company that wanted to automate the design of some complex mechanical artifacts. The designer of these artifacts might have to make several dozen design decisions. I spent several weeks sitting with an engineer and marching through a stack of manuals for the design of this category of artifacts. We needed the answer to a critical question: in working forward from requirements to design, does the designer ever need to backtrack? Is it ever necessary to make a design decision that might have to be retracted later? If backtracking were required, the company's task would become much harder. After we had worked several cases by hand, it became clear that backtracking was not only necessary but ubiquitous, and that the company needed to hire someone who could build a general-purpose architecture for the backtracking of parameterized constraints. Backtracking is an example of a structure that recurs frequently in the analysis of problem domains. The resulting analogy among domains can be pursued, and might be illuminating all around.
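
To make the idea concrete, here is a minimal sketch, in Python, of the sort of general-purpose backtracking architecture the company needed: parameters are decided one at a time, and a decision is retracted whenever it cannot be extended to satisfy the constraints. The parameter names, value ranges, and the single constraint are hypothetical placeholders, not details of the actual project.

    def backtrack(assignment, parameters, domains, constraints):
        """Choose a value for each parameter in turn, retracting choices
        that later turn out to violate a constraint."""
        if len(assignment) == len(parameters):
            return assignment                        # every decision has been made
        param = parameters[len(assignment)]          # next open decision
        for value in domains[param]:
            trial = dict(assignment, **{param: value})
            # Test only those constraints whose parameters are all decided.
            if all(test(trial) for needed, test in constraints
                   if all(p in trial for p in needed)):
                result = backtrack(trial, parameters, domains, constraints)
                if result is not None:
                    return result
        return None                                  # forces backtracking in the caller

    # Hypothetical example: two coupled design decisions.
    parameters = ["motor_power", "housing_size"]
    domains = {"motor_power": [5, 10, 15], "housing_size": ["small", "large"]}
    constraints = [(("motor_power", "housing_size"),
                    lambda a: a["housing_size"] == "large" or a["motor_power"] <= 5)]
    print(backtrack({}, parameters, domains, constraints))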

For the professors at MIT, then, engineering consists of a dialectical engagement between two activities: analyzing the ontology of a domain and realizing that domain's decision-making processes in the physical world. ("Realize" here means "make physically real" rather than "mentally understand".) Ideas about computational structure exist for the purpose of translating back and forth between these two aspects of the engineer's work. This is a profound conception of engineering. And nothing about it is specific to computers.

This story is appealing because it dissolves the concept of the computer, which normally connotes a sharp break with the past, into the great historical tradition of engineering design. It is certainly an improvement on the standard story based on computability theory. Still, I believe that both stories overlook one area in which design is different for computers than for anything else. That area pertains to language.

Computers, whatever their other virtues, at least give people something to talk about. Your friends in China may not be having the same weather, but they are having the same virus outbreaks. Computers not only transcend geographical boundaries; they also bridge disciplines. Physicists and literary critics may not be able to discuss their research, but they can discuss their computers. Soldiers and media artists struggle with the same software. This kind of universality arises precisely from the analytical phase of the design process. Computers, in this sense, provide a "trading zone" (Galison 1997) for discussions across different disciplinary languages. Computers are a shared layer in a wide variety of activities, and many activities have been reconstructed on top of the platform that computer standards provide. In these ways and more, computers are densely bound up with language (Hirschheim, Klein, and Lyytinen 1996; Swanson and Ramiller 1997; Theoharakis and Wong 2002; Weill and Broadbent 1998).

Saying this, however, does not identify what is distinctive about computers. The answer is: computers are distinctive in their relation to discourse. Discourses about the world -- that is, about people and their lives, the natural environment, business processes, social relationships, and so on -- are inscribed into the workings of computers. It does not follow, of course, that computers then turn around and reinscribe those discourses into the activities of the people who use them. Every social setting takes hold of its computers in its own distinctive way (Orlikowski 2000). But neither is a computer a blank slate. Every system affords a certain range of interpretations, and that range is determined by the discourses that have been inscribed into it. To understand what is distinctive about the inscription of discourses into computers, as opposed to their inscription into other sorts of artifacts, it helps to distinguish among three progressively more specific meanings of the idea of inscription.

(1) Shaping. Sociologists refer to the "social shaping of technology" (Bijker and Law 1992, Dierkes and Hoffman 1992, MacKenzie and Wajcman 1985). Newly invented technologies typically exist in competing variants, and political conflicts and other social processes eventually settle on particular designs. One prototype of social shaping is "how the refrigerator got its hum" (Cowan 1985): early refrigerators came in both electric and gas varieties, but the electric variety won the politics of infrastructure and regulation. Social shaping is also found in the styling of artifacts, for example tailfins on cars, and elsewhere. Social shaping analyses can be given for every kind of technology, and while language is certainly part of the process, the concept of social shaping does not turn on any specific features of language.

(2) Roles. One type of social shaping is found in the presuppositions that a technology can make about the people who use it. Akrich (1992a, 1992b) gives examples of both successes and failures that turned on the designers' understandings of users. A company created a device consisting of a solar cell, a battery, and a lamp, intended for use in countries with underdeveloped infrastructures. Rather than study the web of relationships in which those people lived, though, the company sought to make its device foolproof, for example by making it hard to take apart and by employing components that were not available on the local market. When ordinary problems arose, such as the wire between the battery and the lamp being too short, the users were unable to adapt the device. By contrast, a videocassette player embodied elaborate ideas about the user, for example as a person subject to copyright laws, that its very design was largely successful in enforcing. In each case, the designer, through a narrative that was implicit or explicit in the design process, tried to enlist the user into a certain social role. And language is geared to the construction of narratives about roles and relationships. (See also Barley (1990), Feenberg (1991: 80-82), Latour (1996), Lea and Giordano (1997), Mackay, Carne, and Beynon-Davies (2000), Sharrock and Button (1997), and Woolgar (1991).) Even so, nothing here is specific to computers either.

(3) Grammar. Where computers are really distinctive is in their relationship to grammar. Systems analysis does not exactly analyze a domain, as the professors at MIT would have it. Rather, it analyzes a discourse for talking about a domain. (Or perhaps a domain must be understood as a discourse bound up with the objects it describes.) To be sure, much of the skill of systems analysis consists in assimilating this discourse to known techniques for realizing a decision-making process in the physical world (Suchman and Trigg 1993). But the substance of the work is symbolic. Computer people are ontologists, and their work consists of stretching whatever discourse they find upon the ontological grid that is provided by their particular design methodology, whether entity-relationship data models (Simsion 2001), object-oriented programming (Booch 1996), or the language-action perspective (Winograd and Flores 1986). In each case, the systems analyst performs a profound transformation upon the domain discourse. The discourse is taken apart down to its most primitive elements. Nouns are gathered in one corner, verbs in another corner, and so on, and then the elements are cleaned up and reassembled to create the code. In this way, the structure of ideas in the original domain discourse is thoroughly mapped onto the workings of the computational artifact. The artifact will not capture the entire meaning of the original discourse, and will distort many aspects of the meaning that it does capture, but the relationship between discourse and artifact is systematic nonetheless.
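
A small sketch may make the transformation vivid. Suppose the domain discourse is the sentence "a patron borrows a book; an overdue book accrues a fine." On the grid of object-oriented design, the nouns become classes and the verbs become methods. The library domain, the class names, the 28-day loan period, and the fine rule below are hypothetical illustrations, not a description of any real system.

    from dataclasses import dataclass, field
    from datetime import date, timedelta
    from typing import List, Optional

    @dataclass
    class Book:                                  # noun -> class
        title: str
        due: Optional[date] = None

    @dataclass
    class Patron:                                # noun -> class
        name: str
        borrowed: List[Book] = field(default_factory=list)

        def borrow(self, book: Book, today: date) -> None:    # verb -> method
            book.due = today + timedelta(days=28)
            self.borrowed.append(book)

        def fine(self, today: date) -> int:                   # verb -> method
            # The institution's rule, flattened into arithmetic: one unit
            # per overdue day per book. Much of the original meaning is
            # lost, but the mapping from discourse to code is systematic.
            return sum(max((today - b.due).days, 0)
                       for b in self.borrowed if b.due is not None)

    alice = Patron("Alice")
    alice.borrow(Book("Design Manual"), date(2004, 1, 1))
    print(alice.fine(date(2004, 2, 15)))         # 17 days overdue -> 17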

Computers, then, are a distinctive category of engineered artifact because of their relationship to language. So it is striking that computer science pays so little explicit attention to discourse as such. Of course, computer science rests on formal language theory, which began as a way of analyzing the structure of natural languages (Chomsky 1957). But there is more to discourse than the formal structures of language, and more to grammar than the structures of formalist linguistics. The discourses with which computer science wrestles are part of society. They are embedded in social processes, and they are both media and objects of controversy. Computers are routinely shaped through these controversies (Lessig 1999, Mansell and Silverstone 1996, Reidenberg 1998). This is the great naivete of computer science: by imagining itself to operate on domains rather than on discourses about domains, it renders itself incapable of seeing the discourses themselves, or the social controversies that pull those discourses in contradictory directions.

To understand the actual place of computing in the social world, it helps to take an institutional approach. An institution is a stable pattern of social relationships. Every institution defines a categorical structure (a set of social roles, an ontology and classification system, etc.), a terrain of action (stereotyped situations, actions, strategies, tactics, goals), values, incentives, a reservoir of skill and knowledge, and so on (Commons 1970 [1950], David 1990, Fligstein 2001, Goodin 1996, Knight 1992, March and Olsen 1989, North 1990, Powell and DiMaggio 1991, Reus-Smit 1997). The main tradition of computer system design was invented for purposes of reifying institutional structures (cf. Callon 1998).

Institutional study of computing begins in earnest with research at UC Irvine in the late 1970's (e.g., Danziger, Dutton, Kling, and Kraemer 1982). These researchers were interested in the organizational dynamics of computing at a time when computing was new. They worked in an intellectual climate that was far too credulous about the claims of rationalism, and sought to reassert the political and symbolic dimensions of computing in the real world.

Since that time, the world has changed a great deal in some ways -- and very little in others. The Internet changed the technology of computing, for example the practical possibility of interorganizational information systems. The Internet also changed the ideological climate around computing. It was no longer necessary to persuade anyone of the significance of computing as a topic of social inquiry, even if the social sciences proper have not exactly made computing central to their view of the world. Rationalism is out, and the central authorities that had long been associated with mainframe computing have been replaced by an ideal of decentralization. Of course, ideology should be distinguished from reality, and in the real world computing is still heavily political and symbolic. The institutionalist research program of the 1970's is still alive and well, and twenty years of technological change have done little to outdate it.

To examine in detail how the world has changed, we must dig our way out from under the landslide of ideology that engulfed public discourse about computing during the Internet bubble of the 1990's. We must return to the starting-point of all serious social inquiry into computing: the evils of what Veblen (1921) called "technological determinism". Technological determinism -- which can be either an openly avowed doctrine or an inadvertent system of assumptions -- is usefully divided into two ideas: (1) that the directions of technical development are wholly immanent in the technology, and are not influenced by society; and (2) that the directions of social development are likewise entirely driven by the technology. These two ideas are both wrong. Social forces shape technology all the time. And the directions of social development are not driven by the technology itself, but rather by the incentives that particular institutions create to take hold of the technology in particular ways (Agre 2002, Avgerou 2002, Bud-Frierman 1994, Dutton 1999, Gandy 1993, Kling 1989, Mansell and Steinmueller 2000, Preston 2001).

Progress in the social study of computing requires us to discover and taxonomize the forms that technological determinism takes in received ways of thinking. Two of these might be called discontinuity and disembedding (cf. Brown and Duguid 2000). Discontinuity is the idea that information technology has brought about a sudden change in history. We supposedly live in an "information society", a "network society", or a "new media age" whose rules are driven by the workings of particular technologies. These theories are wrong as well. Of course, new information technologies have participated in many significant changes. But many other things are happening at the same time, yet other things are relatively unchanged, and the changes that do occur are thoroughly mediated by the structures and meanings that were already in place. It is easy to announce a discontinuity and attribute it to a single appealing trend, but doing so trivializes a complex reality.

Disembedding supposes new technologies to be a realm of their own, disconnected from the rest of the world. An example is the concept of "cyberspace" or the "online world", as if everything that happened online were unrelated to anything that happened offline. The reality is quite different. The things that people do on the Internet are almost always bound up with the things that they do elsewhere (Friedland 1996, Miller and Slater 2000, Wynn and Katz 1997). The "online world" is not a single place, but is divided among various institutions -- banking sites, hobby sites, extended family mailing lists, and so on, each of them simply annexing a corner of the Internet as one more forum to pursue an existing institutional logic, albeit with whatever amplifications or inflections might arise from the practicalities of the technology in use. People may well talk about the Internet as a separate place from the real world, and that is an interesting phenomenon, but it is not something that we should import into serious social analysis.

Institutional analysis thus encourages us to make distinctions. If we do not enumerate institutions (education, medicine, family, religion, politics, law), then the temptation to overgeneralize is overwhelming. We will start from the single case that interests us most, and we will pretend that the whole world works that way.

Consider, for example, the famous cartoon in which a computer-using dog asserts, "On the Internet, nobody knows you're a dog" (Steiner 1993). Unfortunately, the cartoon dog's assertion has often been taken seriously by scholars even though it is not true. When people think about identity on the Internet, they generally think of games where people adopt fictional personae, trading sites like eBay where most people use pseudonyms, or loosely woven discussion forums where people never learn much about one another as individuals. But these contexts represent only part of the Internet, reflecting the workings of certain kinds of institutions. Many other institutions work in other ways. In academia, for example, researchers are generally well-identified to one another. They meet at conferences, read one another's research, and so on. They are not interested in being anonymous online; indeed, the institutions of research create incentives for promoting one's work, and thus one's identity, at every turn.

Institutional analysis, by applying the same concepts in several contexts, also captures important commonalities. An investigator can apply a given framework to the analysis of a given case, extend the framework by making bits and pieces of novel observation, and then advertise that novel result as a heuristic resource for the analysis of other cases in the future. Investigators can read one another's analyses, and in this way they can build up a toolkit of concepts and precedents, each of which stimulates research by posing questions to whatever case is studied next. That is how interpretive social science works. It is similar to the methods of engineering that I described at the outset, except that it has not been so intricately constrained by the demands of physical realization.

How, then, from an institutional point of view, does the world change with the advent of the Internet? One answer lies in the exact relationship between information technology and institutions. Recall that information technologies are designed by starting with language, and that this language has generally originated with the categorical structures of an institution. Information technology is often said to be revolutionary -- that much is constant in the ideology from the late 1970's to the present -- but as the UC Irvine group pointed out, the actual practice of computing is anything but. Indeed the purpose of computing in actual organizational practices is often to conserve and even to rigidify existing institutional patterns (cf. Cornford 2001). As studies of print (Zaret 2000) and the telephone (Fischer 1992) have shown, media are often taken up to good effect by people who are trying to be conservative. Yet history is not just the story of people's intentions. History is a story of local initiative and global emergence that needs to be investigated concretely in each case (Marvin 1988, Orlikowski 2001).

The specific role that the Internet has played in this story is illustrated by the concept of layering (Messerschmitt 1999). The Internet is a layered system of protocols, and the layering principle is partially credited with the Internet's success. But the Internet hardly invented layering, and is best seen as one more expression of a pervasive principle of order. If a complex functionality is broken into layers, then the bottom layers can be shared among many purposes. That helps them to propagate, and to establish the economies of scope and network effects that make them work as businesses for the industrial ecosystem around them. The argument is roughly Simon's, except that it is the whole ecosystem that is stable, not just the Internet as a modular artifact.
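
A toy sketch can illustrate the point: a single lower layer that only moves opaque bytes can be shared by quite different upper layers. The Transport, ChatApp, and FileApp classes and their message formats are hypothetical, meant only to show how a shared layer supports purposes its designers never anticipated.

    class Transport:
        """Lower layer: delivers opaque bytes, indifferent to their meaning."""
        def __init__(self):
            self.inbox = []
        def send(self, payload: bytes) -> None:
            self.inbox.append(payload)           # stands in for a real network
        def receive(self) -> bytes:
            return self.inbox.pop(0)

    class ChatApp:
        """One upper layer: chat messages encoded over the shared transport."""
        def __init__(self, transport: Transport):
            self.transport = transport
        def say(self, user: str, text: str) -> None:
            self.transport.send(f"CHAT {user}: {text}".encode())

    class FileApp:
        """Another upper layer: the same transport, appropriated differently."""
        def __init__(self, transport: Transport):
            self.transport = transport
        def upload(self, name: str, data: bytes) -> None:
            self.transport.send(b"FILE " + name.encode() + b"\n" + data)

    net = Transport()
    ChatApp(net).say("alice", "hello")
    FileApp(net).upload("notes.txt", b"first draft")
    print([net.receive() for _ in range(2)])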

The principle of layering operates in a wide range of digital communication technologies, even if the bundling imperatives of companies like Microsoft provide a countervailing force. Layers have an important property: they are appropriable. Layers of nonproprietary public networks are especially appropriable (Feenberg 1995), and the appropriability of the Internet is a miracle in many ways and a curse in others. To analyze the social and technical dynamics that result, we need serious concepts. The traditional institutional studies of computing assumed that these matters were fought out in the architecture of computing in a given organization. This analysis works best in the context of bespoke organizational applications, especially ones that affect the resource allocations and political positions of diverse organizational players. It can also be generalized to applications that are relatively specific to the workings of a given institutional field, for example in the work by Kling, Fortuna, and King (2001) on emerging architectures for scholarly publishing in medicine. Standard political issues arise: collective action problems among the authors, professional society staffs pursuing their own interests instead of those of their constituencies, copyright interests and their influence on the legislative system, cultural norms that stigmatize change, and the practical difficulties facing would-be change-entrepreneurs. This is the institutional politics of technology-enabled institutional change. Meanwhile, new technologies mediate the political process in new ways, and the cycle closes completely.

The dynamics of these conflicts change when individual parties can go off and change the world on their own. When new systems cannot be built without large organizations, politics still has a central focus. In a world of appropriable layers, however, it is easier to create facts on the ground. The decision-making process changes. It is no longer a debate about how to design a single system but a competition among multiple systems. Of course, technical innovations have always led to competing designs. In a world of layers, however, the costs of joining the competition are reduced.

The point is not exactly that the institution is an arbitrary authority that has been circumvented. Somebody's new back-room invention will only change anything if it is aligned to some degree with real forces operating within the institution, such as the incentives that the researchers experience to publicize their research. In a sense the institution splits: its forces are channeled partly through the established mechanisms and partly through the renegade initiatives that someone has built on the Internet. That is when the politics starts, itself structured and organized within the framework of the institution -- after all, institutions are best understood not as zones of mindless consensus but as stabilized mechanisms for the conduct of disputes over second-order issues (Holm 1995).

The Internet, from this perspective, does not so much replace institutions as introduce new elements into the ongoing dynamics of controversy within them. The possibility of creating new technologies on top of open network layers simply represents a wider range of potential tactics within an existing and densely organized logic of controversy. Creating new technologies does create new facts on the ground, but it does not, of itself, create new institutions. The gap between technologies and institutions is large, and many institutional arrangements will need to be settled aside from the system architecture.

But this is where the impact of the Internet is greatest: in the relationship between information technologies and institutions. Historically, systems analysts have been taught to transcribe institutional orders directly onto the applications they build. One reason for this was organizational: an application may fit better into an organization if it is aligned with the existing ecology of practices. Of course, the process of appropriating a new technology routinely shifts an ecology of practices into a new and perhaps unanticipated equilibrium (Berg 1999, Orlikowski 2001). But precisely because the shifts are hard to anticipate, aligning the categorical structure of the technology with that of the existing institution may be almost the only practical strategy (cf. Bourdieu 1981). Another reason is pure conservatism: organizational practices evolve at a deeper level than anyone can explain or rationalize, and it is often better to transcribe these practices with a certain willful superficiality than to follow the advice of the "reengineering" revolutionaries (Hammer 1990) by trashing history and starting over.

A final reason pertains to social relations: computer systems by their nature entrain their users to the categorical structures that they embody, for example by requiring the users to swipe their magstripe cards every time they undertake particular sorts of transactions. A system whose categories map directly onto the categories of the institution will thereby, other things being equal, make people easier to control. And this last point is key: the Internet, by providing an extremely generic and appropriable layer that other systems can easily be built upon, disrupts this classic pattern of social control. The gap between technology and institution yawns, and into the gap pour a wide variety of institutional entrepreneurs, all trying to create their own facts on the ground.

Fortunately for the powers that be, these would-be entrepreneurs often fail to understand the opportunity that has been handed to them. For example, many of them overestimate what can be accomplished by writing code. Programmers build exciting new systems and then they watch in dismay as they fail to mesh with the institutional orders around them. The understanding is perhaps now dawning that architecture, while certainly a variety of politics, is by no means a substitute for politics.

Precisely through its explosive success, the Internet has disserved us, teaching us a false model of institutional change. The appropriability of layered public networks is indeed a new day in the dynamics of institutions, but it is not a revolution. It brings neither a discontinuous change nor a disembedded cyber world. It is, to the contrary, another chapter in the continuous renegotiation of social practices and relationships in the world we already have. The organized social conflict that is endemic to democracy has not gone away, and the lessons of democracy are as clear as they ever were.

References

Philip E. Agre, Yesterday's tomorrow, Times Literary Supplement, 3 July 1998, pages 3-4.

Philip E. Agre, Real-time politics: The Internet and the political process, The Information Society 18(5), 2002, pages 311-331.

Madeleine Akrich, The de-scription of technical objects, in Wiebe E. Bijker and John Law, eds, Shaping Technology/Building Society: Studies in Sociotechnical Change, Cambridge: MIT Press, 1992a.

Madeleine Akrich, Beyond social construction of technology: The shaping of people and things in the innovation process, in Meinholf Dierkes and Ute Hoffman, eds, New Technology at the Outset: Social Forces in the Shaping of Technological Innovations, Frankfurt: Campus Verlag, 1992b.

Chrisanthi Avgerou, Information Systems and Global Diversity, Oxford: Oxford University Press, 2002.

Stephen R. Barley, The alignment of technology and structure through roles and networks, Administrative Science Quarterly 35(1), 1990, pages 61-103.

Marc Berg, Accumulating and coordinating: Occasions for information technologies in medical work, Computer Supported Cooperative Work 8(4), 1999, pages 373-401.

Wiebe E. Bijker and John Law, eds, Shaping Technology/Building Society: Studies in Sociotechnical Change, Cambridge: MIT Press, 1992.

Grady Booch, Object Solutions: Managing the Object-Oriented Project, Menlo Park, CA: Addison-Wesley, 1996.

Pierre Bourdieu, Men and machines, in Karin Knorr-Cetina and Aaron V. Cicourel, eds, Advances in Social Theory and Methodology: Toward an Integration of Micro- and Macro-Sociologies, Boston: Routledge and Kegan Paul, 1981.

John Seely Brown and Paul Duguid, The Social Life of Information, Boston: Harvard Business School Press, 2000.

Lisa Bud-Frierman, ed, Information Acumen: The Understanding and Use of Knowledge in Modern Business, London: Routledge, 1994.

Michel Callon, ed, The Laws of the Markets, Oxford: Blackwell, 1998.

Noam Chomsky, Syntactic Structures, 's-Gravenhage: Mouton, 1957.

John R. Commons, The Economics of Collective Action, Madison: University of Wisconsin Press, 1970. Originally published in 1950.

James Cornford, The virtual university is ... the university made concrete?, Information, Communication, and Society 3(4), 2001, pages 508-525.

Ruth Schwartz Cowan, How the refrigerator got its hum, in Donald MacKenzie and Judy Wajcman, eds, The Social Shaping of Technology: How the Refrigerator Got Its Hum, Milton Keynes: Open University Press, 1985.

James N. Danziger, William H. Dutton, Rob Kling, and Kenneth L. Kraemer, Computers and Politics: High Technology in American Local Governments, New York: Columbia University Press, 1982.

Paul A. David, The dynamo and the computer: An historical perspective on the modern productivity paradox, American Economic Review 80(2), 1990, pages 355-361.

Meinholf Dierkes and Ute Hoffman, eds, New Technology at the Outset: Social Forces in the Shaping of Technological Innovations, Frankfurt: Campus Verlag, 1992.

William H. Dutton, ed, Society on the Line: Information Politics in the Digital Age, Oxford: Oxford University Press, 1999.

Andrew Feenberg, Critical Theory of Technology, New York: Oxford University Press, 1991.

Andrew Feenberg, Alternative Modernity: The Technical Turn in Philosophy and Social Theory, Berkeley: University of California Press, 1995.

Claude S. Fischer, America Calling: A Social History of the Telephone to 1940, Berkeley: University of California Press, 1992.

Neil Fligstein, The Architecture of Markets: An Economic Sociology of Twenty-First-Century Capitalist Societies, Princeton: Princeton University Press, 2001.

Lewis A. Friedland, Electronic democracy and the new citizenship, Media, Culture and Society 18(2), 1996, pages 185-212.

Peter Galison, Image and Logic: A Material Culture of Microphysics, Chicago: University of Chicago Press, 1997.

Oscar H. Gandy, Jr., The Panoptic Sort: A Political Economy of Personal Information, Boulder: Westview Press, 1993.

Robert E. Goodin, ed, The Theory of Institutional Design, Cambridge: Cambridge University Press, 1996.

Michael Hammer, Reengineering work: Don't automate, obliterate, Harvard Business Review 68(4), 1990, pages 104-112.

Rudy Hirschheim, Heinz K. Klein, and Kalle Lyytinen, Exploring the intellectual structures of information systems development: A social action theoretic analysis, Accounting, Management and Information Technologies 6(1-2), 1996, pages 1-64.

Petter Holm, The dynamics of institutionalization: Transformation processes in Norwegian fisheries, Administrative Science Quarterly 40(3), 1995, pages 398-422.

Rob Kling, Theoretical perspectives in social analysis of computerization, in Zenon W. Pylyshyn and Liam J. Bannon, eds, Perspectives on the Computer Revolution, second edition, Norwood, NJ: Ablex, 1989.

Rob Kling, Joanna Fortuna, and Adam King, The real stakes of virtual publishing: The transformation of E-Biomed into PubMed Central, Working Paper 01-03, Center for Social Informatics, Indiana University, 2001.

Jack Knight, Institutions and Social Conflict, Cambridge: Cambridge University Press, 1992.

Bruno Latour, Social theory and the study of computerized work sites, in Wanda Orlikowski, Geoff Walsham, Matthew R. Jones, and Janice I. DeGross, eds, Information Technology and Changes in Organizational Work, London: Chapman and Hall, 1996.

Martin Lea and Richard Giordano, Representations of the group and group processes in CSCW research: A case of premature closure?, in Geoffrey C. Bowker, Susan Leigh Star, William Turner, and Les Gasser, eds, Social Science, Technical Systems and Cooperative Work: Beyond the Great Divide, Mahwah, NJ: Erlbaum, 1997.

Lawrence Lessig, Code: And Other Laws of Cyberspace, New York: Basic Books, 1999.

Hugh Mackay, Chris Carne, and Paul Beynon-Davies, Reconfiguring the user: Using Rapid Application Development, Social Studies of Science 30(5), 2000, pages 737-757.

Donald MacKenzie and Judy Wajcman, eds, The Social Shaping of Technology: How the Refrigerator Got Its Hum, Milton Keynes: Open University Press, 1985.

Robin Mansell and Roger Silverstone, eds, Communication by Design: The Politics of Information and Communication Technologies, Oxford: Oxford University Press, 1996.

Robin Mansell and W. Edward Steinmueller, Mobilizing the Information Society: Strategies for Growth and Opportunity, Oxford: Oxford University Press, 2000.

James G. March and Johan P. Olsen, Rediscovering Institutions: The Organizational Basis of Politics, New York: Free Press, 1989.

Carolyn Marvin, When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century, New York: Oxford University Press, 1988.

David G. Messerschmitt, Networked Applications: A Guide to the New Computing Infrastructure, San Francisco: Morgan Kaufmann, 1999.

Daniel Miller and Don Slater, The Internet: An Ethnographic Approach, New York: New York University Press, 2000.

Douglass C. North, Institutions, Institutional Change, and Economic Performance, Cambridge: Cambridge University Press, 1990.

Wanda J. Orlikowski, Using technology and constituting structures: A practice lens for studying technology in organizations, Organization Science 11(4), 2000, pages 404-428.

Wanda J. Orlikowski, Improvising organizational transformation over time: A situated change perspective, in JoAnne Yates and John Van Maanen, eds, Information Technology and Organizational Transformation: History, Rhetoric, and Practice, Thousand Oaks, CA: Sage, 2001.

Walter W. Powell and Paul J. DiMaggio, eds, The New Institutionalism in Organizational Analysis, Chicago: University of Chicago Press, 1991.

Paschal Preston, Reshaping Communications: Technology, Information and Social Change, London: Sage, 2001.

Joel R. Reidenberg, Lex Informatica: The formulation of information policy rules through technology, Texas Law Review 76(3), 1998, pages 553-593.

Christian Reus-Smit, The constitutional structure of international society and the nature of fundamental institutions, International Organization 51(4), 1997, pages 555-589.

Wes Sharrock and Graham Button, Engineering investigations: Practical sociological reasoning in the work of engineers, in Geoffrey C. Bowker, Susan Leigh Star, William Turner, and Les Gasser, eds, Social Science, Technical Systems and Cooperative Work: Beyond the Great Divide, Mahwah, NJ: Erlbaum, 1997.

Herbert A. Simon, The Sciences of the Artificial, Cambridge: MIT Press, 1969.

Graeme Simsion, Data Modeling Essentials, second edition, Scottsdale, AZ: Coriolis, 2001.

Peter Steiner, untitled cartoon, The New Yorker, 5 July 1993, page 61.

Lucy A. Suchman and Randall H. Trigg, Artificial intelligence as craftwork, in Seth Chaiklin and Jean Lave, eds, Understanding Practice: Perspectives on Activity and Context, Cambridge: Cambridge University Press, 1993.

E. Burton Swanson and Neil C. Ramiller, The organizing vision in information systems innovation, Organization Science 8(5), 1997, pages 458-474.

Vasilis Theoharakis and Veronica Wong, Marking high-technology market evolution through the foci of market stories: The case of local area networks, Journal of Product Innovation Management 19, 2002, pages 400-411.

Thorstein Veblen, The Engineers and the Price System, New York: Viking, 1921.

Peter Weill and Marianne Broadbent, Leveraging the New Infrastructure: How Market Leaders Capitalize on Information Technology, Boston: Harvard Business School Press, 1998.

Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design, Norwood, NJ: Ablex, 1986.

Steve Woolgar, Configuring the user: The case of usability trials, in John Law, ed, A Sociology of Monsters: Essays on Power, Technology and Domination, London: Routledge, 1991.

Eleanor Wynn and James E. Katz, Hyperbole over cyberspace: Self-presentation and social boundaries in Internet home pages and discourse, The Information Society 13(4), 1997, pages 297-327.

David Zaret, Origins of Democratic Culture: Printing, Petitions, and the Public Sphere in Early-Modern England, Princeton: Princeton University Press, 2000.