Infrastructure and Institutional Change in the Networked University

Philip E. Agre
Department of Information Studies
University of California, Los Angeles
Los Angeles, California 90095-1520

Information, Communication, and Society 3(4), 2000, pages 494-507.

A revised version appears in William H. Dutton and Brian D. Loader, eds, Digital Academe: The New Media and Institutions of Higher Education and Learning, London: Routledge, 2002.

Please do not quote from this version, which may differ slightly from the version that appears in print.

5300 words.


Abstract. Many people believe that information technology will bring massive structural changes to the universities. This paper draws on concepts from both computer science and social theory to explore what these structural changes might be like. The point of departure is the observation that the interaction between information technology and market economics creates incentives to standardize the world. Standardization can be a force for good or evil, depending on how it is done, and this paper develops normative ideas about the relation between the forces of standardization and the places in which university teaching is done. Information technology allows these places to be more diverse than in the past, and a good rule of thumb is that the places in which learning occurs should be analogous in their structure and workings to the places in which the learned knowledge will be used. Universities can support this increased diversity of learning places with appropriate structural reforms, including decentralized governance and explicit attention to certain aspects of the university organization, such as media services and the career center, that have historically been marginalized.


The twentieth century taught us to be skeptical of revolutions. Proposals for revolutionary social change have invariably rested on superficial ideas about the world, and as a result they have changed both too much and too little, with tragic results. What, then, are we to make of the revolution that is supposedly being brought by networked information technology? After all, the case for an information revolution would seem straightforward. Information is everywhere, and every interface in the social world is defined in large measure by the ways in which people exchange information. Information technology liberates the content of these exchanges -- the bits, in contemporary shorthand -- from the physical media -- the atoms -- through which the exchanges have formerly taken place (e.g., Mitchell 1995, Negroponte 1995). This contrast between atoms and bits sounds like something out of Presocratic philosophy, with its attempts to reduce the universe to a small number of fundamental constituents, yet perhaps it is the archaic character of the atoms-and-bits metaphysics that gives it such a hold over our myth-starved imaginations. This is it: the one secret, the one key that turns the lock. Separate the bits from the atoms, the story goes, and we will be freed from the encumbrances of the physical world.

The promise and danger of this story is that it offers a straightforward blueprint for the reinvention of every institution of social life. And in perhaps no other case are the revolutionary prescriptions so profound as they are for the university. After all, the university is in some sense about information, the life of the mind, for which the physical world is beside the point. Dissolve the campus, dissolve the classroom, dissolve the library, and the information-exchanges of teaching and research will transcend the artificial constraints of geography. Students will be matched with the best teachers, scholarly communities will achieve the cosmopolitan ideals of the Enlightenment in real time, and the rigidities of the bricks-and-mortar, pencil-and-paper, chalk-and-talk age will be discredited once and for all. That is the promise, and the vast millennialist traditions of the West make such a promise easy to say, easy to understand, and easy to want.

The question, of course, is whether it is true. Much of it can be granted. Networked information technology certainly provides the university with both the opportunity and the necessity of renegotiating its linkages with every other institution in society: with secondary education, with industry, with government, with the media, and with the political system, among others. And the renegotiation of these linkages certainly holds the potential for pulling the university apart, as its many components become more closely connected with the great diversity of other social institutions with which they interact. All of this is possible. Before we can judge whether it is inevitable, and whether it is desirable, it will be necessary to consider the question in much greater depth: what, after all, is the university, how is it related to everything else in the world, and what is really at stake in the many attempts to reinvent it for a networked world?

I will take as my point of departure a single phenomenon: that information and communications technologies amplify incentives to standardize the world (Rochlin 1997: 212). These incentives need not be overwhelming, but they can be considerable. When participants in a market, for example, can communicate data over long distances, it becomes useful to standardize the goods that are bought and sold. That way, goods that might be available for sale at widely dispersed locations can readily be compared for their properties and prices. In a sense these kinds of standards are necessary for the very existence of a market; otherwise it would make no sense to refer to a given commodity -- that is, a given standardized type of commodity -- as having a price that emerges through the equilibrium of supply and demand. Likewise, information and communications technologies reward organizations that standardize their processes: the very distinction between line and staff emerges when staff work, which is overwhelmingly information work, can be applied to the administration of large amounts of standardized line work. Finally, political systems can deliberate more rationally when prospective rules apply in a uniform way to a society that is standardized in the relevant respects. Technological societies have not been unanimous about the virtues of this kind of standardization -- far from it. Yet at the same time, the processes of standardization that information and communication technologies encourage have been largely invisible to most ordinary people. After all, nothing is more esoteric or low-key than a standards organization, and hardly anyone is in a position to see the rising tide of standards as anything except an accumulation of small changes in their own local circumstances.

Not everything can or should be standardized, of course, and the practical work of building an information infrastructure consists in large part of separating the things to be standardized from the things that will remain diverse, and of reconciling the inevitable tensions that arise at the border between the two (cf. Star and Ruhleder 1996). Although these tensions will be part of any infrastructure project, they are especially acute in the case of the emerging generation of networked tools for supporting in fine detail the activities of research and teaching in higher education. One measure of these tensions is the conceptual complexity of the boundary between the network and the applications that the network supports. To say that the network's job is to move bits from point A to point B is simple enough, and research on that type of network can proceed without detailed knowledge of the uses to which it will be put. Things become more complicated, however, when the network is supposed to offer services with specified quality or other such guarantees. Then it becomes necessary to survey prospective applications for the guarantees they require, and to reckon the utility of those applications against the difficulty of supporting them. More complicated still are those service layers, also standardized as part of a ubiquitous infrastructure, that embody some model of the interactions and relationships that they are supposed to support, or of the worldly subject matters to which those interactions and relationships pertain. Examples include the services needed to build digital libraries or coordinate complex research collaborations. That kind of information infrastructure is easy to get wrong, given that nobody is likely to possess an adequate substantive model of the activities that the infrastructure is supposed to support. And an infrastructure that gets such things wrong can foreclose the possibilities that it was supposed to open up.

For this reason, a style of research has begun to mature that works systematically back and forth between network architecture and the technology and sociology of network applications. This back-and-forth movement is a way of learning in its own right, and it deserves to be spelled out and systematized to the degree that it can be. I want to draw out and develop its consequences in a particular area: its role in the evolution of the places in which university research and teaching are done. By "places" here I mean more than geographic coordinates; I want to identify places in terms of the patterns of activity that happen within them, and the social and conceptual systems within which those activities are organized (Curry 1996). A theater class, for example, is a place not simply on account of the room number, or even simply because of the architectural features that make it well-suited to theater work. Rather, a theater class is a place because of the social practices that routinely go on there: ways of talking, thinking, interacting, learning, changing, and so on. A physics lab is likewise a place for reasons beyond its equipment, and a mathematician's office is a place because of what mathematicians do with a chalkboard.

Viewed in this light, a university campus is an extraordinarily diverse assemblage of places -- it is really a sort of meta-place that provides all of those places with a common administrative apparatus and physical plant. Now turn this picture on its side, and sort the university world in terms of the various types of places that it contains. Put all of the world's theater classes in one bin, all of the physics labs in another bin, and likewise the mathematicians' offices, registrars' offices, parking offices, scheduling offices, network administration offices, and so on. Networked information technology creates incentives, or more accurately it amplifies existing incentives, to do two things: first, to standardize all of the places in the university world in which the same activities occur, and second, to interconnect those places so that eventually they merge, in some useful sense, into a single site of social practice. None of this will happen overnight, of course, and countervailing forces may prevent it from happening at all. But the effect is surely real, and it will be useful to draw out its consequences in the areas of research and teaching.

In the area of research the process is further along, and for a familiar reason: the institutions of research create powerful incentives for researchers to network themselves both professionally and technically with their peers in other universities and research organizations. These so-called invisible colleges (Crane 1972) are in many ways more visible to the researchers than the physical campuses where they organize their places of work. The research world thus has a matrix structure: on one axis are the campuses and on the other axis are the research communities (Alpert 1985). For all its efficiency, this system incorporates some tremendous tensions: the interconnections within the invisible colleges are subserved in large measure by an expensive infrastructure that must be paid for by the campus (Noam 1995). When the invisible colleges are loosely interconnected, the accounting is simple enough. But now the invisible colleges are becoming more and more real. This reality can be measured crudely through the bandwidth of data connection between labs in a given field, but more importantly it can be measured in institutional terms. Physicists have for some time organized long-term experimental projects that include hundreds of researchers at scores of universities, but now it is possible to speak without irony of a "center" that is located on several different campuses, for example the National Science Foundation-funded National Center for Geographic Information and Analysis at UC Santa Barbara, the University of Maine, and the State University of New York at Buffalo. As this example illustrates, research funding policies have helped increase the strength of disciplines relative to the universities that house their members (Alpert 1985: 253). And on a technical level, research communities have begun to develop network services that encode their own distinctive methods, practices, and concepts; and as broadband services become available this trend is accelerating. Some groups speak of these networked infrastructures as "collaboratories" (Finholt and Olson 1997, Wulf 1993), thereby recognizing, at least rhetorically, that the various places of research have been gathered into single functioning sites, and in some cases the interconnection is so detailed that globally distributed research groups must routinely negotiate the minutiae of their differing work schedules and interactional styles.

The scientific value of this development is not in serious doubt. The harder question is what it means for the university. It is a centrifugal force, binding each community's participants more closely to one another while enabling each community to evolve ever more rapidly away from the others. The effect is surely real. But the dynamics of information technology are more complicated, and more promising. Observe, first of all, that research communities themselves are not entirely discrete. They too have something of a matrix structure, whose axes are the subject matter being studied and the methods being employed to study it. Researchers who study the weather obviously have something in common, and they are accordingly grouped within the profession of meteorology. But researchers who engage in large-scale computer simulations also have something in common, regardless of what they are simulating, and these researchers, too, have an incentive to be networked with one another. Mathematical structures have long provided deep and unexpected points of contact for researchers from seemingly distant fields, as have the important theoretical writers in the humanities and social sciences, and now computational structures play this role as well. Information technology standards provide substantial economies of scale, and for this reason and others computers now provide otherwise very different researchers with something to talk about. Each field can set out to develop its own infrastructure in its own independent direction, but in the long run the economics of standards, including the need for compatibility in a matrix-organized research world, will create incentives to abstract out one generalizable service layer after another. In many cases, such as teleconferencing, the generalizable functionality will seem obvious, but even then a great deal can turn on fine details, as well as on the ability to integrate one service with a whole environment of others. And to the extent that a given field's collaboratory embodies that field's distinctive theoretical categories or epistemological commitments, painful negotiation may be required to abstract a common platform.

There follows an important lesson about the nature of standardization. In common language standardization suggests the imposition of an arbitrary uniformity. But the concept is more complex. Standards can be a force for either uniformity or diversity, depending on how they are designed. If information, as Bateson (1972: 453) says, consists of the differences that make a difference, the key is to preserve information by standardizing everything that does not make a difference. The point seems obvious enough when stated in this abstract form, but its consequences are pervasive. Thousands of universities have arisen and evolved independently of one another. They do experience many pressures toward isomorphism (Powell and DiMaggio 1991), but their practices diverge in innumerable ways as well. The great opportunity here lies in the efficiencies that are to be gained by standardizing and networking all of the practices, in accounting systems for example, whose differences make no important difference to the local circumstances of a given campus. And the dangers are equally great. One danger is that we will standardize the wrong things, as can happen if particular standards achieve critical mass on a particular type of campus and then spread through economic pressures to campuses where they do not belong (Agre 1999). This has already happened with the practice of giving letter grades, and it could happen with the admissions process (impelled by economies of scale in Web-based submission of application forms), the academic calendar and the hours at which classes start and end (impelled by the spread of synchronous classes across campuses), the internal structure of individual majors (impelled by competition for students who assemble their own study programs from individual courses "a la carte"), and the introductory courses in many subjects (impelled by all of these factors and more). Indeed, in each case the pressures for standardization are well under way, for example in the tendency of a few introductory textbooks to dominate the market in many fields.

Let us apply these lessons to the institutional problems of university teaching. By now everyone is familiar with a certain simple story: that classes will be conducted over the Internet, that students will pick and choose the classes that best suit them, that the resulting competition will improve the prevailing quality of instruction, and that the methods and resources that are employed in teaching will be determined not by ancient tradition but by the value that students place on the various course offerings as evidenced by their willingness to pay for them. For example, the long-standing question of the relative value of lectures and discussions will be definitively answered, and the answers may well surprise us, given the much greater opportunities for technology-supported economies of scale that lectures provide. Networked software can be interactive and finely modularized, can incorporate its own assessment mechanisms, and so on.

This picture has much to recommend it. But as it stands, it is far too simple. For one thing, it passes over the many activities within the university that do require physical proximity, such as theater and dance, laboratory classes requiring access to expensive equipment, athletics, and socializing. And consider the questions of governance that it passes over. It claims to address such questions squarely through the market mechanism. But this is only half of the story. The market in courses will be a complicated place because the courses themselves will be complicated; institutions and standards will be needed to enable vendors to advertise the courses, exchange money, perform accreditation, record credentials and grades, exchange and store the many kinds of data, represent which courses provide prerequisites for which others and which can be assembled into large degree programs, clear copyrights, and so on. Each of these standards represents a possible point of leverage, whether for a software vendor, an accrediting organization, a regulatory agency, or a university monopoly that might survive the same kind of shake-out that is producing monopolies in other areas of networked services. The revolutionary picture of a market in higher education, in other words, is quite capable of being true on the surface and false underneath.

The revolutionary picture also provides an inadequate account of the diversity of university education. On the surface, of course, that diversity would seem to be its whole point. Classes within the framework of a conventional university are constrained to uniform spaces and times, and uniform staffing arrangements, that the market picture would surely explode as each course sought its own economically optimal combination of arrangements. But universities fit together in more ways than the purely artificial. Teachers who also engage in research have incentives to remain up-to-date in their subject matter areas. Uniformity of courses enables students to combine different topics in a reasonably convenient way. And students must be able to follow trajectories through different kinds of courses, from freshman lecture courses, whose potential for considerable economies of scale is hardly in dispute, to advanced graduate education that is integrated with the institutions of research and ought to be much more so.

Perhaps, as the revolutionaries envision, competitors will come along that organize themselves to address one particular teaching model, so that the market becomes segmented by field, by level of instruction, or by other factors. Will it still be possible to get a coherent education in that world? Will all of those markets behave as they ought to, or will some of them collapse into monopoly through their increased economies of scale [1]? Will the universities lose their vast abilities to cross-subsidize various fields and services (Noam 1995), and will this necessarily be a good thing? In short, what do we really know about that world? We do not know much, I would suggest, because the simple market story does not incorporate any substantive understanding of what education really is beyond the industrial distribution metaphor in phrases such as "instructional delivery". And this is where more analysis is needed. Much could be accomplished, for example, by developing an analysis similar to that of the interconnected places of research activity (cf. Brown and Duguid 1998).

For education, however, another relationship between places is even more important: the relationship between the place in the university where teaching happens and the place in the world where the material being taught will be put into practice. This relationship varies, and it is a matter of conflict. The general idea is that effective teaching requires that the place of learning and the place of doing be homologous. Perhaps the two places cannot be identical, but the practiced patterns of activity, and not just the mental contents, must somehow carry over (Scribner and Cole 1981). Let us consider how, and let us imagine the potential role of information technology.

The various styles of teaching can be analyzed in terms of their particular method of approximating this homology. In apprenticeship, as in advanced graduate education, the two places are the same, except for the role of the student's advisor. In an internship they are the same as well, ideally with some additional concurrent forms of supervision and instruction back in the places of the university. A teaching laboratory is a stylized and sanitized analog of the working laboratory, and the scientific and mathematical story problem is somewhat homologous to the narrative practices by which problems are framed in real scientific and technical work, even if a great deal is different about the social relations, the equipment, and the information that is available. The liberal arts classroom is held to be homologous with the public sphere with its demands on one's spoken and written voice, and more abstractly with the critical-thinking dimension of any real-world place. Students are often skeptical of these claimed homologies, and they often call for greater reality, relevance, usefulness, and connection between the university's places and the places to which they aspire. It is a reasonable demand, even if opinions will differ about its most essential features. The core of the students' complaint, of course, is that faculty learned the subject matter in graduate school, so that their undergraduate classrooms are more closely homologous to the places of graduate school than to the places where the students themselves are headed.

The great opportunity, then, is to use networked information technology to connect the places of university teaching with other places in the world. We can imagine many such connections. Classes can open video links to those other places, or they can be conducted remotely while the students literally occupy those places. Working professionals can visit classes and jury the students' projects. Many such experiments are already under way [2]. More fundamentally, the same kinds of field-specific technologies that allow research disciplines to coalesce into real-time online collaboratories could support the creation of hybrid sites of teaching and working, the details of which would depend on the particularities of the field: necessary equipment, forms of interaction, genres of documents written and read, the relationship between explicit and tacit learning, the role of embodied skills, the various forms of legitimate peripheral participation (Lave and Wenger 1991) that both the classroom and workplace might afford, the relative role of verbal interaction and formal notation, the cooperative or solitary nature of the various activities, the degree of standardization of the subject matter itself, the degree of connection between teaching and research, and so on. The great danger is that these many potential dimensions of diversity would be artificially homogenized by the uniform application, within a uniform technical and administrative framework, of a simplistic metaphor such as "instructional delivery". If "instruction" is conceived as a homogeneous data-stuff to be delivered to couch potatoes then the central pedagogical opportunity of the technology will be altogether missed. On the other hand, another danger is that teaching, like research, will be pulled in a hundred directions as technologies are developed that respond to the diverse inherent properties of the various subject matters, and that the university will be torn to pieces as a result.

To reconcile these tensions, we need to make visible two aspects of the university that are too often neglected. The first of these might be called the informational substrate of the university: the wide range of generally uncoordinated services that provide informational support of one kind or another to the university's teaching mission [3]. These include the library, instructional networking and computing, the telephone system, media services, the campus bookstore and course reader service, the course catalog and schedule and many other paperwork-handling offices, and so on. Faculty pay little attention to these services because their relationship to teaching is uniform and changes slowly. With the rise of networked computing, however, and particularly with digital convergence, these services can support teaching in a greater variety of ways, so that teachers can adopt a wider range of pedagogical models. This will require that the informational substrate be coordinated so that representatives of this substrate organization can negotiate class by class what package of support they will provide, what pedagogical problems this support will actually solve, and what it will take to staff and administer the result [4]. The key is that actual pedagogical problems be solved, and that instructors have a clear contract that guards against literature classes turning into computer classes or physics classes being disrupted by telecommunications breakdowns.

The second area of the university that needs to be made visible pertains to professional skills. Universities have not systematically taught the non-substantive, process-oriented skills that both classes and employers increasingly require: teamwork, consensus-building, professional networking, library searching, event organizing, online conferencing, basic Web site construction, brainstorming and innovation processes, study skills, citation skills, and so on. When these skills are taught at all, they are rarely made into required courses and are usually fragmented among a variety of marginalized units such as the library and the career center. In order for faculty to conduct classes according to a greater diversity of technology-enabled methods that connect the places of learning and work, however, they will need clear contracts about the relevant professional skills that students will bring with them.

Once these two dimensions of the university are made visible, of course, it will be possible to ask how they should be organized. Once again economies can probably be achieved by merging some functions across campus boundaries, and once again a leading principle will be to standardize only the differences between universities that make no difference. But which differences are these? In the current system, each campus can develop its own distinctive philosophy and culture, and it is most important that we not accidentally standardize these differences away by standardizing the details of the informational substrate and professional skills training where those distinctive approaches live. We know little about these matters, for the simple reason that we have never been compelled to find out. Now that we are compelled to find out, we are probably going to discover that we have not been supporting our educational philosophies to anything like the degree that we would want, and the shock effect of new technology will be salutary if we decide to do something about this.

In the end, one of the most important outcomes of the process will pertain to the students' own personal and professional development. College has long been understood, in the United States anyway, as a place for students to discover themselves. Students do not automatically know who they are, and they find out who they are by moving back and forth between trying to explain it to themselves and others -- a professional skill that should be taught from high school if not before -- and joining into the activities of one professional community after another through the relatively safe and convenient proxy places of the classroom. Students who have some idea what they want to do with their lives will make better choices, they will know what they want from a given class, and the instructor can work backward from the students' life plans to what the course could usefully be. The newly reinvented university should be able to facilitate this kind of self-discovery, and should not undermine it by fragmenting itself in a hundred incompatible directions. Managing this tension will be a central challenge.

Let us draw some conclusions about governance in the putatively new world of the networked university. First, there is no simple thing called decentralization. Decentralization requires a framework of standards, and standards require a center. Centralization can thus happen by surprise, and decisions about such esoteric matters as desktop software can erupt into controversies about control over the future development of the places of teaching and learning.

Second, it is crucial to design technology and governance at the same time. For example, the rise of multimedia courseware has led some university administrations to propose diluting instructors' traditional copyrights in their own teaching materials: if universities make substantial investments in multimedia courseware production, they figure, then universities should own the results (Scott 1998). But these high costs may simply reflect an early, transitional phase. It would be disastrous to change copyright rules if multimedia courseware development tools are going to become as routine and cheap as the desktop computers and library tools that faculty employ in designing their courses now. If any elements of multimedia production are not likely to become cheaper with time, then analysis should identify them now, so that they are not institutionalized without adequate reflection.

Third, whatever endpoint we imagine for the networked university, the university community will experience major problems of both technology and governance in getting from here to there. Some of the dangers derive from network effects (Shapiro and Varian 1998): many potential innovations will be impractical until a critical mass of campuses are using them, and once a critical mass is achieved, the benefits of joining the club are likely to overwhelm any reasons to pioneer an alternative direction. As a result of this dynamic, the choices made by early adopters can be fateful for everyone else. The first universities and research communities to adopt new technologies and institutional forms will therefore bear a great responsibility to those who follow. For example, if new technologies of research coordination are first developed by high-energy experimental physicists, then those technologies may be adapted to the unusual attributes of research in that field: very large projects with the attendant bureaucracy, high funding levels and the attendant politics, high levels of social cohesion, a deeply developed consensus about theoretical vocabulary and research assessment, and so on. Most fields do not fit this pattern, and yet network effects may cause technical and process standards from high-energy physics to spread to other fields whether they are appropriate or not. The World Wide Web, originally developed at CERN, does provide a reassuring precedent, but the next standard to spread from the high-energy physics community may not generalize as well.

Finally, I want to emphasize that my argument has been essentially normative. It has also been incomplete. The forces that encourage higher education to standardize its technologies and institutions interact with other forces that may push in other directions. Information technology is uniquely malleable, and it is easily shaped by the ideas and interests of whatever institutional coalition has the wherewithal to guide the development and implementation of new systems (Danziger, Dutton, Kling, and Kraemer 1982). The design space is thus political as well as technical, and the future is not foretold. Serious conceptions of education might emerge from many quarters, and several constituencies -- faculty senates and professional societies, for example -- have at least the formal means of organizing around them. A predictive theory of the politics, however, would require more concepts than I have developed here.

Will we have a revolution in the university? I hope not. Revolutions are destructive. By caricaturing the old and idealizing the new, they falsely posit an absolute discontinuity between the past and the future. The twentieth century saw enough of that, and even an elementary analysis has demonstrated that things are more complicated. In fact, information technology creates little that is new. It can amplify existing forces, it can increase efficiency by collapsing meaningless differences, it can decentralize some things, and it can centralize others. It does not automatically dissolve power, and it does not eliminate the need for governance. Indeed, if issues of power and governance are neglected, it can lead to catastrophe. It is both a product and an instrument of human choice, and it leaves the burdens and dangers of choice squarely in human hands. If universities are to remain a foundation of a democratic society, then it will be necessary to make those choices wisely.


[1] On economies of scale in higher education see Goodlad (1983) and Maynard (1971).

[2] See, for example, the project at Newcastle University to use online discussion groups to connect groups of medical students, including those doing their clinical rotations, to one another and to the university (Hammond et al. 1997?). See also <>.

[3] This notion is related to, but different from, Bates' (1999) concept of the "invisible substrate" of the field of information science, which consists in its reflexive, meta-level relationship to all of the other fields of study.

[4] A more limited version of this proposal appears in Chapter 4 of the Follett Report (Joint Funding Council's Libraries Review 1993), which has influenced the development of university information services in the UK. For a note on subsequent experience see Rusbridge (1998). For the US experience with "integrated information centers", see the four papers from the University of Minnesota in Bonzi (1993).


I appreciate helpful comments by Rob Kling, Richard MacKinnon, Cosma Shalizi, and an anonymous referee. This paper was originally presented at CENIC'99 (Corporation for Education Network Initiatives in California), Monterey, May 1999.


Philip E. Agre, Information technology in higher education: The "global academic village" and intellectual standardization, On the Horizon 7(5), 1999.

Daniel Alpert, Performance and paralysis: The organizational context of the American research university, Journal of Higher Education 56(3), 1985, pages 241-281.

Marcia Bates, The invisible substrate of information science, Journal of the American Society for Information Science 50(12), 1999, pages 1185-1205.

Gregory Bateson, Steps to an Ecology of Mind, New York: Ballantine, 1972.

Susan Bonzi, ed, Proceedings of the 56th Annual Meeting of the American Society for Information Science, Columbus, OH, October 1993.

John Seely Brown and Paul Duguid, Universities in the digital age, in Brian L. Hawkins and Patricia Battin, eds, The Mirage of Continuity: Reconfiguring Academic Resources for the 21st Century, Washington: Council on Library and Information Resources, 1998.

Diana Crane, Invisible Colleges: Diffusion of Knowledge in Scientific Communities, University of Chicago Press, 1972.

Michael R. Curry, The Work in the World: Geographical Practice and the Written Word, Minneapolis: University of Minnesota Press, 1996.

James N. Danziger, William H. Dutton, Rob Kling and Kenneth L. Kraemer, Computers and Politics: High Technology in American Local Governments, New York: Columbia University Press, 1982.

Thomas A. Finholt and Gary M. Olson, From laboratories to collaboratories: A new organizational form for scientific collaboration, Psychological Science 8(1), 1997, pages 28-37.

Sinclair Goodlad, ed, Economies of Scale in Higher Education, Guildford, UK: Society for Research into Higher Education, 1983.

Geoff Hammond, Megan Quentin-Baxter, Paul Drummond, Terry Brown and Reg Jordan, Student support and tutoring: A role for simple applications of computer mediated communication (CMC), Faculty of Medicine, University of Newcastle, undated (1997?). Available on the Web at <>.

Joint Funding Council's Libraries Review, Report (The Follett Report), Bristol, UK: Higher Education Funding Council for England, 1993. Also available at <>.

Jean Lave and Etienne Wenger, Situated Learning: Legitimate Peripheral Participation, Cambridge University Press, 1991.

James Maynard, Some Microeconomics of Higher Education: Economies of Scale, Lincoln, University of Nebraska Press, 1971.

William J. Mitchell, City of Bits: Space, Place, and the Infobahn, Cambridge: MIT Press, 1995.

Nicholas Negroponte, Being Digital, New York: Knopf, 1995.

Eli M. Noam, Electronics and the dim future of the university, Science 270, 13 October 1995, pages 247-249.

Walter W. Powell and Paul J. DiMaggio, eds, The New Institutionalism in Organizational Analysis, Chicago: University of Chicago Press, 1991.

Gene I. Rochlin, Trapped in The Net: The Unanticipated Consequences of Computerization, Princeton University Press, 1997.

Chris Rusbridge, Towards the hybrid library, D-Lib Magazine, July/August 1998.

M. M. Scott, Intellectual property rights: A ticking time bomb in academia, Academe 84(3), 1998, pages 22-26.

Sylvia Scribner and Michael Cole, The Psychology of Literacy, Cambridge: Harvard University Press, 1981.

Carl Shapiro and Hal Varian, Information Rules: A Strategic Guide to the Network Economy, Boston: Harvard Business School Press, 1998.

Susan Leigh Star and Karen Ruhleder, Steps toward an ecology of infrastructure: Design and access for large information spaces, Information Systems Research 7(1), 1996, pages 111-134.

William A. Wulf, The collaboratory opportunity, Science, 13 August 1993, pages 854-855.