Some more notes.

As a periodic reminder, information about unsubscribing from RRE can
be found on the Web at:

That same page includes links to Web-based archives of the list and
to most of the longer essays by other people that I have circulated.

In addition, most of the responses to my query about running Emacs on
a personal computer can be found at:

This will also be of interest to anyone who has a Macintosh, Windows,
NT, or Linux machine and who might be wondering why I keep wondering
how life could be possible without Emacs.

Having laid out my business plan for, I was shocked
to discover that somebody already owns that domain name.  I don't
know why, given that just about every English word is a domain name
by now.  So I suppose I should issue a disclaimer that I have nothing
to do with the person who owns  (I had checked for
a Web page, but had not thought to check the host
tables.)  I'm sure that the domain is worth $$$$$$,
but I'll have my lawyers talk to their lawyers and we'll sort it out.

I am unimpressed with most of the technology-driven scenarios for
the future of higher education, and the main reason is that they
are not radical enough.  Most of them incorporate superficial ideas
about education.  Part of the problem is the widespread idea that
technology should produce something radically new.  It is much better,
in my view, to use technology to deepen the existing values of the
institution.  When institutions are stable over a long period, nobody
is challenged to think deeply about their meaning.  As a result, the
practice of articulating the deep values of the university is largely
left to the provosts who give graduation speeches.  It's a nice spring
day, the faculty and students have their medieval regalia on, the
parents are absolutely thrilled like you can't believe, the provost
stands up and speaks in a ritual manner for just long enough to keep
everyone from glazing over, the assistant deans start handing out
the diplomas, the provost sneaks off to the next of the day's ten
meetings, and nobody thinks about the matter again until next year.

Nobody is acting in bad faith here.  Everybody is attending to the
business that is arising for them in the world as it is.  But now
things are changing.  New opportunities are arising.  It is becoming
imaginable to organize the university in a wide variety of unfamiliar
ways.  All sorts of people are stepping forward, and all of them
have their own ideas about the university.  Some
of them are ideologists; some are crackpots; some are single-issue
activists who have no idea what it means to establish or run a real
institution.  Most of them have narrow ideas of what the university
is about.  Some of them are driven mainly by negativity, such as the
people who for some weird reason dislike college professors so much
that they will not even admit that such a thing called "teaching"
should even exist in the new world that they are planning for us.
And then there are serious ideas, thoughtful fragments of ideas that
cut through the noise and actually say something.  Faculty are by and
large uninvolved in this debate.  The existing institutions provide
them with a stable framework for doing what they are good at doing.
The institutions are very good at focusing their attention on issues
and problems that are defined in certain terms.  The institutions
are terribly efficient that way, so efficient that nobody has time to
change them.

So the field tends to be left open to people -- some of them acting
in good faith and some of them not -- who don't know much about the
reality of college, who don't know the students, and who don't teach
in classrooms.  The idea that technology radically changes things is
misleading when it directs our attention away from the actual sites
of teaching practice, and from what can be learned in those sites by
reflecting deeply upon the practice that goes on in them.  Shallow
reflections on the existing practice will give us shallow technologies
that simply pave the cow paths, doing just what we're doing now only
with a lot of extraneous hardware.  Deeper reflections, however, can
remind us of why we're doing this in the first place, and tell us how
we can change both the technology and the institutions to put back in
motion the same drives that led us to do things this way as the best
available compromise within the limits of the old system.  Once upon
a time this idea was called conservatism, back before conservatism
devolved into nothing more than an excuse for being mean.  What I
want, then, is to explore a radically conservative approach to the
technology and institutions of higher education.

I want to draw out a few concepts that, in my experience, are central
to whatever real learning takes place in college classrooms.  I am
not happy with the prevailing concepts, which are polarized around an
opposition between understanding and doing -- for example the conflict
between constructivists on the one hand, who try to rig up experiences
in which students supposedly rediscover things that it took human
society tens of thousands of years to discover, and vocationalists
on the other hand, who treat every skill as an industrial process to
be learned by rote.  Students should learn to do some real thing, and
understanding can then develop as a reflection upon their iterative
attempts to do it well.  Although many concepts are useful for talking
about this process, I will nominate five, four C's and a G: concepts,
cases, community, critical, and genre.  Students learn, I submit,
by applying a set of concepts to a series of cases, practicing the
skill of parsing each case analytically for what a given conceptual
framework would have us find in it, doing so within both a local
community of learners and a global community of practice, acquiring
in particular the skills of parsing these situations reflexively
with a repertoire of critical concepts, and learning to produce good
examples of the particular genres of documents (in whatever medium)
that are a traditional part of that particular community's practice.

An obvious example of this system is in law, where students practice
applying certain bodies of law to a series of cases until they can
parse a case and write a case analysis in their sleep.  But examples
are found everywhere.  In physics one learns one system of concepts
after another, and then one applies the concepts to a battery of
word problems ("a mass M traveling with velocity v on a frictionless
surface is struck by a mass M' traveling with velocity v' ..."),
the result being a worked-out solution.  The categories of grammar
(noun, verb, adjective) are another conceptual framework, as are the
categories of logic and those of rhetoric -- or at least any given
grammatical, logical, or rhetorical system, of which there might be
many.  Robert's Rules of Order are prescriptive rules, but before
that they are a conceptual framework for the conduct and reporting
of meetings.  In each case one acquires one set of concepts after
another, practices applying them to cases, learns to produce documents
in a particular genre, and thereby joins a certain community.  In each
case critical reflection is typically in short supply, but in each
case that deficit can and should be fixed.
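To make the physics example concrete: once the conceptual framework
(conservation of momentum and of kinetic energy) is applied, the word
problem above reduces to a mechanical calculation.  Here is a minimal
sketch in Python, assuming the collision is elastic and
one-dimensional (assumptions of mine, not stated in the problem):

```python
# One-dimensional elastic collision on a frictionless surface.
# Applying the conceptual framework (conservation of momentum and
# conservation of kinetic energy) yields closed-form results.

def elastic_collision(m1, v1, m2, v2):
    """Return the post-collision velocities (v1', v2')."""
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# Equal masses simply exchange velocities:
print(elastic_collision(1.0, 3.0, 1.0, 0.0))  # -> (0.0, 3.0)
```

The worked-out solution is the genre; producing a hundred of these
against the same two conservation laws is what makes the framework
second nature.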

Now the kind of teaching and learning that I have described is not in
fashion.  Conservatives oppose it because it involves understanding,
which conservatives are apparently against.  I wouldn't have ascribed
such an absurd view to them or anyone, except that they themselves
affirm it quite openly every time they organize protests against the
use of meaningful word problems in grade school math classes.  They
actually use slogans such as "fuzzy math" to name the idea of applying
mathematical concepts to real examples, as opposed to limiting math
teaching strictly to the usual algorithms for arithmetic calculations.
Go figure.  Liberals, for their part, will be appalled at the high
degree of structure in my conception of education.  They don't like
structure because it is oppressive, because it inhibits the natural
creativity and stifles the natural love of learning that all real
students supposedly bring with them to the classroom.  They will
incorrectly interpret my prescriptions as a set of mechanical rules.
But mechanical rules don't work.  What's needed instead is a kind
of conceptual scaffolding for the construction of self-sufficient
understandings of any given case.

Conservatives want right answers above everything, and liberals want
freedom above everything.  Both views are misguided.  Learning is
really a matter of joining a community by learning its language and
practices.  And we can use technology to close the institutional
and geographic gap between newcomers and oldtimers in communities
of practice by using concepts and cases as a go-between.  Those who
know the literature on these topics can reconstruct many of my views
by combining three very disparate theories of knowledge.  One is the
communities of practice theory that I mentioned here before, and which
has been applied to the technology-enabled reconstruction of higher
education by John Seely Brown and Paul Duguid; see their article,
"Universities in the digital age", in Brian L. Hawkins and Patricia
Battin, eds, The Mirage of Continuity: Reconfiguring Academic
Resources for the 21st Century (Council on Library and Information
Resources, 1998).  The second is Vygotsky's theory of learning through
the symbolically mediated internalization of the relationships and
practices of a culture; for a comprehensible introduction see the
works of James W. Wertsch.

The other theory that I have in mind is Doug Lenat's brilliantly
strange artificial intelligence theory of heuretics.  Having been
trained in AI, and having worked my way through the fine details
of all of the respectable work in that field up through my departure
from it several years ago, I've decided that, at the end of the day,
the best work in AI is the most cracked.  That means Marvin Minsky's
bizarre work of genius, "The Society of Mind", and Doug Lenat's work
on a program called Eurisko.  Doug is better known for his grandiose
plan to create an intelligent digital encyclopedia called CYC, but I
have in mind his earlier, more radical experiments.  These experiments
will not be familiar, so I will explain the relevant aspects of them.
For the full details, see Lenat's articles in the journal Artificial
Intelligence 21(1), 1983, entitled "Theory formation by heuristic
search" and "EURISKO: A program that learns new heuristics and domain
concepts".  I should hasten to say that I do not think the world has
much, if any, place for intelligent computers.  That is not why I am
interested in Lenat's work.  Rather, I am interested in his radical
extrapolation of some basic computer science representational ideas.

Start with the conceptual framework that I mentioned above: concepts,
cases, community, critical thinking, and genres.  Not just those
words, but the concepts that I explained.  Then notice that what
I presented is a conceptual framework for describing conceptual
frameworks.  I suggested how all fields of study might be described --
not exhaustively to be sure, but usefully -- using the same conceptual
framework.  This kind of reflexivity is what Lenat's theory is about.
To see its value, consider what a good conceptual framework gives
you.  Once you have analyzed a hundred cases using the same conceptual
framework, you can get good at it.  You can start to notice patterns.
You can use these patterns to prepare a taxonomy of the cases.  That
taxonomy will itself be a new conceptual framework that relates in a
well-defined way to the old one.  You can invent concepts that group
the cases in new ways.  You can notice cases that seem like outliers,
and you can pay special attention to them.  You can compare and
contrast the cases.  You can develop analogies among the cases, and
you can use the analogies heuristically by taking observations from
one case and asking what they correspond to in another case -- maybe
they point out something you hadn't noticed before.  The points of
comparison and contrast that you notice can then be generalized and
used to enrich your conceptual framework.
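The taxonomy-building step described above can be caricatured in a
few lines of Python.  Everything here (the case names, the concept
tags) is invented for illustration; the point is only that indexing
cases by shared concepts yields a new grouping that relates in a
well-defined way to the old framework:

```python
from collections import defaultdict

# Hypothetical cases, each parsed with the same conceptual framework
# and tagged with the concepts the framework finds in it.
cases = {
    "case_a": {"negligence", "duty_of_care"},
    "case_b": {"negligence", "causation"},
    "case_c": {"duty_of_care", "causation"},
}

# Invert the index: each concept now groups the cases it appears in.
# This inverted index is itself a rudimentary taxonomy of the cases.
taxonomy = defaultdict(set)
for case, concepts in cases.items():
    for concept in concepts:
        taxonomy[concept].add(case)

print(sorted(taxonomy["negligence"]))  # -> ['case_a', 'case_b']
```

Outliers, for instance, fall out immediately: a case whose tag set
overlaps with no other case's is a candidate for special attention.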

Those are some of the benefits that "you", an individual, can get
from a conceptual framework, but in fact conceptual frameworks live
in communities -- the community of lawyers, for example, or physicists.
Once a community has formed around a conceptual framework then a
consensus can develop about what a good analysis of a case is like,
and which are the important cases, and which are the useful concepts.
Subcommunities can then develop around further elaborations of the
concepts.  Community members can prepare curriculum materials using
the conceptual framework, and they feel confident of knowing what
a right answer is.  No longer need we pretend that the right answers
are universally and objectively right; rather they are the answers
that the community regards as right, provisionally and defeasibly,
based on its good faith attempt to articulate what works well in its
own engagement with real cases in real activities -- based, that is,
in its practice.

My radical proposal is that every community of practice in the world make
its conceptual frameworks explicit, and that we use the Internet to
organize these communities around a digital library of XML documents
that store a vast archive of all of the cases that the community's
members have analyzed.  We would need a document type definition for
each genre, and no doubt we would need specialized indexing and search
tools for each community as well.  But the basic conceptual framework
of conceptual frameworks would provide a template from which community
specific tools could develop.  In fact, a community could develop
around the study of communities and their conceptual frameworks,
and another related community could develop around the construction
of technology-enabled institutions to support communities and their
practices.  Members of this community might develop new conceptual
frameworks for describing the workings of various communities, and
then, having developed a corpus of case studies, they might
compare and contrast the workings of the various communities.  Best
practices and the methods for their identification would themselves
be a conceptual framework and the basis for a practitioner community.
Consider the implications of that: a generalized, domain-independent
framework for identifying best practices, with a library of cases that
accumulates over time.  The best-practices community could develop
taxonomies of best practices, heuristic devices for discovering best
practices, analogies among best practices in different fields, ways of
evaluating how good the best practices really are, and above all best
practices for the identification of best practices.  That, weird as
it is, is the power of the reflexive approach.  In doing all of this,
we would make explicit on a grand scale the modularity of knowledge.
The digital library of cases would be an extremely valuable resource
for both practitioners and learners, and each community would develop
its own peer review procedures for admitting case analyses into their
various case collections.
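To illustrate (and nothing more: the element names and attributes
below are my own invention, not a real document type definition), a
case analysis in one hypothetical community's genre might be stored
and parsed like this, using only standard Python tools:

```python
import xml.etree.ElementTree as ET

# A hypothetical document genre for a community's case analyses.
# The tag vocabulary below stands in for the document type
# definition that each genre would need.
doc = """
<case-analysis community="contract-law" genre="case-brief">
  <concept>offer</concept>
  <concept>acceptance</concept>
  <facts>A mailed acceptance crossed a mailed revocation.</facts>
  <analysis>Under the mailbox rule, acceptance was effective on
  dispatch, so a contract was formed.</analysis>
</case-analysis>
"""

root = ET.fromstring(doc)
# Indexing tools could pull out the concepts each case was parsed with:
concepts = [c.text for c in root.findall("concept")]
print(root.get("community"), concepts)
```

The community and genre attributes are what would let the shared
digital library route each document to the right collection and the
right peer-review procedure.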

The analysis of various cases provides insight into the nature of
conceptual frameworks.  Consider, for example, the simple conceptual
frameworks for which management consultants are famous, most of which
seem to sort the possible cases into simple two-by-two matrices.
Or consider another conceptual framework, the observation-orientation-
decision-action (OODA) loop that was developed for fighter pilots by
the late John Boyd, a legendary Air Force officer who wrote the book
on the subject.
(See <>.)  Newt Gingrich
later applied that same conceptual framework to the day-to-day conduct
of politics.  Or, finally, consider the simple conceptual frameworks
that are to be found in any textbook of public relations, such as
the methods for "identifying organizational linkages to publics" in
chapter 7 of James E. Grunig and Todd Hunt, Managing Public Relations
(Holt, Rinehart and Winston, 1984).  In each case, the conceptual
framework itself does not look like much in abstraction.  None of
them offers the intrinsic satisfactions of Galois theory, existential
phenomenology, or neoclassical economics.  Rather, their value lies
in the richness of a set of cases that one accumulates over time by
applying the same conceptual framework over and over, then noticing
the patterns.  It is a deep fact about the human mind that we have
ideas by noticing analogies, and we notice analogies by putting
the same name on different things.  The major purpose of conceptual
frameworks is precisely to put the same names on a long series of
different things.  Every case we parse becomes another source of
potential precedents when considering new cases that come along.
This kind of learning by connecting precedents is part of Lenat's
theory, and it is also central to Roger Schank's theories of learning.
For those who follow such things, it differs from classical
psychological theories of "transfer" in that it describes the parsing
of cases as an active process that arises through the internalization
of a social activity that is mediated by a cultural construct, in the
form of the conceptual framework.  This is Vygotsky's version of the
theory.  Until the activity of parsing cases is internalized, it is
not at all automatic.

My scenario for the future of higher education really becomes radical
as we start to enumerate some of the conceptual frameworks that
will be required -- each with its own community, its own genres for
representing case studies, its own section in the digital library,
and so on.  We will need conceptual frameworks for process skills
like organizing projects and building social networks.  We will need
conceptual frameworks for critical reading and thinking skills like
parsing the logic of an argument, finding the metaphors in a text,
reconstructing the history of an idea, and analyzing the ways in which
a given practice is located in the institutions around it.  Some of
these conceptual frameworks and their associated skills are obviously
more advanced than others, but the community around a given conceptual
framework is free to establish a list of conceptual frameworks whose
mastery is a prerequisite to entering their community.  Graduating
from this kind of training is no longer like leaving something; it
is more like joining something, and everyone will have a collection
of membership cards for the communities that they have successfully
joined.  Every community will be self-organizing, issuing its own
membership cards by its own rules, and anybody who comes up with
a new system of concepts that they like will be free to create a
new community anytime they want.  All of the necessary software to set
up an online community infrastructure will be readily available and
effortlessly interoperable with all of the existing infrastructures.
Communities can undergo schisms if different subcommunities feel the
need to evolve their concepts in different directions.  The whole
system will be like the scientific community, except that it will
include the whole of higher education, and indeed a large proportion
of all of the activities that take place in every sector of society.
Barriers between learning and doing will fall.  Students will get
a clear line of sight to the communities they want to join, and
practicing community members can stay involved in the university by
(for example) jurying students' case reports.  Knowledge can multiply
and accumulate, and learning will accelerate as students internalize
the analytical practices they will need.  The world will prosper.

Now that is a radical scenario for the future of the university.
It makes clear just how thin so many of its strident competitors
are: little more than superficial ideas about the market applied to
superficial ideas about learning.  By getting in underneath the whole
structure of human knowledge and making it visible to many varieties
of reflexive thinking, this new proposal promises to dramatically
amplify collective intelligence.  But do I really believe all this
radicalism?  It's complicated.  The basic idea is to embrace and
amplify the logic of ontological standardization that I have described
elsewhere.  If the proposal works right, it's because it standardizes
the right things.  And as computer people well know, standardizing
the right things is a powerful tool that enables society to diversify
everything else.  But we can't really evaluate the proposal as it
stands without a more worked-out conception of institutional issues.
Brown and Duguid, for example, proposed putting the teaching and
evaluation functions in different organizations, but I am strongly
opposed to any centralization of the evaluation function because
whoever evaluates students and thus hands them their credentials has
complete power over everyone else.  Once a given credential becomes
established as a standard, for example because employers build their
processes around the assumption that their new hires have it, it
will be hard to replace.  Network effects are crucial.  Communities
of practice will probably exhibit network effects, in that larger
communities will attract new members more easily than small ones.  But
these effects may be limited beyond a certain point because different
conceptual frameworks really are useful in different environments.
Likewise, the difference between a community with a library of 100,000
case studies and a community with a library of 100,000,000 case studies
is probably not so great for most purposes.

A more serious issue pertains to the internal structure of communities
of practice.  Right now, the university world has a matrix structure.
There is only one field of physics, but the practice of this field
is organized and taught at many hundreds or thousands of universities.
Each university can develop its own approach, and a mania that
grows at one university cannot easily spread to others.  Likewise, a
spread-out network of physicists that develops a distinctive approach
to their subject cannot easily be subdued by a single university,
since all of the members of the network can reinforce one another,
looking around as individuals for friendly environments while still
maintaining their collective identity.  These kinds of decentralization
are conditions for the self-organization of the current system: the
system is less likely to fall into a single overwhelmingly dominant
approach without a good reason.  If we lose the matrix structure of
the university world, then we will need new mechanisms to preserve
that loosely integrated self-organization.  I think that this is
another weakness of Brown and Duguid's proposal, as well as of my own.

Nonetheless, despite the wild-eyed nature of my proposal, I hope that
it makes clear the strategy of radical conservatism that I sketched at
the outset.  My intention is not to throw out the university that we
know, but rather to start over from a rational reconstruction of what
is important about the university.  In fact I have only picked out
one element from a larger picture.  I also think that the university's
existing role as a manufacturer of social networks will be increasingly
important in the future as well, and any radically conservative design
for the university should use technology to amplify that role too.
The university should have permanent seminars.  Once you enter the
university, you should never have to leave.  As you move from newcomer
to oldtimer in your various communities of practice, you can play
more and better roles in the community, and these roles will keep your
social networks up to date, as well as your intellect.  The university
will truly be the hub for the intellectual lives of a whole society,
or at least those people in society who care to have that kind of
relationship to a university.  That very possibility is perhaps the
greatest reason to ensure that we retain a diversity of university
institutions.  The dangers of an intellectual monoculture are great,
and if we submit ourselves to simplistic market models then the
economics of information may well turn us into a monoculture in short
order.  I am not opposed to markets, just to the pathologies that they
can get themselves into when they are designed badly, especially where
information is concerned.  I see little or no comprehension of those
potential pathologies in the many facile proposals for a market in
higher education, and I wish I did.

Ultimately, I hope that we will see a diversity of models.  There are
two organized ways of generating knowledge in society: the market and
peer review.  I have started from peer review because it provides a
much more sophisticated model of the relationship between knowledge
and social processes, and because it is more congruent with the values
of openness.  But market mechanisms will probably have a variety of
roles to play in the various communities.  If it is still hard to
imagine what that variety of roles might be like, that is because we
still understand very little about the relationship between the peer
review and market models.  That relationship is endlessly negotiated
in many contexts -- in the private sector just as much as within the
university.  Every time the patent lawyers and venture capitalists
start distorting the principle of open publication in the university,
something like the open source software model irrupts in the middle
of the market.  It's a complicated dance, and we mustn't fall into a
priori ideas about it, such as the ones that we see in proposals for
marketplaces in standardized learning modules.  Knowledge is richer
and more complex than that, and I hope that my scenario gives some
hint of what a proper engagement with that richness might be like.

In my last set of notes, I snorted at the ideological people who
promote hatred of college professors.  That does not mean, however,
that I uncritically embrace everything that happens in the university.
An example would be the unfortunate manner in which many American
scholars have imported the French poststructuralist thinkers of the
1980s.  Most of those French thinkers were valid intellectuals with
useful things to say, but their forbidding rhetorical style makes it
easy to collapse distinctions and promote stereotypes.  Many of the
French thinkers have had a positive influence on critically-minded
scholars who can apply the ideas selectively without trying to
copy the whole style in a superficial way.  The problem is that the
French presuppose an audience that, having been through the French
educational system, has had decades of rigorous training in reading
philosophy and writing essays.  It is thus that Americans, who have
typically had neither, sometimes misunderstand everything they read.
Thus, on the one hand, one routinely encounters in American newspapers
the embarrassing idea that deconstruction is the doctrine that a text
means whatever you want it to mean, or the confusion of deconstruction
with reader-response literary criticism.  And then, on the other hand,
one sees academics copying Foucault's writing style, which Foucault
himself eventually more or less admitted served no useful purpose.

Perhaps the saddest consequence of the American appropriation of
poststructuralism is the current alarming state of architectural
theory.  The profession of architecture has long had a problematic
social divide between a handful of stars who drive the theoretical
discourse and the great legions of working architects who end up
having to follow the fashions that are set at the top.  (See for
example Garry Stevens, The Favored Circle: The Social Foundations
of Architectural Distinction, MIT Press, 1998; and Roxanne Kuter
Williamson, American Architects and the Mechanics of Fame, University
of Texas Press, 1991.)  Today that unfortunate trend is as extreme
as ever, and the leading journal of architectural fashion-theory is
ANY, which stands for Architecture New York.  Some of the people who
have published in ANY are sensible scholars such as Saskia Sassen,
even though it must be said that they have not always been using that
venue to publish their most thoughtful work.  But for the most part,
ANY is given over to an exceedingly uneven body of poststructuralist
theoretical writing that is completely inaccessible to normal people.

An example of the regrettable transition is provided by a historian
of ideas named Sanford Kwinter.  Once upon a time, back in the 1980s,
Kwinter cofounded an annual publication called Zone that published
interesting if very uneven volumes about urban planning and the body.
At the time it all seemed very eighties, what with the extremity of
Wall Street and the white-collar looting of savings and loans and
the final show-down between the lunatic protagonists of the Cold War.
But in fact most of the articles in Zone were written in reasonably
plain language.  In the first issue of Zone, for example, Kwinter
published an article on the Italian futurist architects called "La
Citta Nuova: Modernity and continuity".  It was about their rather
interesting ideas about modern physics and the way they applied their
ideas to proposals for train stations and the like, which they thought
should express a sense of energetic attraction and circulation in the
city rather than the stasis of previous forms.  Worth knowing about.
In a recent issue of ANY, by contrast, Kwinter published an article
entitled "The genealogy of models" which is just ... sad.  Clearly
inspired by the work of the French philosopher Gilles Deleuze, it
appears to be written with the intention that every single sentence,
indeed every single word, should announce a spectacular cosmological
ontological discontinuity of world-historical proportions.  It's
all very portentous in such a way that graduate students often feel
obliged to pretend that they understand it.  It does flow in a certain
way from Deleuze, who despite the wrongful reputation of contemporary
French philosophy is a serious scholar and quite the realist.  You
can work some meaning into every sentence if you try hard, and if
you let him get away with a lot of sloppy and vague uses of words that
really ought to be used in a clearly defined relation to philosophical
tradition.  But really it makes no sense.  And most of the articles
in ANY are like this, like for example the deconstructionist writing
of a prominent architect named Peter Eisenman.  (I mean, what on earth
is the point of deconstructing architecture?)  It wouldn't be so bad
if ANY was just a fringe player in the great theoretical bazaar.  But
no; it (or the trend it reflects) has just gotten done driving out of
business an architectural journal called assemblage, which has passed
from fashion because its articles were written about real buildings.

So what is to be done?  The ideologues stereotype professors because
they want to organize a purge.  They say so in their fund-raising
letters to their donors.  They know that their work can't cut it on
the intellectual merits, so they pretend that the deck is stacked.
The fact is, those relatively few conservative scholars who do write
useful, serious work succeed just fine, and show up the ideologists
for what they are.  Of course, those conservative scholars all lie low
when the ideologues fulminate against "intellectuals" and "academic
elites".  They don't step up and say, "hey, we're intellectuals too,
you know", any more than conservative journalists take exception when
the pundits rail in a generalized way against "the media".  They all
know what the game is.  The excesses of the poststructuralist epigones
make good raw material for the mass-manufacturers of stereotypes --
an industry with its customers, its technologies, and its products.
But we should resist the whole game.  The question is not whether
foolish ideas exist.  Foolish ideas have existed since ideas first
existed.  Complaints about relativism date to ancient Greece, and
the versions of those complaints that we encounter today are wildly
unoriginal -- much less original than the putatively relativistic
contemporary ideas they are complaining about.

The ideologues want to politicize the institutions of learning under
the guise of accusing their opponents of having done the same.  This
kind of projective aggression is their longstanding stock-in-trade.
In fact, the institutions work reasonably well.  Deans make their
names by assembling fashionable new areas of intellectual strength,
and in those areas where conservative scholars have anything original
to say, deans line up to hire them.  And conservatism is fashionable.
How could it not be, with scores of conservative pundits filling every
available medium?  I've seen conservative authors submit illiterate
papers for publication and then complain that they were discriminated
against by fashionable powers that be.  Whatever happened to personal
responsibility?  The institutions more or less work, although they
could certainly work better, and they work in part because they are
decentralized.  There's plenty of room in the institutions for any
new intellectual movement to build itself up and put its case to the
larger community.  If those scholars who have problems with the more
unfortunate of the intellectual fashionlets commit their problems
thoughtfully to print, then that's part of the process.  These things
are decided in the end by the smartest prospective graduate students,
who generation after generation demonstrate an uncanny ability to move
into the programs where the real intellectual energy is.  And it's
better to have graduate students decide things than the direct-mail
commissars and their friends the pundits.  Refute bad work; don't
purge it.  And don't fall for the stereotypes that portray academics
as all of a piece.

When the war in Kosovo ended, I made a batch of notes toward an essay
about it.  But somehow it didn't seem time to write that essay.  Now
that I look back on the notes, I see that some of them have aged a
little better than others.  So instead of an essay, a few fragments.

Like most Americans, I was heartened at the lovely reception that
we all gave to the Kosovar refugees.  We put them in nice facilities,
told the soldiers to play ball with the children, got a batch of
psychologists in there to deal with the appalling traumas, and
placed people with relatives and church groups.  Communities adopted
refugees.  People turned up at the camps to donate stuff.  It was
great.  We should remember all of that and turn it into a regular part
of the culture.  "Welcome to the neighborhood, Mr. and Mrs. Rivera.
We're so glad you came.  Listen, we've got a little apartment for
you here, and Mr. Chin from the church group has a job for you in his
shop.  It's not much, I'm afraid, but maybe it'll do until you get on
your feet.  We're so sorry about the American-trained paramilitaries
who tortured your daughter.  We were working on getting that school
shut down right before you knocked, in fact.  Meanwhile, just let us
know if you need anything."  People who dislike immigrants are just
weird.  You know?  I wish that the world were twice as big as it is,
so that we could get immigrants from twice as many places.  I have
a long wish list of immigrants who the United States could use more
of: Hungarians, Uighurs, Tuvans, Tswana, Cariocas, Spaniards, and so
forth.  And all the Tibetan monks living in camps in India, and all of
the traditional crafts people in Japan, and all of the electric guitar
players in Zaire.  I want Nigerian restaurants to become as common in
American cities as Indian and Chinese restaurants are now.  We should
pay these people to come here.

Every time something like Kosovo happens, the media start a fight
between two theories of it: the ancient tribal hatreds theory and the
modern political machinations theory.  It's always those two and no
more.  The problem, of course, is that both of them are always true.
Ancient tribal hatreds only live in the present day because someone
decides they can benefit by poking at the wounds, and modern political
machinations always draw on historical memory in selective ways for
their emotional raw material.  Unless you put the two partial theories
together, you do not have a clue as to what is happening.  That is one
reason why I so enjoyed Noel Malcolm's book "Kosovo: A Short History"
(New York University Press, 1998).  I bought it from a bookstore in
Vienna and read most of it while sitting in a Greek restaurant eating
fish and pita bread, and then on a long plane ride.  It was finished
literally months before the war began, and its last few pages predict
that the war would happen and that it would be as awful as it was.
What's most striking is the sense that the author knew that his book
would be read very closely by the combatants in a horrible war, and
that his book could make things either better or worse depending on
how credibly he reconstructed 500-year-old events in places with names
like the Plain of Blackbirds.  You can feel this powerful sense of
responsibility on the page.  His problem, of course, is that nobody
really knows what happened on the Plain of Blackbirds.  All anybody
has is conflicting fragments of evidence, mostly in the form of
traditions that were handed down orally for centuries before being
turned into the raw material for the modern political machinations of
one generation after another.  So he sifts, and he sifts, and he talks
about the different versions of the story, and about the generic forms
that historical songs happened to take in one century or another
afterward, and how one variant probably arose a hundred years later
because several other songs in that same period changed in the same
way to fit with the fashions of the time, and so on.  This sifting
requires page after page after page -- no matter what the title says,
it is not a short history -- until something like a plausible story
emerges.  The whole book is like that.  Kosovo was a battlefield over
which one army after another got pushed and pulled, and he traces
every episode with the knowledge that this stuff really matters to the
descendants of the people involved.  Now, if you were listening to the
fashionable hype about the current state of historiography, then you
would be inclined to hold this stuff up as good old-fashioned respect
for the truth.  But the fact is, this reconstruction of historical
memory is exactly what historians and anthropologists and others have
gotten good at in recent years.  It is hard work, and it requires a
sustained engagement with the materials.  But it also requires theory,
and it requires a sense of the many ways of getting it wrong, the many
ideologies that can distort one's vision and render one's engagement
with the materials superficial after all.  That is what decades of
critical inquiry into history have bought us, and what the ideologues
want is to throw that all away and return us to the dark era of myths
-- myths such as those of ancient tribal hatreds and modern political
manipulations, taken separately, that is, and not in relation to one another.

Listen to this quote: "I expect the administration will try to define
this as a victory, but it was a botched war from the very beginning."
It comes from one Kim Holmes, director of foreign policy and defense
studies at the Heritage Foundation (USA Today, 6/4/99, page 6A).  Note
how this person is spinning the war by accusing the administration of
intending to spin the war.  If the public had the slightest knowledge
of how World War II was actually conducted from day to day, then Bill
Clinton's unprecedented air-power-only victory in Kosovo would seem
an astounding chapter in the annals of warfare in comparison.  But of
course the public had no idea how World War II was conducted, and for
the most part they still don't.  Having temporarily lost the battle
against open government, conservatives are using open government as
part of their battle against its proponents, just as they are doing
with all of the tools of resistance to power.  That said, I sure hope
that somebody is conducting a systematic on-the-ground survey of what
the NATO bombs actually hit.  Some of this has been happening.  They
have announced, for example, that they hardly hit any Serbian tanks,
mostly decoys.  But that's just one small part of the larger question.
The answers matter for many reasons, including the moral evaluation of
the war and the ways in which the future political manipulators on all
sides will seek to remember the ancient tribal hatreds of the present.

Let us hope that the war in Yugoslavia will finally spell the death
of European civilization.  When the Serb spokesmen appeared on the
television, they kept repeating over and over, "We are a civilized
country".  They had this look of bewilderment on their faces that
America and Europe would bomb a civilized country.  "We are not some
African tribe", one of them said incredulously.  Here we had these
people organizing a barbaric campaign of mass murder and forced
expulsions against innocent people, and in their minds they were,
above all, civilized.  Where does this idea come from?  The truth is
that it comes from Europe.  The Serbs who said this stuff are normal
Europeans whose minds have been preserved in amber for a hundred
years.  All Europeans used to be like this.  The civilized Belgians
slaughtered people in the Congo, the British in India, and so on.
What if an alliance led by the Italians had bombed Washington in
the 1830s to stop the ethnic cleansing of North Carolina?  In each
case the killers claimed to be civilized and claimed their victims
to be uncivilized, and in each case the claim to be civilized masked
the most uncivilized of all possible realities.  This has a name in
psychology; it is called projection.  The good news is that Europe
and North America have started getting decivilized.  They no longer
pretend that they are good enough to go around slaughtering other
people.  Thirty years ago they turned away when their good friends
in Indonesia slaughtered people on a scale matched only by the Khmer
Rouge.  But even that sort of thing isn't washing any more, and one
could actually feel some confidence that the Australians who recently
waded ashore on East Timor were actually going there with honest and
peaceful intentions.  This is a fragile new reality, to be sure.  It
surely felt odd to be fighting a war in Kosovo with so little of the
jingoism that normally accompanies a war, and with so few persuasive
conspiracy theories about ulterior motives for the war.  It almost --
almost -- felt like we were fighting the war because we morally had
to.  It's too early to be completely certain of that, of course.  We
could still discover an awful lot.  But still, let us be open to the
possibility that global civil society actually now exists, that the
concept of human rights is actually taking hold, and that something
that actually deserves to be called civilization is finally taking
root in the wreckage of this century.

In discussing the response to my recent round of short papers, I left
out a couple of the more striking comments on my paper about John
Commons' view of democracy.  Several people sent me nearly identical
comments on this paper, all lecturing me on the need for democracy to
protect the rights of minorities.  This is odd, given that my paper
was exclusively concerned with a grassroots model of democracy and
only addressed the formal mechanisms of democratic government in the
most glancing way.  Yet in these people's minds, democracy equalled
majoritarian rule and majoritarian rule was obviously a bad thing.
In the cases when this argument was spelled out, it became clear that
it originated in an equation between democracy and mob rule.  This
equation, which dates back to the earliest days of democracy itself,
has been heard in many places and times from aristocracies who are
afraid that the rabble will use democracy as an excuse to take their
stuff away.  Some of the Founding Fathers of the United States said
such things as well, although so far as I know the formulation in
terms of protecting minorities (as opposed simply to protecting the
rich) is a modern innovation.  What struck me is that some of these
people seemed to be unacquainted with the idea that democracy could
mean anything else besides majoritarian mob rule, or that generations
of democratic philosophers had explored the question at great length.
In short, some of these people were unacquainted with any ideas
about democracy except those of the most retrograde ideologies of a
stratified society.  How is this possible in a society in which the
media are supposedly dominated by liberals?

I found another type of response to that paper even more remarkable.
The paper was apparently distributed on a mailing list of people
who are involved in financial cryptography, and many people who are
involved with financial cryptography are literally anarchists who
want to build a financial system that the government cannot touch.
I had remarked in my paper that such people are not concerned about
money laundering, and in fact these people openly said that they did
not regard money laundering as a real crime.  They actually said that
money laundering is not a real crime because it is only a crime against
the government, or else that it is only a crime because the government
says that it is.  What I found so remarkable about this argument is
how poor it is.  It makes perfect sense for a government, especially a
democratic government, to outlaw the hiding of money obtained through
criminal activity.  One can certainly disagree how far the government
can go to detect money laundering -- for example, whether it can open
everyone, innocent or not, to unlimited warrantless inspection of
their finances, or whether it can dictate that financial systems be
designed in a way that facilitates law enforcement.  Those are valid
arguments about the inescapable tension between civil liberties and
law enforcement in a democratic society.  One might argue that money
laundering laws are not very important because the vast majority of
laundered money comes from drug dealing, a problem for which existing
laws are failing drastically and for which alternative laws are nearly
certain to work better.  One might even discover that modern financial
technology makes it impossible to enforce money laundering laws
without a near certainty of much broader invasions, in which case
it would simply be necessary for society to make the unhappy binary
choice that results.  But to argue that money laundering is only a
crime because the government makes it a crime makes no sense.

The draft paper about 1984 and Enemy of the State that I distributed
a while back had some loose ends.  One of these loose ends pertains to
the place of each film in the political cultures of the US and the UK.
I suggested that the political culture of each country was shaped, at
least in its relevant aspects, by its religious history, and I told
a murky story about how this led to different ideas about oppressive
government.  One issue is the extent to which Orwell's antipathy
toward the Catholic church shaped his representation of Big Brother's
regime in 1984.  I've read some literary critics who said that he
was clearly thinking about the church to some degree, but I have not
done enough scholarship to really be making statements on the matter.
I'll do that once I get comments back from the referees at the journal
to which I've submitted the article.  I've been thinking about the
matter in the background, though, and it was in this context that I
was astounded to see a certain photograph in the Irish Times, 9/4/99,
page 8, accompanying a review by Dermot Keogh of a book about Irish
politics and the Spanish Civil War.  The caption on the picture read
"Irish Christian Front meeting, Grand Parade, Cork, on September 22nd,
1936: the crowd holding their hands in the shape of crosses to ward
off the threat of communism".  I wish I could show you this picture;
we tried scanning it but alas it came out a blotchy mass of grey.
It shows an enormous throng of people packing a city square, all of
them holding their arms over their heads with their wrists crossed.
This gesture is very similar to the gesture that the people make in
the opening scene (and elsewhere) in the film version of 1984 during
the Two Minutes Hate.  It's not quite the same, since I believe that
the people in the film are clenching their fists, whereas the people
in the Irish picture have their hands open.  This particular gesture
having been current in Ireland in the mid-1930's, Orwell would surely
have known about it as he was writing the book, perhaps in 1946.  What
I haven't checked is whether and how this gesture is described in
the book.  I know that the film was striving for a high degree of
fidelity to the book, but I don't recall the gesture being mentioned,
much less its details, from my reading of the book many years ago.
(I didn't reread the book for my paper on the film because I wanted to
write specifically about the film.)  When I get a chance I will check.
Or perhaps someone has the book handy and is curious.  On some days I
feel like half of the institutions of the West derive from Catholicism
and the other half derive from Protestantism.  Although that is surely
too simple, it is important to trace the various strands of history so
that we can become more aware of the hidden structures of our lives.

A distinctive feature of Internet jargon is its prefixes, and someone
should trace their history.  The first, perhaps, is e-mail, short
for electronic mail, which begat e-commerce, short for electronic
commerce.  Then cyberspace, which then gets generalized into cyber-
everything else.  Lately the prefixes have multiplied.  There's i-,
capitalized or not -- Apple has the iBook and MIT and Microsoft have
announced the I-campus.  In these cases, though, the abbreviated
word is more of a brand name and not an abbreviation; one is supposed
to know that the i- stands for Internet, but nobody is supposed to
say "Internet Book" as the long form of the name.  Lately Amazon
has introduced zShops, which are the independent merchants who sell
their wares through the Amazon site.  You have to think about it for
a moment; the z- prefix is presumably derived from the word Amazon,
and is meant to rhyme with and generally remind one of the e- prefix.
It is quite clever, actually, the idea that Amazon has the same
status as electronics in general.

What do these prefixes imply?  In each case, they mark out a contrast
between the old/real world and the new/digital world.  In particular
they mark out what dialectical thinkers would call a double relation:
distinct but still related.  E-mail happens in a distinct sphere from
(regular) mail, but is still joined to it by the word mail.  One wants
to call it a metaphor, but that doesn't quite capture it.  (For more
advanced theories of the relationship between the old/real/analog and
new/virtual/digital media, see JoAnne Yates and Wanda J. Orlikowski,
Genres of organizational communication: A structurational approach to
studying communication and media, Academy of Management Review 17(2),
1992: 299-326; and Jay David Bolter and Richard Grusin, Remediation:
Understanding New Media, MIT Press, 1998.)  So there's a tension: it's
new but not completely.  It serves the same function, perhaps, but in
a different medium.  This is reasonable enough in a case like e-mail.
It becomes hazardous, though, when we lose our grasp of the tension,
when we go wild with the "new" part and forget the "old" part.  "New
good, old bad" -- that's our slogan.  But it's not a very good slogan.
So what's going to happen?  As digital technologies become more and
more pervasive, will our language become crammed with these prefixes?

Compare these prefixes to Nicholas Negroponte's informally trademarked
use of the word "being".  "Being digital" is supposed to be a wholly
new existential state -- it's not just the computers but we ourselves
who are making that transition.  As the bits are liberated from the
atoms, the technology starts defining the society and us along with
it.  Having declared the whole digital thing to be boring, Negroponte
is doing something smart: he's moving along from the technology to
the various states of affairs in the big world that the technology
is supposed to bring about.  Each of these is itself a "being", such
as "being rural" for his idea that distributed information technology
will dissolve cities and let everyone live in the countryside.  We
can ask him another time about his position on the universal service
cross-subsidies that have historically made it possible to wire rural
areas for telephone service.  The point here is that each of the new
"beings", such as "being rural", is supposed to be heard as something
downstream from "being digital"; in other words it's "rural because
digital".  Again we have that tension between old and new, as with
e- and the other prefixes.  But where is the analysis of rural life?
For Negroponte it's an idyllic refuge from the city, formerly reserved
to the rich but now available to everyone.  Surely there's more to
it, but we won't know what.  It's a clever device: the concept of
"digital" is not overtly present in "being rural", and yet by a hidden
reference it proposes to govern our thinking.

American culture has grown preoccupied in recent years with the evils
of relativism.  This curse is supposedly a reason to roll back decades
of liberalism and embrace conservative values.  Just for that reason,
we should learn to notice the many ways in which conservative ideology
is itself relativistic.  For the moment I will satisfy myself with one
small example, a quote from a conservative activist historian at Emory
University named Elizabeth Fox-Genovese:

  "... there seems to be a move afoot to transform slaveholding into
  personal guilt instead of membership in a broader social system."

This quote came from an LA Times article (by Sam Fulwood, 6/29/99,
page 1) about some white Southern families who are finally admitting
that they are related to some black Southern families, so that their
family reunions are no longer segregated.  Conservatives denounce this
same relativism when it is applied to different cultures in the modern
world, but relativism is quite necessary if one is going to advocate
a return to older values.  Decent people have repudiated many of those
older values, and someone must negotiate the tension.  Relativism is
part of the answer.  Here, then, is the conversation we never have:

  A: We must uphold the values of the Founding Fathers of our nation!
  B: But the Founding Fathers held slaves.  And even the ones who
     tried to end slavery believed all sorts of awful racist stuff.
  A: You're anti-American.  Besides, you can't impose modern standards
     on another century.  They were just members in a broader social
     system.
  B: So why should we uphold their values?
  A: Because their values were permanent truths that are independent
     of history.
  B: You're just picking and choosing, imposing your own values under
     the guise of a return to tradition.  The truth is that American
     history is the history of an ever-expanding commitment to freedom.
     The Founding Fathers set that commitment in motion, and that's
     good, but it has proceeded much further since then and has a long
     way to go.
  A: You're the one who's imposing your own opinions.  It's dangerous
     when people think they know better than tradition, because we
     have no guarantee that the people after us really will know any
     better than tradition.  We must stick with original meanings.
  B: So, uh, what?  You're saying that we should restore slavery?
  A: There you go again, always playing the race card.
  B: ((Makes that scrunching-eyebrows facial expression that you see
     in the comics on the face of Mallard Fillmore.))

Does that mean that conservatives are really relativists?  It's more
complicated than that.  Different strands of conservative thought
are internally contradictory, or they contradict one another.  For
example, one cannot uphold the inherent value of tradition against
a modernizing rationalism that would think things through from first
principles while also insisting on conformance to moral universals.
These two positions are, in fact, direct opposites of one another.
We might fail to notice this if any "traditional" society had ever
existed that even remotely conformed to those moral universals.  But,
as the Founding Fathers and their slaves make clear, that's not even
close to being the case.

Furthermore, it has often been observed that liberal democracy --
not in the American political sense of the word "liberal" but in
the sense of political individualism -- is necessarily relativistic:
an absolute and thoroughgoing application of any particular set of
moral laws wouldn't be democracy because it would prevent many people
from participating in the democratic process.  The US Supreme Court
under Chief Justice Rehnquist, in fact, has often applied an opposite
version of the same principle: increasing the power of the legislature
by refusing to uphold moral principles in cases where the rights of
certain minorities are at stake.  That conservative majoritarianism,
for example, is why the Bill of Rights keeps shrinking under a court
whose philosophy is supposedly devoted to getting government off our
backs.  Conservative activists and politicians smeared Lani Guinier
for exploring the limits to majoritarianism in defense of certain
minorities who do not tend to vote for conservatives.  Conservative
judges and legal theorists have been revolutionizing American law by
means of a legal theory based on economic analysis that explicitly
eschews any conception of morality.  The public relations practiced
by the organizational supporters and proponents of conservatism is
an amoral profession that distorts reality in a mechanical fashion for
whoever happens to be paying.  In fact, if you judge by their actual
behavior, including their silence when this sort of stuff is going on,
conservatives are interested in getting government off their own backs
-- and putting it on everyone else's.  They are relativists when it
suits them, and when it suits them they use the label of relativism
as a stick to hit their opponents.  If liberals dominated the media
then you would hear about this at least once a week.  But you don't.

For some time now I have been predicting the return of authoritarian
culture.  I see signs of this everywhere.  Since I have been tracing
some of the rhetorical devices by which rationality is being eroded
in favor of more primitive modes of thought, let me mention one more.
I received a message recently from someone who was excited about the
following article: .
It is entitled "Just do what the pilot tells you", and its ostensible
purpose is to refute those people who make a general point of defying
authority.  The person who was excited by the article was a mother
who was preoccupied with getting her teenage son to obey her commands.
Some people just have a driving need to get other people to obey their
commands, and I guess that she was one of them.  She wanted to be an
authority, and she saw this article as a rhetorical resource in that
effort.

On one level, the article would seem hard to disagree with.  Surely
we can find one single situation, however hypothetical, in which it
is a good idea to obey the commands of an authority.  Yet one remains
uncomfortable.  Surely this person has not taken the trouble to write
this article just to establish a trivial point.  Of course: the real
point of the article is to put a wedge into our minds, to cast doubt
on the automatic respect that we accord to people who defy authority,
and to pave the way for establishing a stereotype toward the opposite
extreme.  How we interpret the article thus depends on what rules we
think the author is playing by.  If the author is playing by rational
rules then the conclusion of the article must probably be granted
even as its purpose remains mysterious.  But if the author is playing
by the associationistic rules of public relations, preparing a new
round of programming for the lizard brains of those people who are too
busy with the serious business of their lives to seek out the slippery
slope, then the article is insidious: not simply a trick, but a highly
routinized sort of trick, part of a technology.  I don't know exactly
what neurolinguistic programming is, but surely it is something quite
similar to this.  Because the article does admit some kind of rational
interpretation, albeit a trivial one, the author retains a level of
deniability.  But I don't buy it.

You may be saying, c'mon, surely there exist people who defy authority
to a pathological degree.  And I would agree.  I used to work with
someone whose whole modus operandi was to divide people.  She posed as
a political radical, and indeed she expended quite stupendous energy
organizing people around her view of the world.  But in the day-to-day
affairs of the environment around her, she was seriously destructive.
You could catch her saying things that were false, or that did not
make any sense, setting up polarized conflicts around issues that
could have been settled in a rational way, and when you caught her
she would go into a whining fit that was embarrassing to be around.
In her mind she was fighting oppression, and to some small degree some
oppression might have been going on, but mostly what she was doing was
projecting some pattern that she picked up from some other (presumably
quite traumatic) situation.

So such people really do exist, just as there do exist people who
really are whining victims.  The danger comes when the existence
of those people is inflated into a stereotype through the selective
application of loose and vague words, for example identifying people
as "victims" with an ugly tone of voice without a proper assessment
of the validity of their complaint.  Try this: find a longstanding
conflict between two parties, choose one of the warring parties at
random, listen to them tell their story, turn to the other party,
put a disgusted expression on your face, and sneer "whatever happened
to personal responsibility?!".  It works every time.  It feels great,
like you're righteously upholding moral values against the scum of
the earth.  The truth, of course, is that you're just getting a power
trip out of arbitrarily whacking people.  Real examples: after smokers
recently won lawsuits against tobacco companies that had lied for
decades about the dangers of smoking, did conservative opinion leaders
howl about the abdication of personal responsibility on the part of
the lying tobacco executives?  No, they howled against the abdication
of personal responsibility on the part of the people who had been lied
to.  Go figure.  Or take the recent legislation to protect computer
companies against lawsuits for damage caused by their Y2K-defective
products.  Did Congress laugh at this transparent attempt to escape
personal responsibility?  Quite the contrary.  Morality is real and
important, but this sort of abusive moralizing gives morality a bad
name.  The danger is that this kind of selective overgeneralization
will now be applied to people who defy authority.  The sixties did
produce a good number of polarizing personalities, and the cultural
static they caused now provides an opening for the authoritarian
personalities to step forward.  These people are dangerous not only
because of their beliefs, but also because of their mastery of the
PR techniques of associationism, which facilitate obfuscation and
stereotyping on a vast scale.

It is easy to design something that looks impressive on a computer.
What's hard is to design something that is connected to the reality of
the people who use it.  Take the example of Web pages, in particular
the home pages of companies and universities.  When I look up a Web
page for a company or university, at least half the time I am looking
for an address or phone number.  But far too many Web sites omit
that basic information, or else they bury it in some random location
that you have to drill through a dozen pages to find.  These sites are
designed by people who tacitly believe that the Internet is a zone
of reality wholly unto itself, and they are not designing from the
standpoint of the Internet as one part of a larger ecology of media.
University Web sites often exhibit a further problem, which is that
they fail to link to the home pages of individual faculty members.
Often these sites are designed by the public relations people, who
try to make them self-contained.  Those PR-designed Web sites usually
look okay, but they too often exhibit an almost creepy disregard of
the questions that any real person wants answered.  Those questions
are usually answered on the home pages that the faculty maintain
for themselves, yet somehow in year seven (or whatever it is) of
the Web era, we still have no uniform mechanism for searching for an
individual's home page given only their name.  The commercial search
engines don't remotely suffice for this purpose.  It's frustrating.
Last complaint: too many US companies do not list phone numbers that
can be called outside the United States.  They just list 800 numbers.

Some URLs.

The Learning Marketspace
(belligerent proponents of technology in higher education)

Research on the Effectiveness of Distance Learning in Higher Education

Career Outcomes of English PhDs
and click on the link that promises PDF for Communicator Online

tools for world wide dialup

Symposium on Narrative Intelligence

Critiques of Libertarianism

Latest Management Research and Practice

Internet video broadcasts of civic meetings

Encyclopedia Britannica online for free