Some notes about Y2K, new conservative rhetoric, the wired university,
and the limits of automation, plus movie recommendations, follow-ups,
and URL's.

**

You probably got a message recently asking if your address on RRE was
up to date.  If you followed the instructions to change your address,
please let me know if you get duplicates of this message.  We had
list-server hassles as you'd expect, and I'm not certain that all of
the changes went through.

**

Stop the presses!!  I have just received this bulletin from a reader:

  I read your cheap pens piece.  I thought you'd like to know that
  there's a store on the corner of Embarcadero Road & El Camino Real
  in Palo Alto in the Town and Country Shopping Center (this is like 2
  long blocks from Stanford U) called MAI-DO.  It has an amazing number
  of Japanese Pilot pens for sale including more flavors of the Pilot
  Hi-Tec-C than one could imagine.  They're about $2 a piece.  I like
  the 0.3 mm the best, in blue black.  They also have lots of notebooks
  and Japanese organizers and B5 and A4 paper and notebooks.  The store
  is across the street from Palo Alto high school, if you're familiar
  with PA.

It is a privilege to be able to share this with you.

Speaking of which, if you want to get one of my remaining excellent
cheap pens, this is your last and final chance.  Just send a check
for US$20, made out to Amnesty International, to me at this address:
Dept of Information Studies; UCLA; Los Angeles, CA 90095-1520; USA.
The list of probably-available pens can be found at:

  http://dlis.gseis.ucla.edu/people/pagre/pens.html#available

For more on Amnesty International's new campaign against torture:

  http://www.amnesty.org/

**

I still haven't noticed any five-star international conferences
to assess the accuracy of the many predictions for the year 2000.
Have you?  I didn't think so.  We'll have to carry on by ourselves.
We'll pick up with an article in the 6/5/00 Wall Street Journal:

  Twenty years ago, Al Gore, then a 32-year-old Tennessee congressman,
  persuaded 200 House subcommittee chairmen to predict what their
  panels would focus on by the end of the millennium.

Most of the predictions, as you might imagine, bear no relationship to
reality.  Blimps.  Force field weapons.  The extinction of railroads.
No new discoveries in pharmaceuticals.  Teleconferencing replacing
some Congressional hearings.  Anti-satellite weapons knocking out
telecommunications systems.  Vertical take-off airplanes in commercial
aviation.  Electrical power beamed from space.  The paperless office.
They overestimated social changes due to technology and underestimated
social changes due to other factors.  They missed the fall of the
Soviet Union altogether.  They thought that HMO's would be a good
thing, and they especially celebrated the end of interest-rate limits
for savings and loans, which led to catastrophe a few years later.

Some predictions were more accurate, such as challenges to the safety
of genetically engineered products.  One big winner in the prediction
contest was Al Gore.  He correctly predicted the cloning of cows
and the prominence of global warming.  He didn't talk about computer
networks in this document, though he had discussed them elsewhere
around the same time.  The section written by the telecommunications
subcommittee did mention the issue in general terms, but it predicted
that France would achieve its goal of world leadership through Minitel.

There you have it.

**

Reporting Y2K.

Before the millennium year leaves us, we must stop and consider a
remarkable column that appeared on the op-ed page of the 1/4/00 Wall
Street Journal.  The author was the befuddling Peter Huber, and the
subject was the failure of the Y2K bug to shut down the world on New
Year's Eve.  Huber argues that the predictions of Y2K catastrophe were
simply another in a long line of anti-technology scare stories, and
he is particularly incensed by the role of television newscast anchors.
He opens with this:

  Call it the anchorman law of technological catastrophe: If Peter
  Jennings can see it coming, then it isn't.

Then, a paragraph later:

  The anchorman told us just what was going to happen to our lights
  and phones, and when.  And, right on schedule, it didn't.

This is a serious charge.  Did Peter Jennings really predict that
our lights and phones would fail on New Year's Eve?  Huber does
not provide us with Jennings' words, or with any close paraphrase of
them, or with the date on which he supposedly said them.  One wonders:
did Jennings really issue predictions, or did he simply, in his
role as a journalist, quote the predictions of prominent authorities?
We are puzzled.  But rather than help us out, Huber commences a rant.
He offers the following generalization:

  The anchorman ... has to explain a snafu's cause and likely effect in
  two minutes flat.  But anything that's real and explainable in so few
  words is fixable by 10,000 biochemists, engineers or programmers at
  Merck, Monsanto, or Microsoft.

This is one of those lines of reasoning that has a surface plausibility
until you think about it for one second.  Why should this conclusion
follow at all?  If the sun suddenly started expanding by a million miles
an hour -- not at all implausible, given how little we know about the
sun -- that would be easy to explain and impossible for any number of
biochemists, engineers or programmers to do anything about.  We would
hear about it on the news, and then we would die.  And is it really
so certain that 10,000 geniuses will be enough to stop a pandemic of
drug-resistant tuberculosis?  Huber's writing is full of such reasoning.

Having said this, Huber launches into a nasty ad hominem attack on news
anchors, whom he imagines to be motivated by envy toward technologists.
The attack then expands to other critics of technological society.  The
market, with its powers of fixing and planning, is said to solve all
problems before anchormen even hear about them.  Then comes the punch:

  In his own defense, the anchorman may insist that he just reports
  what others foretell, but he's being too modest.  He picks from a
  vast and varied catalog of predicted disaster, and makes clear what
  most professional oracles take great pains to muddy.

This is so twisted that it takes real work to understand it.  You will
recall that we were wondering whether Peter Jennings himself predicted
the failure of lights and phones, or whether he was simply reporting
the views of prominent authorities.  This passage appears to provide
both answers at the same time.  By choosing to report the story at
all, Huber suggests, Jennings is ipso facto endorsing the predictions.
The clear implication was that Jennings should not have reported the
predictions of, say, the Republican Congressional committees that did
predict a Y2K catastrophe -- you will recall that it was going to be
Al Gore's fault.  He should have ignored the story.

The larger implication here is striking: there is no reality, Huber is
saying, no truth about which controversies are sufficiently prominent
to be worth reporting.  It's all a jumble of fragments, with no basis
to choose among them except the pure arbitrary will of the anchorman.
In particular, Huber is implying that it was not newsworthy that
a recently discovered and globally pervasive computer bug, one that
took an unknown but very large number of forms, would cause industrial
society to shut down overnight unless a massive effort were undertaken
to repair unknown thousands of programs, many of which could not
be recompiled.  So absolutely certain was the planning capacity
of the market, he implies, that the populace should not even have
been informed that the very survival of civilization had been staked
on it, with bets coming due on January 1st.  This is an extravagant
claim, and it is hard to imagine what it is doing in the newspaper.

Huber's article exemplifies a disturbing rhetorical pattern that to
my knowledge is new.  The pattern goes like this: (1) open your column
with a weak or distorted version of your opponent's position; (2) then
engage in ad hominem ranting and sophistry, thus creating an emotional
environment of rage and disdain; (3) three-quarters of the way along,
say "opponent might complain that his real argument is ..."; (4) take
advantage of the emotional environment of rage and disdain to brush
off the real argument with illogic; (5) now resume the polemics as
if nothing had happened; and (6) conclude however you want, unmoored
by reason.  I've seen this a lot lately, and cannot remember seeing
it before.  It is very much as though the professional publicists of
the new jargon are constantly reading one another's work and copying
whatever rhetorical innovations they see there, big or small, whether
they make sense or not.  These publicists are numerous, and intelligent
in their way, and the collective force of their bad faith is considerable.

**

The hidden curse of projection.

Projection is insidious.  Let us consider two analogous examples of
the projection that is hidden in the currently fashionable jargon.

(1) The phrase "politically correct" is a work of genius.  Having
been salvaged from "their" rhetoric, it can be filled with all
sorts of extra meanings while still claiming to plumb the depths of
"their" real thinking.  In everyday invective, the phrase "political
correctness" is used in two distinct ways:

  "Politically correct", version A.  Every political, social, or
  cultural idea that is not conservative is labeled "politically
  correct".

  "Politically correct", version B.  The use of force, intimidation,
  loud voices, protest tactics, or moral indignation to impose
  nonconservative ideas or suppress conservative ones is labeled
  "politically correct".

Notice what happens when the phrase "politically correct" is used
in both of these versions without any clear distinction being drawn
between them: non-conservative ideas begin to seem oppressive,
just for not being conservative.  The idea is that every idea that is
not conservative is ipso facto something that is crammed down people's
throats, something artificial and imposed, a divergence from the given
order of things.  Why is this an example of projection?  Look at what's
implied when non-conservative ideas, as such, are associated with
intimidation and repression.  Because intimidation and repression are
illegitimate in a free society, non-conservative ideas are associated
with attacks on freedom, and are thus labeled as having no legitimate
place in a free society.  So people who use the phrase "politically
correct" according to the current fashion are trying to delegitimate
non-conservative ideas under the guise of accusing non-conservatives
of delegitimating their own ideas.  That's projection.

The phrase "liberal media" is also used in two distinct ways:

  "Liberal media", version A.  Journalistic institutions that abandon
  the norms of objectivity to favor liberals in a systematic way are
  said to be the "liberal media".

  "Liberal media", version B.  Anything that appears in the media that
  does not conform to the conservative party line of the day is said
  to be evidence of the "liberal media".

Notice what happens when the phrase "liberal media" is used in both of
these versions without any clear distinction being drawn between them:
the very existence of any single non-conservative idea in the media,
or even a single phrase that does not convey a conservative spin, is
made to serve as evidence that the media as a whole are biased against
conservatives.  In other words, "liberal media" can mean "the media as
a whole are liberal" or "those parts of the media that are liberal",
and the phrase is used ambiguously to blur the difference between
these two ideas.  So even when hundreds of conservative pundits appear
in the media daily, the slightest divergence from the conservative
party line is still evidence of the "liberal media".  This systematic
ambiguity between different uses of the phrase "liberal media" also
makes it possible to argue for the existence of "the liberal media"
just by gathering lists of every non-conservative phrase that appears
in the media, regardless of the number of conservative phrases that
might also have appeared.

Of course, the PR message about "liberal media bias" takes more forms
than this, and those other forms would make equally good objects of
investigation.  I am not claiming to disprove the (patently absurd)
claims that the media exhibit a liberal bias.  My topic here is this
one particular ambiguity and its consequences.  And its consequences
are another example of projection: any nonconservative idea in the
media is ipso facto portrayed as an example of "liberal media bias",
and therefore as illegitimate.  Under the guise of pretending that
the media eliminate conservative views, the jargon actually promotes
the elimination of nonconservative views.  This is not just theory,
either.  Any media outlet that runs a liberal cartoon or columnist
can expect to receive angry letters protesting the "blatantly liberal
views" that it has allowed to appear, as if devoting space to any
nonconservative at all were ipso facto illegitimate.  This is not a
way that rational and decent people think, but it is altogether common
right now.

Projection is an integral component of every example of aggression.
When one country invades another, for example, it almost invariably
stages an attack against itself by the country it wishes to invade.
When men who batter their wives are compelled into therapy, those
few who ever become capable of explaining their feelings explain
that they felt, throughout their attacks, that in reality their wives
were attacking *them*.  The new jargon is not the moral equivalent
of physical violence, of course, but it exhibits the same structure:
attacks on "them" disguised as accusations that "they" are attacking
"us".  The more that the jargon develops -- the more rhetorical
devices it acquires for engaging in projection -- the more primitive
does the aggression become.  Unless this irrationality is brought into
the light and shown for what it is, it will only get worse.  People
who can be sent foaming at the mouth against imagined enemies are
dangerous to everyone except the tiny elite who get to imagine who
the enemies are.

**

Realistic optimism for the wired university.

In the wars over information technology in the university, I am
a neutral.  I am neither an enthusiast nor a critic but a realist.
Realists have it hard: they don't have an easy rhetoric they can use,
and they don't fit into the conventional "pro versus con" story frame
within which these disputes are narrated.  I know people in both
camps, though I admit that I find the extremists in the enthusiasts'
camp much more insufferable than the extremists in the critics' camp.

In talking to both camps, I have noticed a pattern.  Many people on
both sides imagine themselves to be a small and embattled minority
pushing up against the inertia of established institutions.  The
enthusiasts, many of them, are individual faculty and researchers who
are depressed at the difficulty of persuading their institutions to
support large-scale initiatives in this area, and at their colleagues
who remain focused on their individual research topics and not on the
urgent work of revolutionizing the institution to take advantage of
the technology.  The critics, many of them, are likewise individual
faculty and researchers who see university administrations acting
like corporations and entering into partnerships with corporations
to create commercialized cyberuniversities with no regard for the
faculty, or for what education really means.  Although these views
seem like opposites, they come remarkably close to both being right.
I want to transcend what they have in common -- a sense of futility
that derives from an insufficiency of imagination.

Not everyone fits these two patterns, of course.  Some universities
do have technology enthusiasts who are running significant programs
online, for example degree programs that have students in Singapore.
And a remarkable number of critically minded people have had a hand
in shaping either the technology or their own institutions' use of
it.  Andrew Feenberg of San Diego State is an example; he did some of
the first, if not the very first, experiments with online teaching
almost twenty years ago.  Mike Cole at UC San Diego has been running
classes at multiple UC campuses over video links.  There are others.
These people are not anti-technology; that is not what "critical"
means to them.  Rather, they want to ensure that the technology is
used in a way that fits with serious ideas about education, so that
the technology itself does not drive educational theory or practice.

Although I am friends with many people in this latter camp, my work
does not fit into any camp.  I do often use technology in interesting
ways in my classes, but I am not trying to change the world by doing
so.  Instead, my work in this area is mainly analytical and normative.
I want to sketch a structure of ideas from which we might work in
reinventing the university in the wired world.  I am not trying
to shape technology in a direct way; rather, I want to shape
imagination -- imagination not just about technology, but about the
larger unit of analysis that includes both the technology itself and
the institutions within which it is embedded.

My work is also distinct from the valuable community that conducts
research on organizational informatics -- the institutional dynamics,
largely cognitive and political in nature, that affect how information
technology gets used in particular organizational contexts.  These
people, particularly the school that came out of UC Irvine, focus
squarely on the political processes that shape information technology:
office politics, for example, or the politics that are shaping the
development of online publishing, as in Rob Kling's current work at
Indiana.  Such work is thoroughly needed, but it's not what I'm doing.
I'm focused on prescription and imagination -- not "how is it done?",
but "how *should* it be done?".  We often think of imagination as an
escape from reality, but that's not what I mean.  I want to develop
a realistic imagination, one that is informed by the real dynamics
of institutions, by the real grindings of power politics.  I want to
*intervene* in these politics, providing the raw imaginative material
that will be needed by anyone who is trying to set things straight.

My critically-minded friends sometimes call me an "optimist".  They
do not mean this in a good way, and not because they are pessimists.
Rather, an optimist, for them, is someone who does not understand the
politics of the situation.  Engineers are trained to focus in a narrow
way on the technology, and so they are often blindsided by political
factors that they don't understand.  Think, for example, of the huge
project in California to build a new computer system to track deadbeat
parents and make them pay child support.  That project crashed and
burned for political reasons: extracting child support was the turf of
local authorities, and each local authority had its own way of doing
things.  Nobody had (or exerted) the power to compel the various local
authorities to do things in a standardized way, and so the development
project was torn apart as every local authority demanded that its own
procedures be supported.  (The same thing happens in the private sector
every day.  We just don't read about it as often.)  The engineers who
ran that project were doubtlessly competent in a technical sense, but
their competence silently presupposed that someone else, the client,
had the wherewithal to impose the unified political order that the
unified technical order required.  An optimist, for the critics, is
someone who goes around imagining new technologies -- and thus new
institutional forms -- oblivious to the political realities that will
shape any real technology in practice.

So am I an optimist in that sense?  No, I don't think so.  To the
contrary, I offer scenarios that ought to work just *because* of the
institutional forces.  I see design as judo, a matter of catalyzing a
change by using technology and relatively minor institutional changes
to clear a path for the existing forces to head off in new directions.
That is the purpose of my proposals, and the test of them.  I do agree
with those who say that the university must change or die.  But not
all "changes" are equally good.  Many are bad, and many others will
never happen.  I want to identify the good changes that can actually
happen, and I use institutional analysis to that end.

**

Why the Web isn't just a fashion.

The last time I spoke on the topic of the wired university, someone in
the audience told me that the Web is just a fashion, that young people
have grown bored with it, and that wild-eyed tales of the impact that
technology will have on the university are irresponsible and should be
discarded.  This was by no means the majority view in this particular
audience, but I was surprised to hear it at all.  I had imagined such
arguments to be extinct, and I was surprised at how poorly rehearsed
my answer was.

So what *is* the answer?  One can start with the obvious part: the
technology has been improving by a factor of 100 every ten years, that
being a rough equivalent of Moore's Law, and we know enough about the
technologies in the lab that we can be confident that this improvement
will continue for at least thirty years.  That's a factor of 1,000,000
improvement, enough so that we can imagine having computing devices in
the paint on the wall if we can think of a reason why we'd want that
(see Hal Abelson et al, Amorphous computing, Communications of the ACM
43(5), 2000, pages 74-83).
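
To make the arithmetic concrete, here is a tiny sketch of the compounding
involved.  The only input is the factor-of-100-per-decade figure cited
above; the code is illustrative, not an attempt to state Moore's Law
precisely:

  # A minimal sketch of the compounding described above.
  # Assumption: "a factor of 100 every ten years", compounded smoothly.
  factor_per_decade = 100
  per_year = factor_per_decade ** (1 / 10)    # roughly 1.58x per year
  thirty_years = factor_per_decade ** 3       # 100 * 100 * 100
  print(round(per_year, 2), thirty_years)     # 1.58 1000000

In other words, the factor of 1,000,000 is just the decade figure
compounded three times.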

But that's not a very good answer.  What if automobile tires became
cheaper by a factor of a million?  We'd get our tires replaced more
often, and cars would be 2% cheaper, and we'd probably have to pass
environmental laws for all the people who would burn tires for fuel.
But otherwise life would not change a great deal.  Automobile tires
are only useful in conjunction with complementary goods, such as
automobiles, that are not getting cheaper at anything like the same
rate.  (Some pundits have blamed health and safety regulations for
the failure of cars to drop in price as quickly as semiconductors.
Really.)  The same thing happens with computers in many contexts, and
I would guess that the processor chip is no longer a large percentage
of the total cost of a calculator.  Lots of people would be happy to
have a ten-times-faster processor to display Web pages more quickly,
but there's a limit to what most of them are willing to pay for it.
So it's not simply a matter of the price of the hardware coming down,
even though many people seem to regard that as an adequate argument.

A better argument starts from ubiquity.  The chips will be so cheap
that we can expect them to be embedded in everything, from cars to
appliances to furniture.  People who scoff at computers because some
of the cool kids don't surf the Web can be quieted by reminding them
that everyone in an industrial society is a "computer user" any time
they drive, cook, bank, or (in most cases) work.  But the embedding
argument isn't good enough either.  So what if computers are around
all the time?  Lots of things are around all the time.

To really answer the question, we have to talk about the ways that
people are bound into institutions.  Institutions shape identities;
they create incentives; they structure relationships; they define the
terrain over which we live out the major strategies and tactics of our
lives.  An institution is like a grammar; it doesn't tell you what to
say, but it provides the raw material for saying, and it thereby sets
rules that are hard to transcend because everyone else is expecting
them.  A computer system is also like a grammar: it defines a grammar
of the actions that its users are allowed to take.  One designs a
computer system by articulating the grammar of an institution and inscribing
it into the software.  Institutions lend themselves to computerized
augmentation, and computers, which follow rules much more rigidly than
any bureaucracy, help inscribe institutions more deeply in our lives.
Information technology is radical and conservative at the same time:
it is radical because of its exponential quantitative growth, but it
is conservative because its whole mode d'emploi reproduces or even
rigidifies institutional orders.  That's a broad generalization, of
course.  The forces amplified by computing can tear an institution
apart, or enable institutions to invade the turf of their competitors.
Every institutional order is a dynamic equilibrium whose persistence
is entirely contingent; nothing is given in the institutional world.
But amongst all the change, the relationship between computers and
institutions is pretty much the same.
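
As a deliberately toy illustration of the "grammar of allowed actions"
idea above -- the roles and actions here are invented for the example,
not drawn from any real system:

  # A minimal sketch: a computer system as a grammar of permitted actions.
  # Roles and actions are hypothetical, for illustration only.
  ALLOWED = {
      "student":    {"enroll", "submit_assignment", "view_grades"},
      "instructor": {"post_assignment", "enter_grade", "view_roster"},
  }

  def perform(role, action):
      # The software enforces the inscribed rule rigidly: anything
      # outside the grammar simply cannot be "said" in the system.
      if action not in ALLOWED.get(role, set()):
          raise PermissionError(f"{role} may not {action}")
      return f"{role}: {action} done"

  print(perform("student", "submit_assignment"))   # permitted
  # perform("student", "enter_grade")              # would raise PermissionError

The point is not the particular rules but the fact that, once inscribed,
they are enforced more rigidly than any human bureaucracy would enforce
them.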

The picture of cool teens walking away from the out-of-fashion Web is
misleading because it presents information technology as a contingent
choice, as something that one can take or leave, as a choice separate
from all other choices.  If you look at technology just as technology
then that's how it's likely to look.  But if you look at technology as
part of the social world, and the social world as a sprawling complex
of institutions that bind us together in historically specific ways,
then it's all too clear why radically improved information technology
will change the world: because the world is already changing itself
all the time, and because the world created information technology
precisely to amplify that existing logic of change.

**

You have probably guessed that my message comparing conservatives
and liberals on the subject of personal responsibility drew some
flak, and if so then you guessed right.  What most struck me about
these flak messages is their authors' profound lack of acquaintance
with liberal arguments.  They *thought* they were acquainted with
liberal arguments, of course, but they were really acquainted with
conservative caricatures of liberal arguments.  The most common
response, to my surprise, was that personal responsibility extends
only to the condition of oneself and one's family.  (None of them
explained why it extended so far as to include one's family.)  Most
of them opposed this to what they called "social responsibility",
which they implied did not exist, or treated as nebulous and more
or less optional, or explicitly equated with communism.

I thought of myself as having been hardened to the excesses of the
current jargon, but I wasn't prepared for this kind of in-your-face
selfishness.  I was expecting a different argument, that personal
responsibility consists in responsibility for one's own actions.
But no: these authors believed that one is responsible (only) for
one's own *condition*.  Nobody else's condition matters, it seems,
and only after some thrashing did I manage to extract any admission
that one is responsible for other people's conditions to the extent
that one's own actions have affected them.  I had prepared myself for
the inevitable moment, only a couple of steps of logic along, where
my own views would diverge from those of the liberals I wrote about.
But that never happened.  The people who wrote to me were unaware of,
and uninterested in, the idea that we all become responsible for one
another's lives to a certain degree because of the roles we play in
various institutions, and that we come to occupy those roles through
our socialization and acculturation, much of which is unconscious or
habitual, and because these institutions organize life in our society
so completely that our only alternative to playing our roles would be
to live in a shack in the woods.  It follows that, even if one is only
responsible for one's own actions (and not, say, for the systems that
one unfairly benefits from), one is still responsible for the condition
of all of the people who suffer from the institutions that we uphold,
collaborate with, help reproduce, and so on.  But the argument never
got to that point.  Some of our fellow citizens are pretty far gone,
I'd have to say, and we need to see that they don't take us with them.

**

My argument in "The market logic of information" turns on a subtle
point about the relationship between transaction costs and organizing
costs, and I gather that I did not explain my views on the matter
very clearly.  The issue arises, you will recall, because of Ronald
Coase's theory that the boundaries of capitalist firms are decided by
the cost of using market mechanisms (transaction costs) in comparison
to the costs of organizing productive activities by command and
control within the boundaries of a firm (organizing costs).  Fans of
capitalism often claim that the Internet will bring about a closer,
indeed a perfect, approximation to Adam Smith's idealized market by
reducing transaction costs.  The theory, however, makes it just as
likely that the Internet will increase the role of command and control
by lowering organizing costs even faster than it lowers transaction
costs.  So things start to seem less determinate.

The question, one might suppose, is which set of costs is dropping
faster.  Some people seem certain that it is transaction costs that
are dropping faster, and they can certainly point to cases where this
is true.  Fortunately, we do not need to decide the matter on that
kind of macro level.  As a broad generalization, the Internet lowers
transaction costs in a wide variety of ways, each relatively small
in proportion to the overall cost of doing business, but it lowers
organizing costs in a single big way, mainly by increasing economies 
of scale in cases where analogous activities are coordinated from a
center.  No major controversy is required, therefore, about which set
of costs is dropping faster.  Lower transaction costs, other things
being equal, break companies into smaller pieces.  Lower organizing
costs through economies of scale, other things being equal, merge
companies into larger, homogeneous units.  And together, these forces
push toward the same endpoint: focused monopolies.  Every firm spins
off the activities that it doesn't do best, and it can do so because
of reduced transaction costs, as well as because of the economies of
scale that the outsourcing firm itself is enjoying.  Then every firm
combines itself with other firms that also do what it does best, and
it can do so because of reduced diseconomies of scale in organizing.
That is the argument.
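
Since the two effects are easy to conflate, here is a toy numerical
sketch of the logic.  All of the numbers are invented for illustration;
nothing here comes from Coase or from any real data:

  # Toy illustration of the firm-boundary argument above.
  # Numbers are made up; this sketches the logic, not real firms.

  def keep_in_house(organizing_cost, transaction_cost):
      # Coase-style rule of thumb: do an activity internally only if
      # organizing it internally is cheaper than buying it on the market.
      return organizing_cost < transaction_cost

  # Before: a peripheral activity is marginally cheaper to keep inside.
  print(keep_in_house(organizing_cost=10, transaction_cost=12))   # True

  # After: the Internet lowers the cost of buying the activity on the
  # market (lower transaction costs, plus a specialist producing it at
  # scale), while the firm's own cost of organizing it internally falls
  # by less.  The activity is spun off to the specialist.
  print(keep_in_house(organizing_cost=9, transaction_cost=8))     # False

The sketch covers only the spin-off half of the argument; the
consolidation half is the same comparison seen from the specialist's
side, where growing scale keeps pushing its organizing costs down.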

**

Getting beyond automation.

I got a call one day from a reporter who was writing about a conflict
in Texas over software, marketed by Nolo Press if I recall correctly,
that helps people with their legal problems.  The Texas Bar wanted
to restrict this software, and the reporter told me that every single
expert on computers that he had consulted had laughed and jeered and
was certain that the lawyers had taken their position out of a pure,
crass self-interest in stifling competition.  I was surprised, though
I shouldn't have been, because it wasn't so obvious to me.  Some areas
of law are less settled and clear-cut than others, and in those areas
it would be irresponsible to suggest to someone that they can consult
some kind of intelligent program instead of talking to a lawyer.  I
would be disinclined to ban the software simply on free speech grounds,
even though I am not happy either, not entirely, with the idea that
software is speech.  But I wasn't overwhelmed by the simple theory
about stifling competition.  (Here is the citation for the reporter's
article: Greg Miller, A turf war of professionals vs software, Los
Angeles Times, 21 October 1998, page A1.)

This small experience set me to thinking about the attitudes of
technology enthusiasts.  Have you ever noticed that some enthusiasts
for automation seem to hate the people whose work they want to do
away with?  It's necessary, perhaps, to protect one's own feelings
by spinning a caricature of the threatened parties as bone-headed
and backwards and reactionary and selfish and static, and to laugh
at them for being caught flat-footed in the crosshairs, and to
interpret their every word and act in this light.  As a professor
I get this a lot, like the time a different reporter, for a local
business magazine, called me and ranted about professors in this
manner under the guise of asking questions.  The fact that professors
have conducted endless thousands of experiments with new technology
in their classrooms did not compute for this reporter, who went ahead
and wrote whatever propaganda she had in her head when she dialed my
number.  "Academic elites resist change" was the special of the day.
This kind of hatred is a real conversation-stopper, and I suspect
that at some level that's its purpose: who is going to listen to
someone whose every word is filtered through that kind of stereotype?

Now, I realize that many people who embrace technology do not hate the
people whose jobs the technology might affect.  But, I would propose,
those are often the people who *don't* think in terms of automation
-- that is, removing people from the system and installing computers
instead -- and who think, instead, in terms of technology as tools for
people to use.  It is true that the real efficiencies from information
technology come when the whole institution is restructured in some
way.  But that, very often, is just what the automation view misses.
If you're trying to replace people with machines, then you are not
in a mood to reorganize people and their relationships, and you are
definitely not in a mood to work with those people to learn all of
the things that they know, and that you don't know, about how the work
really gets done.  The neutron bomb method of design has been tried,
and (with a few exceptions) everything that can be done that way was
done a long time ago.  The lawyers in Texas should be welcoming legal
software as a way to help people understand what their questions are,
thus bringing them to the doors of lawyers.  But the designers of
legal software should be thinking about their products in the same
way.  The same goes for the role of computers in many other areas.
When technology enthusiasts go around hating people, let's call them
on it.  Because hatred is bad for the world, and it's bad for design.

**

Movies.  Some readers asked me why I recommended Spike Lee's new film,
"Bamboozled", given that the critics were decidedly mixed.  I'll admit
that as drama it's flat, and that Lee didn't pull off the goals he set
for himself.  Forget about that.  The movie would be worth the price
just for its collection of film clips and artefacts from the blackface
era.  It also succeeds in showing what's so offensive about blackface
and the representations that went with it.  Some critics, I gather,
complained that the movie aims at a nonexistent target, given that
nobody in real life is putting on any blackface shows.  It's hard to
believe that an actual film critic would say that.  It's a satire, and
satire by definition communicates a message by extrapolating.  Spike
Lee believes that a lot of current mass-cultural representations of
black people are similar in spirit to blackface, and he's arguing that
something as offensive as blackface is not as far below the surface as
we would like to think.  But that's too intellectual: the real point
of the movie is the visceral impact of the symbolism.  Indeed, I don't
think you get the point of the movie if you just perform some kind of
intellectual calculation of how offensive the imagery is, or if you
just engage in obligatory displays of outrage, etc.  The point, if you
are white, is to catch a glimpse, however momentary, of what it's like
to be represented in that way.

One movie that I *don't* recommend is "The Contender".  I mention it
because it's relevant to my articles about the presidential campaign.
In it, a woman senator who has been nominated to replace a deceased
vice president is smeared with allegations about her past sexual
life.  It is a truly terrible movie.  Here's what it's like: if Sidney
Sheldon were a Hollywood liberal, this is the sort of movie he would
make.  The melodrama is crude, the motivations are thin, the sets are
cramped, and nobody who knows the first thing about politics could
find any of it remotely realistic.  It's a cartoon.  Yet critics take
it seriously.  Why?  Well, if you can ignore how dreadful it is as
a movie then you can ask about the issues it addresses, questions it
raises, stances it takes, and so on.  It's about double standards and
all that.  Fine.  But my view is that you can't separate its awfulness
as a movie from its political message.  Why?  Because you can't engage
in political analysis without a realistic understanding of concrete
political processes, and this movie is clueless in that department.

First of all, the movie presents the attack on the woman senator as
the work of a single representative, clearly meant to be Henry Hyde.
In the real world, however, character assassination is organized
through sprawling, loose-knit networks of activists.  The Henry Hyde
figure would have a staff, and that staff would be in contact with
the staffs of think tanks and lower-visibility sorts of issue-advocacy
PR organizations.  The movie shows the villainous Hyde figure making
comically high-risk moves that would never happen in real life, and it
does not show the effects of the echo chamber of pundits.  It imagines
that the attack on the woman senator would be motivated purely by
her gender, and not by ideology, and it shows the senator and others
getting away with ideological views that bear no relationship to
anything that would be practicable in the political environment that
the movie claims to critique.  Finally, the movie does not show the
role of a chosen and sustained positioning of the victim, such as the
business about "Gore the exaggerator", which was picked and promoted
very consciously by the RNC over a long period.  Liberals who lack
all artistic taste may get a warm feeling from this movie, but they
will not be informed.  Indeed, this is the first movie I can remember
that made me take the side of the misogynistic right-wing Congressman.
I'm guessing that we won't have a lot more movies like that real soon.

If you want to see an exceptionally well-made movie, go see "Almost
Famous".  It probably won't be in theaters for much longer.  Set in
1973, it concerns a boy who gets an assignment from Rolling Stone
to follow a mid-level rock band, and the groupie whom he meets along the way.
The director, Cameron Crowe, took tremendous care with every aspect
of it.  The period sets are terrific in a subtle way, and the whole
movie maintains a perfect pitch.  Philip Seymour Hoffman, who plays
Lester Bangs, is the voice of the director as he explains amid heavy
sighing and self-conscious uncoolness why it is so hard for rock and
roll to keep its ideals of truthfulness in a corporate world.  The
movie resolutely eschews 60's and 70's stereotypes, going easy on the
bell bottoms and psychedelic imagery and all that.  Instead, it's a
calm story about genuine people in a crazy environment.

**

Like any editor, I often wonder who is reading this mailing list.
I know who's subscribing, but that's probably both a lot more and a lot
less than the actual readership.  Mostly I don't know.  But from time
to time I get another scrap of evidence.  Compare my message about
"campaign lunacy" of 10/9/00 to quotes from Gore campaign people that
appeared in the newspaper over the next two days:

  [George W. Bush], his staff, and most of the media are engaged in a
  campaign of character assassination.

  RRE, 10/9/00

  ... the Republican criticism of Mr. Gore's misstatements, which [Joe
  Andrew, the national chairman of the Democratic National Committee]
  described as "character assassination".

  NY Times, 10/10/00

  George W. Bush makes false statements all the time, enormous ones
  about issues that really matter to people's lives ...

  RRE 10/9/00

  "Our focus is on their distortions of major issues that actually
  impact people's lives," said Douglas Hattaway, a spokesman for
  Mr. Gore.

  NY Times 10/11/00

An alternative hypothesis, of course, is that my observations were all
obvious, and that any sane person would say the same thing.  But ask
yourself, why don't you read anything remotely similar to my campaign
articles in the newspaper?  It's sure not because of liberal bias.

I'm not pointing out these quotes as a boast.  Rather, as with much of
what I do on this list, I want to point out the possibilities of the
medium.  When I write a political commentary, people often say, "you
should send this to the New York Times".  But my goal is to make those
gatekeepers unnecessary.  There's no substitute for paying people to
write good journalism, and I certainly support that.  The people who
write about computing for the Times are still serious, even if the
political reporting has gone to the dark side of hell.  But I also
want to demonstrate that the Internet can help us interconnect the
world so that the big media are not the only way to get the word out.

**

In reply to my obnoxious claim that the Internet is playing no major
role in choosing the president this year, I received an indignant
message from a reader who works in a hotbed of Democratic moving and
shaking.  She believes that the only reason Bill Clinton is still
president is the Internet's role in getting the truth out
in the depths of his afflictions.  I can certainly believe that the
Internet did play such a role.  What she's leaving out, however, is
the role that the Internet played in creating Bill Clinton's troubles
in the first place.  The whole Whitewater hoax was endlessly stirred
by conspiracy theorists who used the Internet to pool their fetid
imaginations by sending around endless mixtures of fact, distortion,
speculation, rumor, and illogic.  Much of this material bubbled to the
surface in the conservative media, all of it edited to seem plausible
so long as you weren't aware of basic facts, and then it turned into
endless abusive investigations, investigations of the investigations,
and so on.  So what's the balance here?  Whatever it is, the Internet
doesn't come out looking good.  The same kind of junk is circulating
now as part of the campaign of character assassination against Al
Gore.  The difference this time is that the reporters just make up
lies on their own, and borrow them from their colleagues, rather than
getting them off the Internet.  The Internet is truly irrelevant this
time around.  The far right has succeeded through its lies in robbing
Al Gore of the tremendous credit that he deserves in the Internet's
creation.  It's just not clear at this point whether he should want it.

**

Some URL's.

election stuff

Campaign 2000 Online Resource Guide
http://www.rtndf.org/politicsonline/

Gore interview in Red Herring
http://www.redherring.com/mag/issue84/mag-gore-84.html

The George W. Dance  (not polite, needs big computer)
http://www.george-w-dance.homepage.com/

Bush Violated Security Laws Four Times, SEC Report Says
http://www.public-i.org/story_01_100400.htm

Bush Gets a Free Pass from Inquiring Minds
http://www.nyobserver.com/pages/story.asp?ID=3288

The Media, the Schools and Our Man Bush
http://www.star-telegram.com/columnist/ivins2.htm


everything else

E-Rate's Success Silences Critics
http://www.wtonline.com/vol15_no14/cover/1851-1.html

Toymakers Suffering from Technology
http://www.salon.com/mwt/wire/2000/10/13/toy_trouble/

SDMI Hacked
http://www.salon.com/tech/log/2000/10/12/sdmi_hacked/

Why the World Needs Reverse Engineers
http://www.zdnet.com/zdnn/stories/comment/0,5859,2636304,00.html

You Say You Want a Revolution
http://www.reedhundt.com/book/

newsletter on electronic toll collection
http://www.ettm.com/

What Makes an Online Course Succeed?
http://chronicle.com/free/2000/10/2000101201u.htm

IETF meeting, San Diego, December 10-15
http://www.ietf.org/meetings/IETF-49.html

Millennial Reflections on Computers as Infrastructure
http://www.si.umich.edu/~pne/PDF/y2k.pdf

more about campus wireless
http://chronicle.com/free/2000/10/2000101101t.htm
http://chronicle.com/free/2000/10/2000101202t.htm

end