Some notes on new computer users, artificial intelligence, memes, and
attentional economics, plus follow-ups and URL's.


Beginners as a scarce resource.

Forget koala bears.  The endangered species whose conservation most
concerns me is people who have never used computers.  We need them.
Those of us who know how to use computers have been corrupted; we have
accommodated ourselves to bad interface design, conceptual confusion,
primitive operating systems, and ludicrous security and trust models.
Why have we put up with it?  Well, first we assumed that it was our
own fault, and then we forgot about it.  In doing so, we have stunted
ourselves, shriveled our imaginations, and steered our civilization
into a blind alley.  We are in trouble, and our only hope lies in the
few remaining members of our species who have not yet been made stupid.

It's true: people who have never used computers are the last flickering
flame of humanity.  We must sit at their feet and learn what they have
to teach us.  Whenever we invent a new version of our sorry gadgets,
we should respectfully ask a few of these wise people to sacrifice
their minds by learning how to use them.  (I'm amazed that introductory
computer science courses, unlike other bizarre medical experiments,
don't require human subjects releases.)  And as they "learn", as their
minds are slowly taken from them, we should minutely document each step.
Their every "confusion", their every "mistake", is a precious datum.

As my own small contribution to rebuilding our civilization, I want to
sketch a few of the phenomena that I have observed when, forgive me,
I have taught people how to use computers.  I'll start with something
that happens on the Macintosh.  The Macintosh, like every other
computer, is a language machine.  But new users face a chicken-and-
egg problem: they don't know how to use the computer, so they need to
ask questions; but they can't ask questions without knowing what the
words mean; and the only way to learn what the words mean is by using
the computer.  Now, language is about distinctions.  And the Macintosh
interface is governed by several distinctions that sound the same to
beginners but are importantly different.  They are: (1) an application
being open or closed, (2) a window being open or closed, (3) a window
being selected or unselected, and (4) a window being visible or hidden.
Granted, only psychopaths tell beginners about (4).  But that leaves
three importantly different distinctions.
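
For the record, the four distinctions can be separated out as a toy
data model -- my own sketch, nothing resembling the real Macintosh
internals -- which also shows why a beginner's desktop can go blank
while an application quietly stays open:

```python
# A toy model (not any real Mac API) of the four distinctions:
# (1) application open/closed, (2) window open/closed,
# (3) window selected/unselected, (4) window visible/hidden.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Window:
    open: bool = True        # distinction (2)
    selected: bool = False   # distinction (3)
    hidden: bool = False     # distinction (4)

@dataclass
class Application:
    running: bool = False    # distinction (1)
    windows: List[Window] = field(default_factory=list)

app = Application(running=True, windows=[Window(selected=True)])
app.windows[0].open = False                   # closing the window...
assert app.running                            # ...does not quit the application
assert not any(w.open for w in app.windows)   # blank desktop, app still open
```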

The difference between (1) and (2) is built on another, more basic
distinction, an application versus a window.  One of the very hardest
things for new Macintosh users to understand is that an application
can be open, even though it does not have any currently open windows.
Almost as hard is the idea, built on the distinction among all three
of them, that you can select a different window without closing the
one that's currently selected (and maybe losing all of your work).

So beginners get into the following loop: they start one application,
do some work in the window that gets opened, close that window, start
a different application, do some more work in that new window, close
that window, try to start the first application again, and then get
befuddled when no window opens.  When they ask for help, their desktop
is blank: it has no open windows.  They do not have the language to
explain how it got that way, and the helper starts talking technical
language at them that they don't understand.  (To make matters worse,
the Macintosh is actually inconsistent: some applications allow zero
windows to be open, while others automatically close themselves when
you close their open window, thus helpfully conflating (1) and (2).)

Far from being dumb, the beginner in this case has been following
a perfectly rational beginner-logic, which is based not on concepts
and distinctions but on things they can see and do.  They reason, "to
get it to do this, I do that".  The distinction between an application
and a window is a hard thing to see.  It's only visible if you have
multiple windows open in the same application, which beginners rarely
do, or if you pull down the menu in the upper right hand corner of the
screen, whose meaning is far too abstract for beginners to comprehend.

Here is another thing that beginners do.  They're sitting there at
the keyboard, one hand on the mouse, and you're standing next to them,
"helping" them.  Either they've gotten themselves into trouble, or
they're just learning a new feature of the machine.  In either case,
society has defined the situation, quite perversely, in terms of
your great authority and their shameful cluelessness.  You point at
the screen.  You say "see that box?".  And before you can say another
word, they click on something in the general vicinity of the box.
You say "no".  And before you can say another word, they click on
something else in the general vicinity of the box.  This cycle then
repeats three or four times, with both of you becoming increasingly
agitated as they destroy everything in sight.  They're hearing your
"no, no, no, no" as a God-like rebuke that condemns them to eternal
damnation, and you're watching complete irrationality that you can't
seem to stop.

The underlying problem, evidently, is that the feeling of not-knowing-
what-to-do is intolerable.  They think they ought to know what to do,
and they feel stupid because they don't, so they guess.  And I'm not
just talking about weak-minded, insecure people who lack confidence in
other areas of their lives.  Absolutely not.  There's something about
computers that reduces even powerful human beings to jello.  What's
worse, in many people this great fear of not-knowing-what-to-do is
combined with an equally great fear of breaking something.  (Although
I haven't seen this myself, several people have told me stories about
friends or family members who encountered an error message such as
"illegal operation" and thought that they were going to jail.)  The
resulting mental state must be really something.

The distressing sense of having broken something is in fact something
that I can vaguely recall myself.  The first program that I ever
wrote, circa 1974, was a BASIC routine to print out the prime numbers.
It printed "2", then set a variable to 3, and looped, adding 2 each
time until it became equal to 100, which of course didn't happen.
I was extremely pleased as the 110 baud teletype with its all-caps
print ball and scroll of yellow paper printed out the prime numbers
up to 97, and then I was horrified when it printed 101 and kept going.
The machine was clattering away, there in the terminal room, printing
prime numbers on beyond 200 and 300.  I didn't know how to stop it.
I went to the guy at the help desk, who for some strange reason lacked
my sense of urgency about the problem.  He came over, showed me the
ridiculously obscure Univac code to stop the machine (break @@X CIO
return, if I recall), and walked away.  I was certain that I had used
up all $100 in my computer account, and I started plotting how I was
going to pass the course, given that I wasn't going to be able to use
the computer any more.  So I guess we can sort of remember.  But not
really.  I'm sure I can't get back the full horror of the situation,
much less adjust my methods of teaching to accommodate similar horrors
in the experience of others.
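
For anyone who wants to relive the experience, here is a sketch of the
bug -- in Python rather than the original BASIC, and reconstructed
from memory: the counter starts at 3 and steps by 2, so it stays odd
and can never become *equal* to 100.

```python
# Reconstructing the 1974 runaway loop: an equality test ("!= limit")
# on a counter that skips the limit never fires.

def is_prime(n):
    """Trial division -- good enough for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def primes_with_bug(limit=100, safety_cap=1000):
    """The loop as written: n is always odd, so n != 100 forever."""
    printed = [2]
    n = 3
    while n != limit and n < safety_cap:  # safety_cap stands in for the helpdesk
        if is_prime(n):
            printed.append(n)
        n += 2
    return printed

def primes_fixed(limit=100):
    """The intended behavior: stop once the counter passes the limit."""
    printed = [2]
    n = 3
    while n <= limit:                     # an inequality test terminates
        if is_prime(n):
            printed.append(n)
        n += 2
    return printed
```

The fixed version stops at 97; the buggy one sails past 101 just as
the teletype did.
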

Here's another pattern, one that starts to put beginnerdom in context.
You've got a beginner -- someone who swears that machines spontaneously
break when she walks into the room -- who is married to a gadget freak
-- someone who enjoys having lots of machines around and actually likes
it when none of them quite works.  This happens a lot.  Now, beginners
need stability.  Because of the nature of beginner logic -- "if I do
this, it does that", or what urban geographers call "path knowledge"
-- the beginners don't know what the behavior of the machine depends
on.  So if things change -- configurations, versions, switches, wires,
phone lines, whatever -- then the beginner's empirical regularities
fall apart.  The gadget freak identifies with the inside of the machine
and cannot even see the surface of it; the beginner knows nothing but
the surface of the machine and regards the inside as terra incognita.
That's because the gadget freak knows all of the distinctions that
the machine depends on while the beginner knows none.  The two of
them, freak and phobe, can live in this equilibrium situation forever,
with the freak constantly undermining the stable conditions that the
beginner would need in order to learn.

The situation of the beginner is explicable, finally, in terms of the
largest of technical contexts: the endless clashing of gears among
standards.  I tell students that computers "almost work", because they
are guaranteed to run up against some kind of stupid incompatibility
that the rest of us have long since learned to work around.  The most
common of these stupid incompatibilities might concern character sets.
Think of the gibberish you get when you copy and paste a Word document
into any of a number of other applications with different ideas about
what (for example) a double-quote is.  Prepare a Web page in Word.
Or copy a block of text from a Web page that happens to include an
apostrophe, and then paste it into Telnet.  Granted, these situations
provide professors with opportunities to explain concepts like the
competitive pressures for and against compatibility.  But I've never
met a student who could care less about this topic when they're trying
to get their first Web page working.
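
To see the double-quote problem concretely, here is a minimal sketch
-- the particular encoding names are my assumption about the usual
culprits: Word's curly quotes live in the Windows-1252 character set,
and a program that assumes some other character set sees something
else entirely, or nothing it can handle at all.

```python
# Word-style curly quotes, as stored in Windows-1252 (cp1252).
word_text = '\u201cHello\u201d'          # "Hello" with curly double quotes
wire_bytes = word_text.encode('cp1252')  # the bytes another program receives

# An application that assumes Latin-1 decodes the same bytes as
# control characters -- the visible gibberish:
latin1_view = wire_bytes.decode('latin-1')
assert latin1_view != word_text

# And one that insists on plain ASCII simply fails:
try:
    wire_bytes.decode('ascii')
    failed = False
except UnicodeDecodeError:
    failed = True
assert failed
```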

Those are a few of the phenomena that provide us with faint clues
about the blessed world of timeless wisdom in which beginners live.
Other clues can be found in beginners' attempts to read computer
manuals, although that's an area where most of us can recover some of
the primal helplessness that is the beginner's lot.  Just today, for
example, I tried to figure out how to update my anti-virus software.
Hoo boy.  I believe that much more research is required in this area.
And it's not just a matter of incremental fixes in the basic windows-
and-mouse interface.  We need to be open to the possibility, indeed
the certainty, that computers as we know them today embody ideas about
people's lives that are shallow, muddled, and immoral.  We will learn
this, if at all, from the "confusions" and "mistakes" of beginners.


Markets are changing drastically with the Internet and other related
technologies, but my impression is that structural problems prevent
the field of economics from understanding the phenomena as fully as it
might.  Economics is divided into two schools: a unified and dominant
neoclassical school and a fragmented and marginal institutional school.
Each school defines the word "market" differently.  For neoclassicists
a market consists of a set of buyers, a set of sellers, and a good.
For institutionalists a market consists of a framework of laws, rules,
customs, and norms.  Neoclassicists treat a technology as a function
that tells you how much output you can produce for a given amount
of certain inputs.  Institutionalists treat a technology as a body
of skill and knowledge that an organization has learned by using it.

You might observe that the schools are both right.  Neoclassicists
oversimplify markets because they treat the institutional framework
as trivial, ignoring its complexities or treating them as transaction
costs.  Institutionalists oversimplify markets by dwelling too much
on the stasis of institutions; they cannot conceptualize the emergent
aspects of market structure.  Neoclassicists are strangely obsessed
with vindicating their premises; institutionalists spend way too much
of their time exploding them.

A middle ground does emerge between the two schools in the depths of
game-theoretic analyses of information.  But everyone's still taking
themselves way too seriously.


How bad philosophy lives through our computers.

The New York Times printed a curiously pointless article the other
day about the current state of research in artificial intelligence.
Do you suppose that this article was related to the forthcoming
Spielberg movie on the subject?  I'm sure we'll see publicity-driven
articles about AI as the movie approaches.  Do you suppose the movie
will include a rational computer that discovers emotions?  I'm bored
already.  At least the Times article had it right: AI turns out to be
a technology of niches.  AI is not about making intelligent computers,
and it hasn't been for many years.  It's about developing a specific
class of algorithms that require data of very particular sorts.  As
sources of data multiply in the world, AI finds more corners where it
can be useful.  Most technologies are like that.  What's misleading is
the idea that AI people produce machines with any sort of generalized
intelligence.  With a small number of non-mainstream exceptions, they
don't even try.  Which is fine, or would be if they changed the name.

I wrote my dissertation in AI, as long-time subscribers to this list
know.  I went into AI because I actually thought that it was a way to
learn about people and their lives.  It is, it turns out, but only in
reverse: the more one pushes against AI's internal logic, the more one
appreciates the profound difference between human beings and any sort
of computers that we can even imagine.  What I really learned during
my time in AI was how to learn things in reverse by intuiting just how
impossible some things really are, and by listening to the technology
as it refuses to do the things it is advertised to do.  I've already
told several stories along these lines, but I haven't told the hardest
one.  It happened in 1985.  I had been experimenting with the simple
practice of writing out very detailed stories from everyday life --
absolutely ordinary events that had happened in the course of washing
the dishes, taking out the trash, walking to work, and so on.  AI had
focused on stereotypically brainy activities like playing chess, but
it was clear even then that something fundamental was missing.  As I
wrote out these stories, I became deeply impressed by their power to
punch through the AI way of talking about people's lives.  It's a hard
thing to explain unless you've done it: the sudden realization that
one's own absolutely commonplace everyday experience contradicts, and
in the most transparently understandable way, everything that one has
learned in school.

I drew a large number of conclusions from these exercises, and it is
important that these conclusions were all intertwined: breaking with
only a few of the standard AI assumptions would not have sufficed,
because they tend to reinforce one another.  Breaking with all of your
field's assumptions at once, however, tends not to be a swift career
move.  First of all, nobody will have the slightest idea of what you
are talking about.  Second of all, it will turn out that you haven't
broken with 100% of the assumptions after all, and that the remaining
assumptions, still unarticulated, will render your revolutionary new
research enterprise internally incoherent.  Third of all, the methods
and techniques and infrastructures and evaluation criteria of your
field will all be geared to the existing assumptions, so that you will
have to reinvent everything from scratch, which of course can't be
done.  In short, you're hosed.

Faced with this sort of impossible situation, slogans are important.
If you can't explain it as engineering, then at least you can put on
a good show.  This is a longstanding tradition in AI, and it was my
only chance.  So I was dead-set on building a system that illustrated
my technical ideas in some "domain" that symbolized ordinary everyday
routine activity -- something that was the symbolic opposite of chess.
That meant -- and this is obvious somehow if you know the history of
AI -- that my program had to make breakfast.  Even if the technical
message didn't connect, the sheer fact that the program made breakfast
would be enough for the time being.

I don't have to go into the technical message of the breakfast-making
project, except for two details.  It was going to be important that
the program decide from moment to moment what it was going to do,
based on real-time perception of the breakfast-making equipment in
front of it.  In other words, even though we think of making breakfast
as something very routine, that routine was going to emerge from a
process that was basically improvised.  It was going to be important
as well that the system not be omniscient: that it be reliant on
its senses for most of its knowledge about the precise states and
arrangements of its materials from moment to moment.

If this latter premise doesn't seem radical, then you don't know much
about AI.  One of the really profound, long-standing problems with AI
is a conflation between the mind and the world.  This confusion takes
many different forms -- so many, in fact, that the problem is almost
impossible to explain accurately.  Suffice it to say that the standard
theory of knowledge in AI is that you have a "world model" in your
head -- something that possesses all of the things and relationships
of the real world, except that it's made out of mind-stuff.  This view
of knowledge is deeply ingrained in the field.  The problem is that,
as a technical matter, it does not work.  Building and maintaining
a world model is computationally very burdensome.  Of course it can
be done if the world is sufficiently simplified and controlled, and
so AI research has tended to stick with "worlds" that are simplified
and controlled in just the way that world-model-maintaining algorithms
require.  Although these "worlds" do resemble some of the exceedingly
artificial environments of industry, they bear little relationship to
the world of everyday life.

People who care about industry are happy with this arrangement, but I
cared about everyday life.  So I threw out the standard AI theory of
knowledge and perception, and I tried to build entirely new theories,
based on the experiences that I had written down in my notebook.  It
turned out, most surprisingly, that the key is our embodiment: we are
finite; we occupy specific locations and face in specific directions;
and we can deal quite well with our environments without knowing just
where we are, what time it is, the precise identities of everything
around us, and all of the other objective information that is required
to build a world model.  Our knowledge of the world has a superficial
quality: we deal largely with the surface appearance of things as they
are evident to us in our particular standpoints in the world, and we
build more complex and objective representations of the world, such
as paper maps or mental internalizations of activities that involve
reading paper maps, only on relatively rare occasions.  The fact that
we have bodies and live immersed amidst the things of the world turns
out to be important from a computational perspective: being embodied
and located, and dealing with things as they come, and as they present
themselves, is computationally easier than building and manipulating
world models.

This is a deep and weird idea, but it remains nothing but a woolly
intuition (or so AI says) until one builds a system that concretely
illustrates it in practice.  So I set out to build a system that,
being embodied amidst the physical things of breakfast-making, could
make breakfast.  At first I explored doing all of this robotically.
We had robots; we had software to drive the robots; we even had robots
with three-fingered hands, which was easy enough mechanically but not
at all easy mathematically and computationally.  So I thought for a
while that my dissertation would be about the theory of fingers, and
I made several studies of people picking up telephones, holding forks,
and otherwise exercising their fingers.  It quickly became apparent
that people were taking advantage of every single property of their
hands, including the exact degree of "stickiness" and "stretchiness"
of their skin -- neither of which was even remotely amenable to known
computational methods or implementable in known materials on robots.

The computational impossibility of the things people were doing
with their hands was intriguing: either people were doing insanely
complicated mental calculations of the stretchiness of their skin, or
else people learned how to use their hands by trial and error without
ever doing such calculations.  Because the latter theory makes much
more sense, I set about theorizing the paths by which people evolved
routine ways of using their hands.  (By "evolved" I don't mean in
genetic terms but in the course of their individual lives.)  I became
an expert on the (strikingly unsatisfactory) physiology of the hand,
and I read the literature on the neurophysiology of hand control.
In short order these exercises produced interestingly generalizable
principles for the evolution of routine ways of doing things, which
I set about codifying.

By now, of course, the thought of implementing any of this on a
real robot was hopelessly impractical.  So, following another long
tradition in AI, I turned to simulation.  I wrote a series of programs
that simulated the movements of human hands, whereupon I discovered
that the impossible-to-compute interactions between the skin of
people's hands and the tools and materials of breakfast-making were
still impossible to compute, and therefore impossible to simulate.
Having been chased entirely out of the territory of studying how
people used their hands, I fastened on the problem of modeling the
processes by which routines evolve.  I made numerous detailed studies
of this question, again based on story-telling work in my notebook.
It became clear that complex forms of routine activity, such as making
breakfast, evolve over long periods, and that the successive steps in
the evolutionary process could be more or less categorized.  This was
an excellent discovery, but it was still just an idea in my notebook,
rather than something I knew how to build on a computer.

I tried again, this time building a program to simulate a simplified
"breakfast world".  This, too, is a time-honored tradition in AI: the
simplified simulated world that you can pretend captures the essence
of the real-world activity you claim to be interested in.  The problem
(as you already know) is that the simplifications distort the reality
to make it fit with the computational methods, which in turn are badly
wrong-headed.  And so it was in my own project.  I couldn't simulate
hands, so the "hand" became a simple pinching device that had magical
abilities to grab whatever it wanted and exert whatever leverage it
wanted upon it.  But then I had to simulate things like the pouring of
milk and cornflakes, the overflowing and spilling of containers, the
mixing of things that were poured together, the dropping of things on
floors, and all of the other events that would make life in breakfast
world meaningful.  Well, the physics of spilling, dropping, mixing,
and all the rest is nearly as bad as the physics of skin, and so all
of that stuff got simplified as well.

Other simplifications slipped in along the way.  When you or I make
breakfast in the real world, we are at arm's length from most of the
materials and equipment involved (dishes, corn flakes, sink, toaster,
table, utensils, countertops, etc).  We are (as I insisted upon above)
*amidst* those things.  In particular, we see them in perspective
projection.  They occlude one another.  When we move our heads, they
take up different relationships in our visual fields, and the visual
cortex (which is in the back of your head) does a lot of complicated
work on the images that result.  The visual cortex does not produce
anything like a three-dimensional model of the whole world, and indeed
one of the research questions is where exactly the boundary is located
between the automatic, bottom-up, reflex-like processing of the visual
cortex and the active, top-down, cognitively complicated, improvised
kind of thinking that makes use of the results (which happens in the
front of your head).  But before I could answer this question, I had
to simulate the real-world processes of perspective projection that
cause those breakfast-images to be formed on your retina in the first
place, as well as the first several stages of automatic processing of
them.  All of which was, of course, insanely difficult in computational
terms.  Again I had to simplify.  Out went the perspective projection.
The simulated robot now looked out at breakfast-world in orthogonal
projection, meaning that it was no longer "amidst" the breakfast-stuff
in any meaningful sense, and furthermore that breakfast-world had now
become two-dimensional.  And the simulation now happened in high-level,
abstract terms of the sort that low-level vision might be imagined to
deliver as its output, and not in anything like the real messy physics.
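
The difference between the two projections is easy to state in a few
lines of code -- a toy single-point sketch of my own, not anything
from the actual simulation: in perspective projection the screen
position depends on depth, while orthogonal projection simply throws
the depth away, which is exactly why the robot was no longer "amidst"
anything.

```python
# Pinhole-camera (perspective) projection: screen position scales with 1/z,
# so nearer things loom larger and farther things shrink.
def perspective(x, y, z, focal=1.0):
    return (focal * x / z, focal * y / z)

# Orthogonal (orthographic) projection: z is simply discarded,
# flattening the scene into two dimensions.
def orthographic(x, y, z):
    return (x, y)

# The same point at two depths:
near = perspective(1.0, 1.0, 2.0)   # depth changes the image...
far = perspective(1.0, 1.0, 4.0)
assert near != far

# ...but not under orthogonal projection:
assert orthographic(1.0, 1.0, 2.0) == orthographic(1.0, 1.0, 4.0)
```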

I despaired because I saw what was going on: all of the traditional
AI assumptions that I was trying to reject were creeping back into
my project.  My breakfast-making "robot" no longer had a body in any
meaningful sense.  Its "hands" were caricatures that just hung there
in space.  All of the materials and equipment of breakfast-making had
become so simplified that there was no longer anything much for anyone
to learn about them.  States of the world had become discrete and
well-defined in a way that was completely artificial.  In fact, the
"world" had become indistinguishable from AI's idea of a simplified
mental representation of the world.  Things were not going well.

All through this story, I was explaining the grand philosophy and
infuriating day-to-day logic of my project to another student, David
Chapman.  Being possessed of a pragmatic streak that I lack, David
observed that breakfast-world was being relentlessly transformed into
a video game.  He liked the whole story about improvised activity and
perceptual architecture, and one day he was in Santa Cruz watching
someone play an actual video game when inspiration struck; he went
home and wrote the program that I had been trying to write, except
that the program played a video game rather than making breakfast.
Programming environments were sufficiently powerful back then (this
was in the age before Microsoft when giants strode the earth and Unix
was regarded as a joke) that the task wasn't even particularly hard.

I was horrified, of course, the whole everyday-life breakfast-making
symbolic-ideological dimension of the project having been exchanged
for a "domain" that involved killing (albeit penguins who kick ice
cubes at attacking bees).  But when you're an engineer it's hard to
argue with results, and David had results where I had none.  I gave
up.  We then sat down in David's office, talked through a series
of video-game scenarios in terms of their (tangential, problematic,
and confusing) relationship to the whole everyday-routine-activity
theory, extended the program a bit, and were simultaneously impressed
and disgusted with ourselves.  I went home in a state of exasperated
resignation, and David wrote a conference paper about the program and
put my name on it.  That paper is now one of the most cited papers in
the history of the field, though not always for the reasons we would
have hoped.  The visual-system architecture that we pioneered is now
conventional in the field, and we sometimes even get credit for it.
But the whole philosophy of computation, improvisation, knowledge, and
people's everyday lives is still the province of armchair philosophers.

What happened?  One problem was technological.  Robot manipulators
and visual systems weren't remotely well enough developed.  Computers
weren't powerful enough to perform the simulations needed to do a real
job of breakfast-world, and I had to write layers of general-purpose
simulation software packages that in a perfect world would already
exist.  Another problem was scientific: although the scientists had
pretty compelling hypotheses about the architecture of the relevant
neurophysiology, they didn't know the details, and they certainly
didn't have the canned software routines that I needed to simulate it.

Those problems were enough to drive me toward trivialized, schematic
simulations rather than real robotics in the real, physical world.
In particular, they were enough to drive me toward simulations of
breakfast-world that looked identical to the standard AI story about
mental world models.  Whether the simulation of breakfast-world
was part of my simulation of the human mind or part of my simulation
of the world that the simulated mind interacted with was pretty
much arbitrary.  This happens a lot in AI: the engagement between
simulated-person and simulated-world is a software construct, and
nothing about the software forces anyone to keep the simulated mind
and world separate, or to interpose complex perceptual and motor
apparatus between them.

Those dynamics would have been enough to wreck my project, but I think
that the problem actually runs deeper.  Let us return to my theory.
As people, we live in a world that is complicated and only moderately
controlled; as a result, our actions are necessarily improvised to a
great extent and our knowledge of the world is necessarily superficial
and incomplete.  Industrial machines, by contrast, live in a world
that is rather simple and highly controlled; as a result, actions can
be planned to a considerable extent and knowledge can be just short
of complete.  Industrial machines, moreover, are generally not mobile.
Their relationships to their environments are generally stable, and so
they are rarely in any doubt as to the identities of the things they
are dealing with.  The main exceptions are parts-sorting tasks that
are themselves highly controlled.  For decades now, the philosophy of
industrial control has been to maintain a correspondence between the
physical world and a digital model of that world; that is what control
theory is about, and that whole approach feeds naturally into the AI
methodology of creating and maintaining world models.
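
The control-theory picture can be caricatured in a few lines -- a toy
one-dimensional plant of my own invention, not real control
engineering: on every tick the controller re-synchronizes its digital
model from a measurement of the world, plans against the model, and
then acts on the world.

```python
# A proportional controller maintaining correspondence between a
# "plant" (the physical world) and its internal model of that world.
def run_control_loop(setpoint=10.0, steps=50, gain=0.5):
    plant_state = 0.0     # the physical world
    model_state = 0.0     # the controller's world model
    for _ in range(steps):
        model_state = plant_state          # measure: re-sync model to world
        error = setpoint - model_state     # plan against the model...
        plant_state += gain * error        # ...then act on the world
    return plant_state

final = run_control_loop()
assert abs(final - 10.0) < 1e-3   # the loop converges on the setpoint
```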

And this is not just about AI: all of computer science works this way.
Ever since it began, computer science has been geared to creating and
maintaining detailed models of the world.  At least control theory
(known to most people as cybernetics) posits a constant, moment-to-
moment interaction between controller and plant; it is a relationship
of control rather than reciprocity, but at least it's a real-time
engagement of some sort.  Computer science doesn't even make this kind
of relationship of control easy; the interaction between the digital
realm inside the machine and the physical and social world outside the
machine has always been fraught, to the extent we consider ourselves
lucky that we can deal with computers through primitive keyboards and
buttons.  Descartes had a hard time explaining the relation between
the mind and the physical world; it had to do with the pineal gland,
which somehow, uniquely among all organs, had the capacity to interact
causally with both the spiritual realm of the mind and the physical
realm of the body.  The keyboard, windows, and mouse are the pineal
gland of the modern computer.

The great themes of intellectual history are not conspiracies; they
are not even matters of choice.  They reproduce themselves on their
own steam, coded not into single words or propositions but into what
Nietzsche famously called "a mobile army of metaphors, metonyms, and
anthropomorphisms -- in short, a sum of human relations, which have
been enhanced, transposed, and embellished poetically and rhetorically,
and which after long use seem firm, canonical, and obligatory".  We
do not hover above history watching this army's progress; we don't even
live on the ground with the army marching past us.  Rather, the army
marches through us and by means of us.  It is reproduced in our gossip
and our dissertations, in our politics and our job interviews, in our
computer programs and the language by which we set about selling them.
And exactly because computers are language machines, the great themes
of intellectual history are reproduced through our machines, coded
into them beneath a veneer of logic, so that another generation can
think that they are inventing it all for the first time.


Large numbers of .com companies are trading for well under $1 a share.
It wouldn't cost much to buy them all and shut them down.  But it's
okay: every important new technology goes through this stock-bubble
phase, transferring wealth from the dumb to the quick.  The point is
to get back to the democratic vision of the Internet that was common
sense before the failed experiment with the advertiser-supported Web
got started.  There's nothing wrong with someone making an honest buck
on the Internet, of course.  What's wrong is identifying the making of
bucks as the essence of the medium.


Concepts as raw material.

My research has evolved to the point that it's unclassifiable.  One of
my own colleagues told me last week that it isn't even research!  That
was nice.  I suppose you could say that I'm a theorist, except that
(1) the word carries so many bad connotations, (2) I'm not interested
in the problematics that organize research in social theory these days,
and (3) I would rather write in broadly accessible language than in
theory-speak.  But you know, that's what tenure is for.  I'm making a
difference and if my work doesn't fit the usual classifications then
that's how it goes.  I'll just call what I'm doing "public design" and
get on with it.

I listed some of the tenets of public design last week.  This week I
want to talk about the raw material of public design, namely concepts.
By "concepts" I really mean "analytical categories", that is, concepts
that are thoroughly embedded in the history of social theory, but that
only fully take on a meaning when they are applied to the analysis of
particular cases.  A public designer fashions concepts that pose the
kinds of questions that the designers of new information technologies
need to ask, including the questions that they haven't thought of yet.
It's important, in my view, to be knowledgeable in the ways of concepts,
and to be good at choosing among them.  To show what I mean, I want
to explain what's wrong with two widely-used concepts, "memes" and
"attention economics", and what it would be like to start fixing them.

The concept of a "meme" starts life inauspiciously by analogy to a
terribly reductionist theory of genetics, namely that genes are out
for themselves and organisms exist to reproduce them.  Now, it is
not at all clear that the concept of "genes" is even a useful way to
explain the seriously complicated things that go on with reproduction
(see Evelyn Fox Keller's new book), but put that aside.  The idea
is that genes are abstract units of genetic material (DNA sequences
or something more complicated) that somehow code for equally abstract
attributes of an organism.  Some genes die out when they are selected
against, and so the genes that are still around are doing something
right.  Maybe organisms are just tricks that genes use to reproduce
themselves.  Alright.  "Memes" are supposed to be analogous: discrete
units of intellectual material (sequences of words or something
more complicated) that play certain unspecified roles in human life.
Some memes die out when cultures don't pass them along, and so the
memes that are still around are doing something right, and maybe the
social world exists as a set of tricks that memes use to reproduce
themselves.

It's clever in an adolescent way, this inversion of the usual story.
What it really does, this inversion, is to point out that the usual
story is itself inadequate.  Obviously one needs a story that moves
on different levels at once.  A biological theory needs levels, such
as ecology, physiology, and genetics.  The necessary theory on each
level must explain what it demands of the levels below and above it.
This is hard work, both because of the difference in scale and because
the concepts required to explain biochemistry (for example) differ in
kind from those required to explain ecology.

It's the same with memes.  You can *say* that culture is memes' trick
for reproducing themselves, but that doesn't tell us anything about
how culture works.  As with biology, you need a theory that operates
on several levels and explains the relation between them.  What's
more, when you actually start the task of explaining the full range
of social mechanisms by which memes reproduce, the concept of a meme
starts to break apart.  That's because there are lots of different
social mechanisms, whose principles of operation are quite diverse.
A good way to talk about this diversity is in terms of institutions.
Every institution defines its own complex of roles and relationships,
its own repertoire of activities and genres, its own body of rules
and customs, and so on.  In particular, an institution *defines* the
people, places, and things that appear within it.  You don't have
doctors without a medical system, or students without a school system,
or intellectual property without intellectual property law, just as
you don't have a pitcher without baseball or a goalie without soccer.

Every institution, in other words, defines its own variety of memes,
and the dynamics into which memes enter might differ considerably
from one institution to the next.  We can make this more concrete
by pointing at the diverse genres of communication that institutions
employ.  Every meme, when it actually gets communicated, takes some
generic form: an aphorism, a political speech, a knock-knock joke,
a theorem, a blues song, a tragedy, etc.  Each of these genres plays
its own part in a certain institution, or in a fairly small number of
institutions.  The social mechanism by which a meme of the blues gets
reproduced is quite different from the social mechanism by which a
meme of mathematics gets reproduced.  You have different social roles,
different kinds of publication and performance, different incentives,
different rules for who gets to quote whom and how, different kinds
of influence and fashion, and so on.  The concept of "meme" promises
that we can formulate meaningful generalizations about the methods by
which memes reproduce, but once we investigate the question seriously
it becomes unclear whether such generalizations are likely to exist.

Does that mean that we can generalize about institutions instead,
that institutions and not memes are actually the natural kinds of
the social world?  Yes, actually, up to a point.  We can observe that
institutions tend to reproduce themselves, and we can list some of
the typical mechanisms by which they do so, for example by shaping the
cognition of their participants.  But "institution" is an analytical
category, and not the sort of concept they have in physics, and so the
real action only takes place when the categories and generalizations
are brought to bear in analyzing a particular case.  We can study how
the university reproduces itself, or political parties, or the stock
market, and we can also study how those institutions change over time,
and all of the many generalizations about institutions can serve as
rules of thumb and as rough templates that suggest questions to ask
and the general form of explanations to offer, assuming of course the
facts turn out to fit the patterns that the theory suggests.  What
happens in practice is that the institution is a sprawling place, and
with a good strong concept you can find *something* in the institution
that fits the pattern.  Then someone else working from a different
set of categories finds something different -- not incompatible,
one hopes, but a different take on things -- and then some theorist
becomes renowned by explaining how the different analytical approaches
can be reconciled.  In classes I give students a handful of concepts
each week, tell them to write me 1000 words about how those concepts
apply to the particular institution they know about, and then in the
last few weeks I tell them to look at the overall picture that has
emerged week-by-week and write thirty pages about it.  This works.

So you see what's wrong with the concept of a "meme" -- why it does
not function adequately as an analytical category.  Observe that I
did not refute the "theory of memes" by providing evidence against
it.  Rather, I explained why a "theory of memes" isn't even possible,
at least not in a way that would be worth bothering with.  Of course,
it is still possible that someone will discover strong generalizations
about memes that cut across institutional fields in a way that is
entirely unforeseen.  Zipf's Law turns up in the darnedest places,
so who knows.  But from the perspective of the savoir-faire about
concepts that has been built up in the practice of public design, it
does not seem likely.
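The Zipfian pattern just mentioned is easy to state concretely.  Here
is a minimal sketch in Python (the counts and the rank cutoff are
arbitrary choices for illustration, not anything from the text): under
an ideal 1/r falloff, rank times frequency stays roughly constant.

```python
# Zipf's Law: the r-th most frequent item occurs with frequency
# roughly proportional to 1/r, so rank * frequency is nearly constant.
# We build an ideal Zipfian frequency table and check that invariant;
# real corpora (the usual demonstration) fit it only approximately.
N = 1000                                  # frequency of the top-ranked item
freqs = [N // r for r in range(1, 11)]    # ideal 1/r falloff, ranks 1..10
products = [r * f for r, f in enumerate(freqs, start=1)]
print(freqs)      # [1000, 500, 333, 250, 200, 166, 142, 125, 111, 100]
print(products)   # every entry within 1% of 1000
```

That the same invariant shows up in word counts, city sizes, and web
traffic is exactly why it "turns up in the darnedest places".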

The critique of attention economics is similar, and you can probably
recite it as well as I can.  The core argument is that people's minds
need to focus, that we have a hard time doing several things at once,
and that "attention" is therefore a scarce commodity.  Everyone has
only 24 hours in a day, and so the situation invites economic concepts
of the allocation of scarce resources.  I have a limited amount of
attention to allocate, and so I devise a strategy, make choices, trade
some options against others, and so on.  Starting from this point,
it is easy enough to spin out a whole abstract theory of attentional
economics.

The problem is the car wreck that follows immediately when we try
to apply this abstract theory to real situations in the world.  The
things we attend to are diverse, and they are diverse in the same way
that memes are diverse: the school things we attend to are different
from the family things, which are different from the legal things, and
so on.  They are "different" in all of the ways that institutions are
different, but in particular they are different in the ways that
attentional economics cares about.  Some of them are long-term
states that we can remain adequately aware of without ever devoting
any particular moment to them, just because the necessary information
can be inferred from information that comes to us for other purposes.
Others are more like patterns that emerge from information that we
encounter by chance -- not randomly, since the world around us has its
regularities that make exposure to the pattern predictable, but not
by any explicit decision to allocate our attention either.  And there
are lots of other configurations -- different ways in which we can be
said to be "aware" of things in the context of particular institutions
with their particular kinds of roles and relationships and situations.

Now, you could say that this kind of diversity is exactly what the
theory of attentional economics is about.  But the point is, you are
not going to be able to explain the substance, the particulars, of the
"attentions" that people maintain, without going into the particulars
of the institutions that define them.  The same thing goes for the
theory of memes: you can attempt a taxonomy of the different ways in
which ideas get reproduced across the diverse environments provided
by different institutions.  But you are going to have to characterize
that diversity.  The point is not that it's impossible: of course you
can study attention, and of course you can study ideas and how they
get reproduced.  What's required is to take institutional concepts as
a starting-point, and then construct the needed concepts within that
analytical context.  Will you have any generalizations when you get
done?  No, if you mean generalizations in the sense of physics or even
the sense of neoclassical economics with its aggressive leveling of
diverse social phenomena to an old-fashioned physics of equilibrium.
Yes, if you mean analytical frameworks that can be taken to particular
cases and used to make sense of the attentional and intellectual
phenomena that are to be found constituted within those particular
institutionally organized settings.

To construct this analytical framework, you will definitely need to
conceptualize the diversity of forms that attention and ideas take
in different institutional settings, and this is what concepts like
genre are for.  Perhaps you can start developing additional concepts
that help describe what you see in the particular cases you study.
That will be good.  But it will be quite different from the theories
of memes and attentional economics, which try to go it alone, building
whole theories on the slender foundations of single small ideas taken
out of all context.  It takes more than a few concepts to describe
social life.  It takes dozens, easily.  But the best place to start
is with what I call medium-sized concepts: concepts like institution
and genre that differentiate the various contexts of human life in
a way that supports large numbers of generally useful rules of thumb.
There are lots of general lessons to learn about public design, but
that's one of them.


People read here and elsewhere about digital civil liberties issues
and they ask me "what can I do?".  My answer is, "pick something".
I tell them to pick a specific small issue, learn all about it, set
up a Web page, make contact with others who are doing the same thing,
make sure that their level of commitment is sustainable, and settle
in for the long haul.  The goal is to know more and last longer than
the other side, spread information, make a nuisance of yourself, keep
it on a low boil, don't get burned out, and the world will slowly
catch up with you.

A good example of this strategy is the following article:

It's about a retiree who has committed himself to the cause of public
access to police records.  Granted, this is a complex issue, one of
those classic open-records-versus-privacy things, but one where there
are plenty of clear-cut areas where the public ought to be able to
get access to information that their tax money is paying to produce.
Of course the cops call him a nut, complain that he's never satisfied,
etc.  But I'm sure he regards such comments as the rewards of the job.

The importance of the advice to "pick something" is profound.  When
you first develop a concern with a political issue, your unconscious
mind is telling you that you're all alone and that it's you against
the whole world.  Sure, you know about the ACLU.  You read the news.
But that doesn't affect your basic belief system.  What affects your
basic belief system is picking an issue and then making contact with
the other people who have picked issues.  Once you feel yourself part
of a network, and once you feel the positive energy that flows in a
network of like-minded issue advocates, then your belief system will
sort itself out and you'll believe in democracy.  The problem lies in
the gap between your initial sense of existential isolation and your
eventual hooking-up with the like-minded.  I wonder how we can use the
Internet to help people bridge that gap.  What if everyone who calls
the ACLU could get referred to a low-overhead online institution that
suits them up for combat and wires them with a hard-bitten network of
allies in one minute flat?  Once the news spreads through the culture
that such a thing is possible, lots more people will step forward.


An article in the New York Times heralds the arrival of 400-channel
digital Time-Warner television to New York:

I hate television, if that doesn't make me some kind of elitist jerk,
and the thought of 400 channels is about the most repulsive thing I
can imagine.  But hey, if someone else finds value in it then who
am I to judge.  Something does bother me about the NY Times article,
though, and I suspect about the reality that it reports: its constant
mention of addiction:

  If more people seem to be walking around the city glassy-eyed these
  days, it may be a symptom of remote-itis.  Or more precisely, from
  the more than 400 channels of television that come from the click of
  those buttons.

  DTV is quickly becoming New Yorkers' latest addiction.

  Dawn and Sal Fahrenkrug of Flushing, Queens, and their 13-year-old
  daughter, Dara, say they did not use to watch much television.
  Then they joined the DTV cult.  "I couldn't live without it," said
  Ms. Fahrenkrug ...

  Many people say they still watch the same amount of television,
  although the newfound variety has turned them into addicts, watching
  everything from Animal Planet to Sino Television.

  Time Warner has decided to charge digital subscribers less per
  premium channel (a couple of dollars per month, instead of $6.95),
  on the assumption that the new options (there are about 200) mean
  that consumers will sign up for many, many more.  In short, they
  are banking on addiction.

  Sarah Gibson, a convert in Murray Hill, finds herself in a trance
  flipping from channel to channel.

Call me what you will, but I don't think addiction is funny.  Either
these people are just making a joke, in which case it's not a good
joke, or they do really mean addiction, in which case it's really not
a good joke.  And it's not just about digital television.  I've been
in the room when corporate marketing people talked seriously about
wanting to make their product addictive, as if this were a good thing.
Of course it *is* a good thing from a financial standpoint, if you're
the dealer.  But it's not a good thing in terms of the lives it ruins,
the minds it blots out, the families it wrecks, the good deeds that
go undone, and the general level of blight that addiction brings to
the culture.  People in 12-step groups often see the whole culture in
terms of addiction, and I know just what they're talking about.  When
you've got thousands of advertising campaigns promoting addictive
thought-forms and behavior, you can't possibly have a healthy culture.

Now, I realize that television, like most addictive things, also has
healthy uses.  And I favor banning or medicalizing addictive things
that have no useful purpose.  Television is protected by free speech,
so our qualms about this talk of addiction aren't going to rise to
the level of banning anything.  Even so, I don't want to think that
a world of infinite bandwidth is also a world in which every last
potential for addiction has been thoroughly exploited.  And even if
it's not, I don't want people going around making jokes about it.

(Someone's going to ask me what I thought of Steven Soderbergh's new
film, "Traffic".  "Traffic" is an intelligent, serious, involving,
technically sophisticated film, worthier than anything the studios
normally make.  The film interweaves four stories, the best of which
stars Benicio Del Toro, who is stunning as a Mexican cop, and its
attempt to paint a sprawling canvas of the drug war sometimes works.
But it also suffers from a weak script with generic dialogue, flat
characters, excessive speechifying, and plot holes you could drive
a truck through -- the action too often turns on people being two
notches dumber than they would be in real life.  Ultimately the drug
war sweeps the director away just as it sweeps away his characters,
and the film ends up feeling indecisive.  As with a lot of good
Hollywood movies, you'll enjoy yourself if you turn your brain off.)


In my comments on Kahn and Wiener's "Year 2000" book, which appeared
in 1967, I asserted that Moore's Law originated at a later date.  It
would seem that I was wrong; Intel's Web site claims that Moore first
presented his Law in 1965:

Intel, though, starts its graph only with the 4004, which was circa
1972, not acknowledging the particular computers that Moore must have
had in mind.  In fact I had thought that Moore formulated his law in
1972, so maybe I was actually thinking of the 4004's introduction.

It also seems that I was uncharitable to Kahn and Wiener on a closely
related point.  They regarded as a "conservative" a prediction that
"computer capacities [would] continue to increase by a factor of ten
every two or three years until the end of the century".  That's wrong
if "computer capacities" means processor speed.  But on the previous
page (as I had noted) they defined computer capacity as the product
of processor speed and memory capacity.  By that measure they were
about right, with computers becoming more powerful by something in
the vicinity of 15 orders of magnitude.  Since Moore did predate them,
I assume that they were following his estimates.  The remainder of my
assessment still holds, however.  For example, the idea that computers
would become intelligent after a 15-orders-of-magnitude improvement
(by their measure) was entirely unreasonable, and did not come true. 
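Kahn and Wiener's "conservative" rate does compound to a range that
brackets the roughly 15 orders of magnitude mentioned above.  A quick
arithmetic check, using only the dates and rate from the text:

```python
# "A factor of ten every two or three years" from 1967 to 2000
# compounds to between 10^(33/3) and 10^(33/2): roughly 11 to
# 16.5 orders of magnitude, a range containing the ~15 orders
# of improvement by their speed-times-memory measure.
years = 2000 - 1967          # 33 years
low = 10 ** (years / 3.0)    # one order of magnitude per 3 years
high = 10 ** (years / 2.0)   # one order of magnitude per 2 years
print(years)                 # 33
print(low, high)             # 1e11 to about 3.16e16
```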


Some URL's.


Justice Unrobed

Untruisms of 2000

High Court's Florida Decision Was Based on Political Distortion

Right and Wrong

Almost Everything We Thought About the Florida Recount Is Wrong!

Black Precinct in Gulf County Theorizes about Botched Ballots

Blacks' Votes Were Discarded at Higher Rates, Analysis Shows

If the Vote Were Flawless...

Bush's Coup: The Fall of Democracy in America

A Badly Flawed Election

A Racial Gap in Voided Votes

other political stuff

Photographs of Signs Enforcing Racial Discrimination

Keeping an Eye on the Conservative Internet Media

origins of the Whitewater hoax

theological papers on projective identification

intellectual property

Copy Protection for Prerecorded Media

The Web in 2001: Paying Customers

Hatch's New Tune

everything else

The Future of Telecommunications and Networking

"The depth and degree of the universal hatred of unsolicited
commercial e-mail by Internet users is amazing."

the Pentium IV mess

Group Plans to Launch Online K-12 Curriculum

Companies Turning Cool to Telecommuting Trend