Keynote Speech IV

Theme: The Body and Technologies

Computation as Salvation: Awaiting Binary Bodies

Hubert Dreyfus

Professor of Philosophy

University of California, Berkeley

Abstract

According to Ray Kurzweil: “Once computing speed reaches 10^16 operations per second—roughly by 2020—the trick will be simply to come up with an algorithm for the mind. When we find it, machines will become self-aware, with unpredictable consequences.” This event is known as the technological singularity. Wired Magazine tells us: There are singularity conferences now, and singularity journals. There has been a congressional report about confronting the challenges of the singularity, and [in 2007] there was a meeting at the NASA Ames Research Center to explore the establishment of a singularity university. Singularity University preaches that one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent Artificial Intelligence. Computers will become so powerful that they can model human consciousness. This will permit us to download our personalities into nonbiological substrates. When we cross this bridge, we become information. Then our bodies will be digitalized the way Google is digitizing old books, so that we can live forever as algorithms inside the global brain. And then, as long as we maintain multiple copies of ourselves to protect against a system crash, we won’t die.
This current excitement is simply the latest version of a pattern that has plagued work in Artificial Intelligence since its inception. Marvin Minsky, Director of the MIT AI Laboratory, predicted: “Within a generation we will have intelligent computers like HAL in the film 2001: A Space Odyssey.” But Minsky’s research program failed and is now known as Good Old Fashioned AI. Rodney Brooks took over at MIT. He published a paper criticizing the GOFAI robots that used representations of the world and problem-solving techniques to plan the robot’s movements. Rather, he reported that, based on the idea that the best model of the world is the world itself, he had “developed a different approach in which a mobile robot uses the world itself as its ‘representation’ – continually referring to its sensors rather than to an internal world model.” Brooks’ approach is an important advance, but his robots respond only to fixed, isolable features of the environment, not to context. It looked like AI researchers would have to turn to neuroscience and try to, as Kurzweil put it, reverse engineer the brain.
But modeling the brain, with its billions of neurons, each with on average ten thousand connections, may well require more knowledge than we now have, or may ever have, of the functional elements in the brain and how they are connected. If so, trying to “reverse engineer” the brain does not look promising. So Kurzweil had another idea. Since the design of the brain is in the genome, we could use our enormous computing power to model the brain’s DNA and then use that model DNA to grow an artificial brain. This seemed like a relatively sensible proposal, but developmental neuroscientists were outraged. Here’s a typical response: “Kurzweil knows nothing about how the brain works. Its design is not encoded in the genome: what's in the genome is a collection of molecular tools … that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell to cell interactions, of which we understand only a tiny fraction. We have absolutely no way to calculate … all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell.”
Why then are Kurzweil’s speculations concerning the singularity accepted by elite computer experts and by seemingly responsible journalists? It seems to be the result of poor logic driven by a deep longing. Here religion and technology converge. As one author puts it, “the singularity is the rapture for nerds.” Hardheaded naturalists desperately yearn for the end of our world where our bodies have to die, and eagerly await the dawning of a new world in which our bodies will be transformed into information and so we will achieve the promise of eternal life. As an existential philosopher, I suggest that we should give up this desperate attempt to achieve immortality by digitalizing our bodies and, instead, face up to our finitude.

I. Introduction

There is a new source of excitement in Silicon Valley. According to Ray Kurzweil:

Once computing speed reaches 10^16 operations per second—roughly by 2020—the trick will be simply to come up with an algorithm for the mind. When we find it, machines will become self-aware, with unpredictable consequences. This event is known as the technological singularity.[1]

Kurzweil’s notion of a singularity is taken from cosmology, in which it signifies a border in space-time beyond which normal rules of measurement do not apply (the edge of a black hole, for example).[2]

Kurzweil’s excitement is contagious. Wired Magazine tells us:

There are singularity conferences now, and singularity journals. There has been a congressional report about confronting the challenges of the singularity, and [in 2007] there was a meeting at the NASA Ames Research Center to explore the establishment of a singularity university. … Attendees included senior government researchers from NASA, a noted Silicon Valley venture capitalist, a pioneer of private space exploration and two computer scientists from Google.[3]

In fact:

Larry Page, Google’s … co-founder, helped set up Singularity University in 2008, and the company has supported it with more than $250,000 in donations.[4]

Singularity University preaches a somewhat different story from the technological singularity, which requires the discovery of an algorithm. It goes like this: one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent Artificial Intelligence, infinitely smarter than any of us individually and all of us combined; it will become alive in the blink of an eye, and take over the world before humans even realize what’s happening.

That is, our bodies will be digitalized the way Google is digitizing old books, so that we can live forever as algorithms inside the global brain. In the technological world envisaged by Kurzweil and the Singularians:

[C]omputers [will] become so powerful that they can model human consciousness. This will permit us to download our personalities into nonbiological substrates. When we cross this…bridge, we become information. And then, as long as we maintain multiple copies of ourselves to protect against a system crash, we won’t die.[5]

Yes, it sounds crazy when stated so bluntly, but according to Wired these are ideas with tremendous currency in Silicon Valley; they are guiding principles for many of the most influential technologists.

I have no authority to speak about the possibility that computers will digitalize our bodies, thereby offering us eternal life, but I do know a good deal about the promises and disappointed hopes of those who have predicted that computers will soon become intelligent. I hope to save you from rushing off to Singularity University by recounting how the current excitement is simply the latest version of a pattern that has plagued work in Artificial Intelligence since its inception. To judge whether the singularity is likely, possible, or just plain mad, we need to see it in the context of the half-century-long attempt to program computers to be intelligent. I can speak with some authority about this history since I’ve been involved almost from the start with what in the 1950s came to be called AI.

II. Stage I: The Convergence of Computers and Philosophy

When I was teaching Philosophy at MIT in the early sixties, students from the Artificial Intelligence Laboratory would come to my Heidegger course and say in effect: “You philosophers have been reflecting in your armchairs for over 2000 years and you still don’t understand how the mind works. We in the AI Lab have taken over and are succeeding where you armchair philosophers have failed. We are now programming computers to exhibit human intelligence: to solve problems, to understand natural language, to perceive, to play games and to learn.” Phil Agre, a philosophically inclined student at the AI Lab at that time, later lamented:

I have heard expressed many versions of the proposition …that philosophy is a matter of mere thinking whereas technology is a matter of real doing, and that philosophy consequently can be understood only as deficient.

I had no experience on which to base a reliable opinion on what computer technology could and couldn’t do, but as luck would have it, in 1963 I was invited by the RAND Corporation to evaluate the pioneering work of Allen Newell and Herbert Simon in a new field called Cognitive Simulation. Newell and Simon claimed that both digital computers and the human mind could be understood as physical symbol systems, using strings of bits or streams of neuron pulses as symbols representing the external world. Intelligence, they claimed, didn’t require a body, but merely required making the appropriate inferences from internal mental representations. As they put it: “A physical symbol system has the necessary and sufficient means for general intelligent action.”[6]
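
To give a concrete flavor of what a physical symbol system amounts to, here is a minimal sketch in Python of the kind of representation-and-inference loop the hypothesis envisages; the facts, rules, and names are my own illustrative inventions, not Newell and Simon’s actual programs.

# A toy illustration of the physical symbol system idea: the world is
# represented as symbol structures (facts), and intelligence is modeled as
# rule-governed inference over them. The facts and rules are invented.

facts = {("block", "A"), ("block", "B"), ("on", "A", "B")}

def derive(fact_set):
    """Apply simple inference rules to derive new symbolic facts."""
    derived = set()
    for fact in fact_set:
        if fact[0] == "on":            # ("on", x, y) means x sits on y
            _, x, y = fact
            derived.add(("above", x, y))
            derived.add(("supports", y, x))
    return derived

# Forward chaining: keep applying the rules until no new symbols appear.
while True:
    new_facts = derive(facts) - facts
    if not new_facts:
        break
    facts |= new_facts

print(sorted(facts))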

As I studied the RAND papers and memos, I found to my surprise that, far from replacing philosophy, the pioneers in Cognitive Simulation had learned a lot, directly and indirectly, from the philosophers. They had taken over Hobbes’ claim that reasoning was calculating, Descartes’ idea that the mind manipulated mental representations, Leibniz’s idea of a “universal characteristic”—a set of primitive features in which all knowledge could be expressed—Kant’s claim that concepts were rules, Frege’s formalization of rules, and Russell’s postulation of logical atoms as the building blocks of reality. In short, without realizing it, AI researchers were hard at work finding the rules and representations needed for turning rationalist philosophy into a research program.

At the same time, I began to suspect that the critique of rationalism formulated by philosophers in existentialist armchairs—especially by Martin Heidegger and Maurice Merleau-Ponty—as well as the devastating criticism of traditional philosophy developed by Ludwig Wittgenstein, were bad news for those working in AI. I suspected that by turning rationalism into a research program, AI researchers had condemned their enterprise to reenact a failure.

III. Symbolic AI as a Degenerating Research Program

It looked like the AI research program was an exemplary case of what philosophers of science call a degenerating research program: a way of organizing research that incorporates a basically wrong approach to its domain, so that its predictions constantly fail to pan out and its adherents stand ready to abandon the approach as soon as they can find an alternative. I was particularly struck by the fact that, among other troubles, researchers were running up against the problem of representing relevance in their computer models, a problem that Heidegger saw was implicit in Descartes’ understanding of the world as a set of meaningless facts to which the mind assigned what Descartes called values.

Heidegger warned that values are just more meaningless facts. To say a hammer has the function of hammering leaves out the relation of hammers to nails and other equipment, to the point of building things, and to the skills required when actually using a hammer. Merely assigning formal function predicates like “used in driving in nails” to brute facts, such as that hammers weigh five pounds, couldn’t capture the meaningful organization of the everyday world in which hammering makes sense.

But Marvin Minsky, Director of the MIT AI Laboratory, unaware of Heidegger’s critique, was convinced that representing a few million facts about a few million objects would solve what had come to be called the commonsense knowledge problem. In 1968, he predicted: “Within a generation we will have intelligent computers like HAL in the film 2001: A Space Odyssey.”[7] He added: “In 30 years [i.e. by 1998] we should have machines whose intelligence is comparable to man’s.”[8]

It seemed to me, however, that the deep problem wasn’t storing millions of facts; it was knowing which facts were relevant in any given situation. One version of this relevance problem was called “the frame problem.” If the computer is running a representation of the current state of its world and something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which would have to be updated? For example, if I put up the shades in my office, which other facts about my office will change? The intensity of the light, perhaps the shadows on the floor, but presumably not the number of books.
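
A toy sketch in Python may make the difficulty vivid. The office model, the facts it lists, and the raise_shades routine below are hypothetical, chosen only to show where the problem bites.

# A toy world model: the office as a dictionary of represented facts.
# The facts and the raise_shades action are invented for illustration.
office = {
    "shades": "down",
    "light_level": "dim",
    "shadows_on_floor": False,
    "number_of_books": 412,
}

def raise_shades(world):
    world = dict(world)
    world["shades"] = "up"
    # The frame problem: which of the remaining facts must the program
    # re-examine, and which may it assume are unchanged? Here the programmer
    # has decided by hand; a general intelligence would have to determine
    # relevance for any action in any situation.
    world["light_level"] = "bright"
    world["shadows_on_floor"] = True
    # "number_of_books" is left alone, but nothing in the representation
    # itself says that it, rather than some other fact, is the irrelevant one.
    return world

print(raise_shades(office))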

Minsky suggested that, to avoid the frame problem, AI programmers could use what he called frames: descriptions of typical situations, like going to a birthday party, in which only the relevant facts were listed. The frame for birthday parties, for example, required the program, after each new guest arrived, to check those and only those facts that were normally relevant to birthday parties (the number of presents, for example, but not the weather) to see whether they had changed.
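
Spelled out as a data structure, such a frame might look something like the following sketch; the slots and the update routine are my own illustrative guesses rather than Minsky’s notation.

# A toy frame for a birthday party: only the slots the frame designer deemed
# relevant are listed, so only those facts ever get re-checked.
birthday_party_frame = {
    "slots": ["guests", "presents", "cake"],   # checked after each event
    "ignored": ["weather", "stock_prices"],    # never consulted
}

def update_on_new_guest(frame, current, previous):
    """Re-check only the facts the frame marks as relevant."""
    changed = {}
    for slot in frame["slots"]:
        if current.get(slot) != previous.get(slot):
            changed[slot] = current.get(slot)
    return changed

before = {"guests": 5, "presents": 5, "cake": "uncut", "weather": "rainy"}
after = {"guests": 6, "presents": 6, "cake": "uncut", "weather": "sunny"}
print(update_on_new_guest(birthday_party_frame, after, before))
# The weather change is never noticed, by design. But the program still needs
# some way to decide that this frame, rather than the restaurant frame, fits
# the current situation, and that is where the regress begins.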

But a system of frames isn’t in a situation, so in order to select the possibly relevant facts in the current situation one would need a frame for recognizing the current situation as a birthday party, that is, for telling it apart from other social events such as ordering in a restaurant. But how, I wondered, could the computer select from the thousands of frames in its memory the relevant frame (the social-events frame, say, for selecting the birthday-party frame) so as to see the current relevance of, for example, an exchange of gifts rather than of money? It seemed to me obvious that any AI program using frames to organize millions of meaningless facts so as to retrieve the currently relevant ones was going to be caught in a regress of frames for recognizing relevant frames for recognizing relevant facts, and that, therefore, the frame problem wasn’t just a problem but was a sign that something was seriously wrong with the whole approach of seeking to select a de-situated frame to give meaning to a specific event in a specific situation. Indeed, Minsky has recently acknowledged in Wired that AI has been brain-dead since the early 1970s, when it encountered the problem of commonsense knowledge.[9]

Terry Winograd, the best of the AI graduate students back then, unlike his colleagues at MIT, wanted to try to figure out what had gone wrong. So in the mid-1970s Terry and I began having weekly lunches to discuss the frame problem, the commonsense knowledge problem, and other such difficulties in a philosophical context.

After a year of such conversations Winograd moved to Stanford where he abandoned work on AI and began teaching Heidegger in his Computer Science courses. In so doing, he became the first high-profile deserter from what was, indeed, becoming a degenerating research program. That is, researchers began to have to face the fact that their optimistic predictions had failed. John Haugeland refers to the AI of symbolic rules and representations of that period as Good Old Fashioned AI—GOFAI for short—and that name has been widely accepted as capturing its status as an abandoned research program.

IV. Seeming Exceptions to the Claim that AI Based on Features and Rules Has Totally Failed: The Success of Deep Blue and (Perhaps) Jeopardy

But the history of computer chess, many claim, makes my criticism of AI look misguided. I wrote in 1965 in a RAND report that computers currently couldn’t play chess well enough to beat a ten-year-old beginner. The AI people twisted my report on the limitations of current AI research into a claim that computers would never play chess well enough to beat a ten-year-old. They then challenged me to a game against MacHack, at the time the best MIT program, which to their delight beat me roundly.

Things stayed in a sort of standoff for about twenty years. Then, given the dead end of programming common sense and relevance, researchers redefined themselves as knowledge engineers and devoted their efforts to building expert systems in domains divorced from everyday common sense. They pointed out that in domains such as spectrograph analysis, rules elicited from experts had enabled the computer to perform almost as well as an expert. They then made wild predictions about how all human expertise would soon be programmed. At the beginning of AI research, Yehoshua Bar-Hillel called this way of thinking the first-step fallacy. Every success was taken to be progress towards their goal. My brother at RAND quipped, “It's like claiming that the first monkey that climbed a tree was making progress towards flight to the moon.”

It turned out that competent people do, indeed, follow rules, so the computer could be programmed to exhibit competence; but masters don’t follow rules, so expertise was out of reach. In spite of the optimistic predictions based on early successes in simplified formal domains, there were no expert systems that could achieve expertise in the messy everyday world.
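
The flavor of such a rule-based expert system can be conveyed by a minimal Python sketch; the domain, rules, and feature names below are invented for illustration and are not drawn from any actual system of the period.

# A toy rule-based expert system: hand-elicited IF-THEN rules applied to the
# measured features of a sample. Real systems encoded hundreds of such rules;
# the ones below are purely illustrative.

def diagnose(sample):
    """Apply expert-elicited rules to a dictionary of measured features."""
    if sample.get("peak_at_28") and sample.get("peak_at_32"):
        return "likely compound X"
    if sample.get("peak_at_28"):
        return "likely compound Y"
    return "unknown -- refer to a human expert"

print(diagnose({"peak_at_28": True, "peak_at_32": False}))  # likely compound Y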

Then, to every AI programmer’s delight, an IBM program, Deep Blue, beat Garry Kasparov, the world chess champion. The public, and even some AI researchers who ought to have known better, concluded that Deep Blue’s masterful performance at chess showed that it was intelligent. But in fact Deep Blue’s victory did not show that a computer running rules gotten from masters could beat the masters at their own game. What it showed was that computers had become so fast that they could win by brute-force enumeration. That is, by looking at 200 million positions per second, and so examining all possibilities as many as seven moves ahead, and then choosing the move that led to the best position, the program could beat human beings, who could consider at most about 300 relevant moves in choosing a move. But only in a formal game, where the computer could process millions of moves without regard for relevance, could brute force win out over intelligence.
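
The contrast between brute force and intelligence can be made concrete with a small Python sketch of exhaustive game-tree search. The toy take-away game below stands in for chess purely for illustration; Deep Blue’s actual search used special-purpose hardware, a hand-tuned evaluation function, and pruning, none of which is modeled here.

# Brute-force minimax: enumerate every continuation down to a fixed depth and
# choose the move that leads to the best evaluated position. The game is a toy:
# players alternately take 1 or 2 stones; whoever takes the last stone wins.

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def evaluate(stones, maximizing):
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    return 0  # positions cut off at the depth limit are scored as neutral

def minimax(stones, depth, maximizing):
    if stones == 0 or depth == 0:
        return evaluate(stones, maximizing), None
    best_value = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in legal_moves(stones):
        value, _ = minimax(stones - move, depth - 1, not maximizing)
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move

# Search seven moves ahead from a pile of ten stones, echoing Deep Blue's
# seven-move look-ahead at a vastly smaller scale.
print(minimax(10, 7, True))   # (1, 1): a forced win, found by taking one stone

Nothing in this search knows which lines of play are relevant; it simply enumerates all of them to the depth limit, which is exactly the sense in which brute force replaces intelligence.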