Why the future needs Bill Joy

A Response to Bill Joy’s "Why the future doesn’t need us"

Bill Joy is worried that robotics, genetic engineering, and nanotechnology will drive us to extinction. But if we know ourselves, humbly work hard, dig for truth, and love each other, we might just tame the galaxy and live happily ever after. Most likely, we’ll muddle through with only a few major catastrophes.

May 20, 2000

By Tihamer Toth-Fejel ttf@rc.net

In the autumn of 1998, Sun Microsystems co-founder Bill Joy became anxiously aware of the dangers facing the human race. For some of us, that awareness happened much earlier, possibly because we weren’t busy building a successful computer company. In 1978, Gerard O’Neill had just delivered me from the despair brought on by the Club of Rome report, which had predicted the demise of the human race through overpopulation, pollution, and resource depletion. Because one of the fundamental and incorrect assumptions of the "Limits to Growth" mindset was that our environment was limited and closed, I took a few days off from grad school to lobby in Washington, DC for an improved U.S. space program. After one of the House hearings at which Princeton scientist Gerard O’Neill testified, the L-5 Society sponsored a party, to which I was lucky enough to get invited. Between the hors d'oeuvres, I ran into another engineering grad student, Eric Drexler, whom I knew by his reputation for work on solar sails. He told me about building very small robots that would fit inside each one of our cells, making it possible for us to live for hundreds of years.

"Sure, Eric," I thought, convinced that he was just spouting science fiction. I broke off to talk to someone more grounded in reality. But every year or so I’d run into him, and he’d be spouting the same nanotech nonsense, and eventually his ideas started sounding almost reasonable.

In 1983, while working on my EE master's thesis, Self-Test: From Simple Circuits to Self-replicating Automata, I found that an essential ingredient of self-testing is the "divide and conquer" strategy -- before you can test an entire system, you must test its subcomponents, then you must test its sub-sub-components, and so on. As the parts got smaller and smaller, I found myself doing chemistry instead of electrical engineering, so I gave up and turned to the more comfortable world of self-replicating and self-modifying software. At the Houston ISDC conference that year, I complained to Eric about my problem. With his characteristic dry wit, he said, "God has pretty good quality control on atoms." His point was that mountains, rivers, and planets don't process information, metabolize matter and energy, maintain homeostasis, or self-replicate. This is because they can't adequately build, test, repair, and otherwise manipulate their smallest subcomponents. Carbon-based life forms, on the other hand, can do all those things because they test and repair themselves down to the ultimate elemental subcomponent level -- atoms.
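That "divide and conquer" recursion is easy to sketch in code. Below is a minimal illustration (in Python; the Component class and its check() predicates are hypothetical stand-ins for real test circuitry, not anything from my thesis): a system passes its self-test only if its own local check passes and every subcomponent, recursively, passes as well.

```python
# Minimal sketch of hierarchical ("divide and conquer") self-test.
# Component and check() are illustrative stand-ins for real test
# hardware -- the point is only the recursive structure.

class Component:
    def __init__(self, name, check, subcomponents=()):
        self.name = name                      # human-readable label
        self.check = check                    # local test: () -> bool
        self.subcomponents = list(subcomponents)

    def self_test(self):
        # Test the subcomponents first, then the whole.
        if not all(sub.self_test() for sub in self.subcomponents):
            return False
        return self.check()

# A board is sound only if both of its chips test good.
chip_a = Component("chip A", check=lambda: True)
chip_b = Component("chip B", check=lambda: True)
board = Component("board", check=lambda: True,
                  subcomponents=[chip_a, chip_b])
print(board.self_test())   # True; flip any check to watch it fail
```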

Then Eric wrote "Engines of Creation", explaining his ideas about nanotechnology in detail, and I finally "got" it. As it did for Bill Joy, the subject had a major impact on me, but in my case I became seriously depressed for many months, worried that gray goo was going to destroy all civilization -- worse than overpopulation, pollution, or nuclear war ever could.

But by then, there were a number of other nanocognizanti in Silicon Valley, some of whom were not afraid of thinking in terms of this dangerous technology. One of the first things they discovered was that designing gray goo was more difficult than first imagined. In fact, in the half century since Von Neumann described the first model for machine self-replication, nobody has been able to implement it. While some scholars have claimed that it was so simple that it wasn’t worth doing, experts at the robotics labs across the country seem to agree that it is very difficult. It is certainly difficult to design a self-replicating machine that can "go wild". At the same time, if we could design an autotrophic self-replicating machine, then it would be relatively easy to design it in such a way (logically analogous to four-stranded DNA) that mutations would be impossible.
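To make that last point concrete, here is a toy sketch of the four-stranded-DNA idea (in Python; the blueprint string and the SHA-256 digest are my illustrative choices, not a real nanomachine design): the replicator carries a digest of its own blueprint and refuses to copy itself if the blueprint no longer matches, so a corrupted unit halts instead of breeding mutants.

```python
# Toy model of mutation-resistant self-replication: the machine
# checks its blueprint against a digest recorded at "birth" and
# halts rather than propagate a mutation. Purely illustrative.
import hashlib

def digest(blueprint: bytes) -> str:
    return hashlib.sha256(blueprint).hexdigest()

class Replicator:
    def __init__(self, blueprint: bytes, expected: str):
        self.blueprint = blueprint
        self.expected = expected

    def replicate(self) -> "Replicator":
        if digest(self.blueprint) != self.expected:
            raise RuntimeError("blueprint corrupted; refusing to replicate")
        return Replicator(self.blueprint, self.expected)

bp = b"assemble arm; assemble controller; copy blueprint"
parent = Replicator(bp, digest(bp))
child = parent.replicate()       # blueprint intact: replication succeeds
child.blueprint = b"gray goo"    # simulate a mutation
# child.replicate() would now raise instead of spawning a mutant
```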

Soon afterwards, I got a job in Artificial Intelligence, so I learned enough to criticize Drexler’s AI projections in "Engines of Creation", and he backed down from them, a little. After all, as he himself points out, predicting new engineering is relatively safe, while new science is completely unpredictable. And since understanding intelligence requires new science, we cannot predict it. On the other hand, building an atomically identical copy of a person (a "xox") is an engineering problem, and there is no reason that such a person wouldn’t be just as human as a clone or a baby conceived via in vitro fertilization.

In the April 2000 issue of Wired (http://www.wired.com/wired/archive/8.04/joy_pr.html) "Why the future doesn’t need us", Bill Joy wrote (and I include his complete text):

From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things.

Ray and I were both speakers at George Gilder's Telecosm conference, and I encountered him by chance in the bar of the hotel after both our sessions were over. I was sitting with John Searle, a Berkeley philosopher who studies consciousness. While we were talking, Ray approached and a conversation began, the subject of which haunts me to this day.

I had missed Ray's talk and the subsequent panel that Ray and John had been on, and they now picked right up where they'd left off, with Ray saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that, and John countering that this couldn't happen, because the robots couldn't be conscious.

While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray's proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.

If it’s any consolation, Ray is now worried about Moore’s Law too. Is John? Probably not. Ray is a brilliant man and a technological genius. But his understanding of metaphysics is weak, and he doesn’t understand the difference between an accidental characteristic and an essence. On the other hand, from some of the things that John says, I doubt that he has ever been involved in any advanced software development. But he needs to be, to connect his philosophical knowledge to technological reality.

It's easy to get jaded about such breakthroughs. We hear in the news almost every day of some kind of technological or scientific advance. Yet this was no ordinary prediction. In the hotel bar, Ray gave me a partial preprint of his then-forthcoming book The Age of Spiritual Machines, which outlined a utopia he foresaw - one in which humans gained near immortality by becoming one with robotic technology. On reading it, my sense of unease only intensified; I felt sure he had to be understating the dangers, understating the probability of a bad outcome along this path.

I found myself most troubled by a passage detailing a dystopian scenario:

THE NEW LUDDITE CHALLENGE

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

All things? Only if we can precisely define what we do – which we can’t. All work? Define work. What is real work? What does Pope John Paul II mean when he talks about us being co-creators with God? Why do we work in the first place?

Unfortunately, not only is Kaczynski (whom Bill starts quoting here) inventing new science and new metaphysics, he is starting from a nebulous and highly questionable premise, so any conclusions he draws are very suspect. He also sets up a false either/or dichotomy. With his mathematical background, Kaczynski logically followed through on his premises, and became the Unabomber. The thing that scares me is that Joy accepts Kaczynski’s premises!

If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines.

Somebody has been watching too many Frankenstein and Terminator movies. Not that I don’t enjoy them myself. Terminator II is one of the best movies ever made. And it has a lot to say about our future, too (more on that later).

It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones.

If we learn enough to understand what intelligence really is -- i.e. if we understand enough to make machines intelligent -- then we will learn enough to make ourselves more intelligent too.

Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

Why is it so certain that humans will become dependent on their machines? Won’t humans have a choice about their dependence? Of course we have a choice – if we don’t suffer from greed, laziness, or pride. What about the Amish? What do we really need? Do we really need anything other than truth and love?

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses;

Yeah, just like the Internet gives the elite more control over the masses. How do these technical elite get where they are? They study and work hard, and they have a firm grasp on at least one sufficiently large slice of reality (unlike Kaczynski).

and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system.

Not only does the false premise lead to a questionable conclusion, but in this case a murderous one. Does the phrase "useless mouths" sound familiar? It should, if you are familiar with German history in the last century. But every human being is intrinsically worth more than the entire physical universe. Kaczynski’s unstated assumption measures human beings as if they were "human doings", and it is not just wrong but a fatal error to degrade human persons into merely the work they can produce. This is the same fundamental mistake that the Nazis and Communists made, and in fact that capitalists also make. It is very similar to the one made by Merian’s Friends and NARAL, in which a person’s value is decided by whether or not they are wanted by someone with power over them.

If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite.

Euthanasia is being sold as "humane", as is partial-birth abortion. Both of them measure a living human being’s worth by his or her usefulness or desirability. The Nazis killed for the "good" of the fatherland. Is there much of a difference? Like Kaczynski, Joy makes the error of measuring humans by what they do, instead of what they are. That is why he is so distraught by Kaczynski’s relentless logic -- he has no defense. Then again, Joy has done a lot with his life, and if I may venture a little psychoanalysis based on nothing more than his vulnerability to Kaczynski’s argument, I’d say that Bill gets much of his self-worth from his considerable accomplishments, instead of from the simple fact that he exists. And it is his existence, pure and simple, without all his contributions to the good of the world, which makes him intrinsically valuable.

Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone's physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes "treatment" to cure his "problem."

It is true that nanotechnology could theoretically satisfy all our physical needs, but that is already largely true of today’s technology. In fact, the quality of life has improved worldwide for the past century or two (since the industrial revolution). But Kaczynski’s blather needs to be closely questioned. How will all children be raised under "psychologically hygienic conditions" when everybody screws up every day? What a deluded dreamer! Besides, who defines which hobbies are wholesome and which are not? How does one "treat" a yearning to be loved? How do you "fix" curiosity? Kaczynski’s viewpoint reflects that of well-known newspaper psychologist Doris Wild Helmering. In her column (September 1997, Scripps Howard News Service), one of her otherwise successful clients asked: "Is that all there is to life?" Helmering instinctively realized that her client was asking a vitally important question about the purpose of his life, a topic important enough to devote a column to, but she told him to go find a hobby. Good grief! What’s wrong with our society that otherwise brilliant psychologists, engineers, and other professionals make the same fundamental error about what it means to be human that an insane killer makes?

The people who ask the question "Is that all there is?" have succeeded pretty well in life, but they still feel incomplete -- a feeling that comes from not fulfilling their true purpose in life. So when someone asks, "Is that all there is?", then the correct response is, "What is your purpose in life, and how are you failing to accomplish it?"

A normal response to this second question might be something like "I want to leave the world better off than when I found it", and I am certain that Bill Joy has already achieved that goal – to a level that few people can match. On the other hand, it is a sentiment that is admirable but vague, because as Joy admits, there are secondary effects that might not be so positive. It might be better to admit ignorance. In this case, a better response to the question might be another question: "How am I supposed to determine the purpose of my life?"

The thing is, we cannot decide the purpose of our lives. Otherwise, we could just make up our minds that our purpose is to make the amount of money we make, do the job we do, and love the people who love us, and then we’d live happily for all our days. But that doesn't work. If it did, all we would need to do is set our sights low enough and then, presto! -- we get our satisfaction. But we’ve already tried that, and that is why our spirit still hungers. But if we don't determine our purpose in life, who does?

That is an interesting question that the public schools refuse to touch. But as one young hedonist wrote many years ago, "Our hearts are restless until they rest in you, my God." Helmering said that her client goes to church "almost every Sunday." So what? Lots of people go to church out of habit, guilt, or a host of other stupid reasons. The question Helmering should have asked her client is "What does God want with your life?" This is no idle Judeo-Christian-Islamic platitude -- the yearning for truth, for love, and for meaning is too strong. On the other hand, the problem is that while churches, synagogues, and mosques may say what God wants from everyone (or at least claim what God wants, and sometimes with remarkable agreement amongst themselves), they don't say anything specific about Helmering's client, Ted Kaczynski, Bill Joy, you, or me.

There is one concrete recommendation I can make -- ask yourself, "How has God made you special and unique?" It may take three or four attempts at writing it down before you’ll connect, but once you do, you will know. You may still be fuzzy about some of the details, but at least you’ll know which way you’re headed, and you won’t ask "Is that all there is?" What should be interesting is what happens when a very smart machine asks itself the same question.

Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them "sublimate" their drive for power into some harmless hobby.

Good grief! We have an entire galaxy to homestead, huge gaps in all areas of human knowledge, and billions of people to love -- and Kaczynski thinks that we have nothing to do? What an idiot!

These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.

Kaczynski seems to imply that freedom is better than artificial joy. This is true, but is a drug addict a domestic animal? And who confers this status? Whatever happened to inalienable rights? Do you think that just because the Declaration of Independence was written in America, it only applies to Americans? A self-evident truth is true everywhere and always. Because we are human, we have an inalienable right to pursue happiness, though no guarantee of reaching it. We have human status instead of the status of domestic animals, not because others may find us useful, but because we exist.

In the book, you don't discover until you turn the page that the author of this passage is Theodore Kaczynski - the Unabomber. I am no apologist for Kaczynski.

Yes, but Bill Joy accepted Kaczynski’s unprovable and incorrect assumptions about what it means to be human! And like Kaczynski, he completely ignored the essential truth of the Declaration of Independence.

His bombs killed three people during a 17-year terror campaign and wounded many others. One of his bombs gravely injured my friend David Gelernter, one of the most brilliant and visionary computer scientists of our time. Like many of my colleagues, I felt that I could easily have been the Unabomber's next target.

Kaczynski's actions were murderous and, in my view, criminally insane. He is clearly a Luddite, but simply saying this does not dismiss his argument; as difficult as it is for me to acknowledge, I saw some merit in the reasoning in this single passage. I felt compelled to confront it.

Of course Bill Joy had problems confronting it. That’s OK. It needs to be confronted and rejected for good reason. But if one starts by accepting his assumptions about what it means to be human, then logic dictates that he or she will end up in the same place.

The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved.

Very true. So? One of the engineering characteristics that comes out of being made in the "image and likeness of God" (whatever the heck that means exactly) is that human beings have an infinite capacity to surprise. I suspect this suggests an enhancement to the Turing test (albeit infinity is rather difficult to test, and surprise is rather difficult to quantify).

I started showing friends the Kaczynski quote from The Age of Spiritual Machines; I would hand them Kurzweil's book, let them read the quote, and then watch their reaction as they discovered who had written it. At around the same time, I found Hans Moravec's book Robot: Mere Machine to Transcendent Mind. Moravec is one of the leaders in robotics research, and was a founder of the world's largest robotics research program, at Carnegie Mellon University. Robot gave me more material to try out on my friends - material surprisingly supportive of Kaczynski's argument. For example:

The Short Run (Early 2000s)

Biological species almost never survive encounters with superior competitors. Ten million years ago, South and North America were separated by a sunken Panama isthmus. South America, like Australia today, was populated by marsupial mammals, including pouched equivalents of rats, deer, and tigers. When the isthmus connecting North and South America rose, it took only a few thousand years for the northern placental species, with slightly more effective metabolisms and reproductive and nervous systems, to displace and eliminate almost all the southern marsupials.

In a completely free marketplace, superior robots would surely affect humans as North American placentals affected South American marsupials (and as humans have affected countless species). Robotic industries would compete vigorously among themselves for matter, energy, and space, incidentally driving their price beyond human reach. Unable to afford the necessities of life, biological humans would be squeezed out of existence.

Among other things, Moravec is falling for the same fallacy of the excluded middle that Kaczynski commits. Why is it either/or? If we really understand intelligence well enough to endow machines with it, isn’t it reasonable that we will incorporate that same understanding in improving our own intelligence? Besides, on what basis can anyone say that humans are tied to DNA and wet biochemistry? What does it really mean to be human? To have ten fingers and ten toes? What about the concepts promoted by the Extropian and Transhumanist communities:

1. Perpetual Progress — Seeking more intelligence, wisdom, and effectiveness, an indefinite lifespan, and the removal of political, cultural, biological, and psychological limits to self-actualization and self-realization. Perpetually overcoming constraints on our progress and possibilities. Expanding into the universe and advancing without end.

2. Self-Transformation — Affirming continual moral, intellectual, and physical self-improvement, through critical and creative thinking, personal responsibility, and experimentation. Seeking biological and neurological augmentation along with emotional and psychological refinement.

3. Practical Optimism — Fueling action with positive expectations. Adopting a rational, action-based optimism, in place of both blind faith and stagnant pessimism.

4. Intelligent Technology — Applying science and technology creatively to transcend "natural" limits imposed by our biological heritage, culture, and environment. Seeing technology not as an end in itself but as an effective means towards the improvement of life.

5. Open Society — Supporting social orders that foster freedom of speech, freedom of action, and experimentation. Opposing authoritarian social control and favoring the rule of law and decentralization of power. Preferring bargaining over battling, and exchange over compulsion. Openness to improvement rather than a static utopia.

6. Self-Direction — Seeking independent thinking, individual freedom, personal responsibility, self-direction, self-esteem, and respect for others.

7. Rational Thinking — Favoring reason over blind faith and questioning over dogma. Remaining open to challenges to our beliefs and practices in pursuit of perpetual improvement. Welcoming criticism of our existing beliefs while being open to new ideas.

The Extropian principles above have their own shortcomings and self-contradictions, but they certainly challenge Bill Joy’s and Ted Kaczynski’s dismal prognosis for the human race.

There is probably some breathing room, because we do not live in a completely free marketplace. Government coerces nonmarket behavior, especially by collecting taxes. Judiciously applied, governmental coercion could support human populations in high style on the fruits of robot labor, perhaps for a long while.

A long time is about 18 months, if Moore’s law applies and if Moravec’s other assumptions are true.

A textbook dystopia - and Moravec is just getting wound up. He goes on to discuss how our main job in the 21st century will be "ensuring continued cooperation from the robot industries" by passing laws decreeing that they be "nice,"

Moravec makes a classic mistake regarding "nice". He assumes that legislated law determines right and wrong. The contempt that lawyers have earned through the ages, for actions technically legal but flagrantly immoral, proves the fallacy of that idea. Being "nice" is not a function of federal law, but emerges from a much deeper level of non-anthropomorphized existence. Robert Axelrod showed that the importance of being "nice" is part of the underlying mathematical structure of the universe (albeit there is still a lot of work to be done to show how it applies in our everyday situations). In addition, Moravec assumes that the law is only a tool to protect the leisure-classed incumbents from the hard-working immigrants. I suppose that in our relativistic society, the law is just another force that grows from the barrel of a gun, but was Chairman Mao really correct? It certainly isn’t true in science, and it isn’t even true in computer science, despite all the magnificent imaginary worlds it allows us to build. Ethics has a mathematical foundation that cannot be divorced from reality. If we ever build sentient robotic persons, and if they are truly intelligent, then they will be able to re-derive the Ten Commandments (which really isn’t that difficult), including #4 (Honor your parents -- mostly Bill Joy and his compatriots, in this case), and #5 (Thou shalt not kill other persons, biologically based or not). Of course, we set a bad example…
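Axelrod’s evidence came from computer tournaments of the iterated Prisoner’s Dilemma, where the "nice" strategy tit-for-tat -- never defect first, then mirror the opponent’s last move -- finished at the top against far more devious entries. Here is a minimal re-creation of the flavor of those tournaments (the strategy roster, payoff values, and round count are my choices for illustration):

```python
# Iterated Prisoner's Dilemma round robin, in the spirit of
# Axelrod's tournaments. "C" = cooperate, "D" = defect; payoffs
# are the standard (T, R, P, S) = (5, 3, 1, 0) values.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]   # nice, but retaliates

def grudger(mine, theirs):
    return "D" if "D" in theirs else "C"       # nice until crossed

def always_defect(mine, theirs):
    return "D"

def always_cooperate(mine, theirs):
    return "C"

def match(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

players = [tit_for_tat, grudger, always_defect, always_cooperate]
totals = {p.__name__: 0 for p in players}
for i, a in enumerate(players):
    for b in players[i:]:                      # includes self-play
        sa, sb = match(a, b)
        totals[a.__name__] += sa
        if a is not b:
            totals[b.__name__] += sb
print(totals)   # the nice retaliators finish on top
```

Run it and the exploiters lose: always_defect farms the unconditional cooperator but starves against everyone who retaliates, while tit-for-tat and grudger tie for first place.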

and to describe how seriously dangerous a human can be "once transformed into an unbounded superintelligent robot." Moravec's view is that the robots will eventually succeed us - that humans clearly face extinction.

Wait a minute. How can humans face extinction if we become superintelligent? And to whom will a super-intelligent human be dangerous? Especially if every human being has access to augmentation, and only some (like the Amish) refuse it. Besides, the concept of "superintelligent" is itself suspect, because it would involve new science, not just engineering.

I decided it was time to talk to my friend Danny Hillis. Danny became famous as the cofounder of Thinking Machines Corporation, which built a very powerful parallel supercomputer. Despite my current job title of Chief Scientist at Sun Microsystems, I am more a computer architect than a scientist, and I respect Danny's knowledge of the information and physical sciences more than that of any other single person I know. Danny is also a highly regarded futurist who thinks long-term - four years ago he started the Long Now Foundation, which is building a clock designed to last 10,000 years, in an attempt to draw attention to the pitifully short attention span of our society. (See "Test of Time," Wired 8.03, page 78.)

So I flew to Los Angeles for the express purpose of having dinner with Danny and his wife, Pati. I went through my now-familiar routine, trotting out the ideas and passages that I found so disturbing. Danny's answer - directed specifically at Kurzweil's scenario of humans merging with robots - came swiftly, and quite surprised me. He said, simply, that the changes would come gradually, and that we would get used to them.

Gradually? Well, if someone has been prepared by reading lots of science fiction and working in high-tech fields, and if by gradually you mean within months instead of seconds, I suppose Hillis is right. But most people are going to freak when they find out what we are facing.

But I guess I wasn't totally surprised. I had seen a quote from Danny in Kurzweil's book in which he said, "I'm as fond of my body as anyone, but if I can be 200 with a body of silicon, I'll take it." It seemed that he was at peace with this process and its attendant risks, while I was not.

Life is full of risks, and then you die. Is there any other reasonable attitude to take? Personally, I want my body to be made of a diamondoid-titanium matrix passivated by silicates (after all, the high surface area of bare diamond would constitute a fuel-air explosive) so that I can take 30 G’s without a sweat, and live for thousands of years. I’ve got a lot of working, playing, living, learning, and loving to do. I also want to increase my intelligence, though at this point we would need new scientific understanding to do it without risk – I’m not too interested in getting a brain transplant, though I'll certainly add peripherals.

I'm surprised that Bill did not explain why Danny was so passive about it. What does Danny believe that Bill Joy does not? In an interview with Omni, we get a clue:

Omni: Isn't it dangerous to set in motion a self-improving intelligence whose workings we can't understand?

Hillis: We have children and don't know what they're going to grow up [to be, but we] have faith we can influence them.

Omni: Human children aren't self-teaching machines without upper limits. They're genetic blends of their parents.

Hillis: Right, yet serial murderers are also genetic blends of two people who weren't, in general, serial murderers. There is a danger in building something that learns and acts on its own, but if [we put] good qualities in it, it has the same potential as a child we raise with care. (http://www.student.nada.kth.se/~d95-aeh/hillis.html)

While talking and thinking about Kurzweil, Kaczynski, and Moravec, I suddenly remembered a novel I had read almost 20 years ago - The White Plague, by Frank Herbert - in which a molecular biologist is driven insane by the senseless murder of his family. To seek revenge he constructs and disseminates a new and highly contagious plague that kills widely but selectively. (We're lucky Kaczynski was a mathematician, not a molecular biologist.)

Biologists who read "The White Plague" were scared out of their wits because they knew how accurately Frank Herbert was writing. Frankly, I’m surprised that Kaczynski didn’t bother to learn the biology and try it. Of course, when you’re trying that sort of stuff out, you only get to make one bacterium-sized mistake and you’re dead. At least with explosives, you get to test matchhead-sized samples safely, though the Weathermen terrorists were sufficiently disconnected from reality to ignore safety precautions, and they blew themselves up. If you really want to get scared, read Biohazard, the real-life account by the man who ran the Soviet bioweapons program before he defected.

I was also reminded of the Borg of Star Trek, a hive of partly biological, partly robotic creatures with a strong destructive streak. Borg-like disasters are a staple of science fiction, so why hadn't I been more concerned about such robotic dystopias earlier? Why weren't other people more concerned about these nightmarish scenarios?

Because deep down, they know: "We will be assimilated. Resistance is futile! Become one with the machine." :-) Seriously, people ignore the Borg because it’s science fiction, and it’s a staple because it thrives on emotion, not on reality. People with destructive streaks are also self-destructive, and it would be the same with any sentient species. The Borg are fictional precisely because of evolution in action. So don’t worry. I mean, yes, worry enough to think about it while designing systems, but don’t give yourself a heart attack.

Part of the answer certainly lies in our attitude toward the new - in our bias toward instant familiarity and unquestioning acceptance. Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies - robotics, genetic engineering, and nanotechnology - pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once - but one bot can become many, and quickly get out of control.

Bill, you’re assuming that building a generalized autotrophic self-replicating nanobot is easy. It is not. Heck, in the half century since John Von Neumann’s seminal work on the subject, we still don’t even know whether machine self-replication is easy or difficult. As George Friedman, Research Director of the Space Studies Institute, put it:

"I can't repeat the many times I've been reassured that self-replication was easy. After all, John Von Neumann, way back in the 40's, clearly defined the logic of self-replication and all we have to do is implement his "blueprint". So if self-replication is so easy, where are all the Self-Replicating Systems?… While still at JPL, Jim Burke tried repeatedly and unsuccessfully to interest the scientific community at Cal Tech and JPL in establishing a prize for the first autonomous self-replicating system. At meetings I attended with the head of JPL's Automation and Robotics activity and the director of USC's robotics laboratory, I was told that -- for the scientific and exploration missions they were addressing -- self-replication was too difficult, required too long a research schedule, and was not necessary anyway."

Interestingly enough, people at the MIT robotics lab have talked about self-replication, but no one has investigated the problem because they think it’s too difficult. The 1980 NASA Summer Study, the only large effort to date that has tried to design a practical Self-Replicating System (SRS), indicated that the technology was indeed very powerful, and that it might be easy, but the researchers were not able to conclusively prove the latter. Extensive literature searches, interviews with participants of that study, and conversations with many experts in related fields have found that almost no further work on kinematic SRS (the real kind, as opposed to cellular automata simulation) has been done since then. On the other hand, Carnegie Mellon might be starting up an effort soon.

Engineered micro-organisms, on the other hand… Well, that is a problem. Actually, we do have a deep, though temporary, vulnerability with respect to that technology.

Much of my work over the past 25 years has been on computer networking, where the sending and receiving of messages creates the opportunity for out-of-control replication. But while replication in a computer or a computer network can be a nuisance, at worst it disables a machine or takes down a network or network service. Uncontrolled self-replication in these newer technologies runs a much greater risk: a risk of substantial damage in the physical world.

We should worry only if the premise is true. It might be. Then again, depending on how much flexibility and power you are willing to give up, it is relatively easy to design self-replicating robots so that they can only replicate under certain conditions. Recall the email hoaxes that claimed that merely reading a certain email message would completely erase all the documents on your hard drive -- which is impossible. On the other hand, it’s very easy to execute an attached executable that could erase your hard drive. How many newbies really understand the difference? Bill Joy is very familiar with the Java security features, which he helped design. I think he knows that the problem isn’t accidents -- it’s malicious intent. As far as self-replicating robots are concerned, work in Japan that uses robots to build other robots hasn’t even significantly reduced the price of robots, much less allowed them to take over the world.
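For illustration, one such conditional design is the "broadcast architecture": the robot carries no standing permission to copy itself, and every replication requires a fresh authorization token from an external controller, so an escaped unit simply cannot breed. A toy sketch follows (the HMAC-based token is my simplification; a real design would use public-key signatures so the robot holds only a verification key, plus nonces to prevent replay):

```python
# Sketch of a "broadcast architecture" replication gate: each copy
# cycle needs a token authenticated by an external controller.
# Key handling is simplified for illustration only.
import hashlib, hmac, os

SECRET = os.urandom(32)   # in reality: the controller's signing key

def issue_token(serial: int) -> bytes:
    """Controller authorizes one replication for one robot."""
    msg = f"replicate:{serial}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).digest()

class Robot:
    def __init__(self, serial: int):
        self.serial = serial

    def replicate(self, token: bytes) -> "Robot":
        msg = f"replicate:{self.serial}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(token, expected):
            raise PermissionError("no valid authorization; not replicating")
        return Robot(self.serial + 1)

r1 = Robot(1)
r2 = r1.replicate(issue_token(1))   # authorized: replication succeeds
# r2.replicate(b"forged token") would raise: no controller, no copies
```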

Each of these technologies also offers untold promise: The vision of near immortality that Kurzweil sees in his robot dreams drives us forward; genetic engineering may soon provide treatments, if not outright cures, for most diseases; and nanotechnology and nanomedicine can address yet more ills. Together they could significantly extend our average life span and improve the quality of our lives. Yet, with each of these technologies, a sequence of small, individually sensible advances leads to an accumulation of great power and, concomitantly, great danger.

Very true. But if the angel of death were at your door, waiting for you to die of disease or old age, what would you do for one more pain-free breath? What would you do for another beautiful morning? What would you do for another hike to the top of Half Dome? What would you do for one more night of passion with the one you love? Are you willing to give up all that on the off chance that your death, and everyone else’s, might slightly reduce the chance of a technological nightmare? Remember, nobody gets out of this universe alive, and all species eventually become extinct. We’re all going to die anyway, nanotech or no nanotech. The trick is making our lives worthwhile.

What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD) - nuclear, biological, and chemical (NBC) - were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare - indeed, effectively unavailable - raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.

Unfortunately, this will change. Anyone with some intelligence and the willingness to do some hard work will be able to design a molecular sieve with which to pull fissile elements out of seawater. With nanotechnology, separating the isotopes isn’t that difficult either. After you’ve done all that, designing the shaped charges is no big deal. In some neighborhoods, it could give new meaning to the phrase "Keeping up with the Joneses".

On the other hand, depending on how quickly certain technologies are developed, you could enhance your body sufficiently that -- while a nuclear bomb in your neighborhood would still ruin your day -- you would be relatively invulnerable to one going off a few dozen miles away. In either case, this type of personal nuclear capability would provide an incentive to know your neighbors a little better, and to provide them with the loving support they need so that they don’t do something foolish.

Finally, most people would rather enjoy the benefits of nanotech toys than work hard towards destructive ends. Everyone needs to be loved. Fulfill that need, provide the guidance, and most of the Kaczynskis of the world would be healed, while the rest would at least be identified. Of course, we’ll all have to give up some privacy to nosy neighbors who have a vested interest in not having a nuclear power next door, but wouldn’t you rather keep your freedom, security, and wealth?


The 21st-century technologies - genetics, nanotechnology, and robotics (GNR) - are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.

Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.

I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.

Power is extreme evil? It is true, as Lord Acton pointed out, that power tends to corrupt. But it’s not the power itself that does it – it is the difference in power that tends to corrupt. This is why the founding fathers of the United States instituted the division of powers in our Constitution, and why they included the First and Second Amendments. They understood the seductive pull of power. Bill Joy needs a better understanding of the nature of evil. In The Road Less Traveled, Scott Peck defined evil as: "The attempted control of another person (through direct or indirect means) in order to prevent one's own spiritual growth." In People of the Lie, he later refined this definition to: "A malignant narcissism that has an inordinate fear of imperfection that causes it to violently reject any criticism and/or reinterpret it in a way to preserve one's internal status quo." More specifically, he describes a "narcissistic personality disorder" that can be distinguished by:

1. An abrogation of responsibility

2. Consistent, destructive, scapegoating behavior (sometimes quite subtle)

3. Excessive, though usually covert, intolerance to criticism

4. Excessive concern with a public and self image of respectability

5. Intellectual deviousness

6. Extraordinary willfulness

7. Denial of hateful feelings or vengeful motives.


Peck also points to a few characteristics of evil:

1. Pride overcomes intelligence

2. Has no understanding of love (thinks that love is a "trick")

3. The notion of sacrifice is totally foreign

4. Finds incomprehensible the concept of objective truths such as science (other than as a means to an end)

5. Is deceptive and self-deceptive

6. Is shallow and ugly

7. Confuses all those encountering it

8. Is powerful.

Once we understand the relationship between evil and power, we will be better equipped to deal with it.

Nothing about the way I got involved with computers suggested to me that I was going to be facing these kinds of issues.

Surprise! As John Lennon said, "Life is what happens to you while you are making other plans." :-)

My life has been driven by a deep need to ask questions and find answers. When I was 3, I was already reading, so my father took me to the elementary school, where I sat on the principal's lap and read him a story. I started school early, later skipped a grade, and escaped into books - I was incredibly motivated to learn. I asked lots of questions, often driving adults to distraction.

Bill Joy should read Orson Scott Card’s "Ender’s Game" and "Ender’s Shadow." He would love them.

As a teenager I was very interested in science and technology. I wanted to be a ham radio operator but didn't have the money to buy the equipment. Ham radio was the Internet of its time: very addictive, and quite solitary. Money issues aside, my mother put her foot down - I was not to be a ham; I was antisocial enough already.

I may not have had many close friends, but I was awash in ideas. By high school, I had discovered the great science fiction writers. I remember especially Heinlein's Have Spacesuit Will Travel and Asimov's I, Robot, with its Three Laws of Robotics. I was enchanted by the descriptions of space travel, and wanted to have a telescope to look at the stars; since I had no money to buy or make one, I checked books on telescope-making out of the library and read about making them instead. I soared in my imagination.

It’s strange, but so many people with whom I hang out share the same story… it would have been nice to connect sooner…

Thursday nights my parents went bowling, and we kids stayed home alone. It was the night of Gene Roddenberry's original Star Trek, and the program made a big impression on me. I came to accept its notion that humans had a future in space, Western-style, with big heroes and adventures. Roddenberry's vision of the centuries to come was one with strong moral values, embodied in codes like the Prime Directive: to not interfere in the development of less technologically advanced civilizations. This had an incredible appeal to me; ethical humans, not robots, dominated this future, and I took Roddenberry's dream as part of my own.

I excelled in mathematics in high school, and when I went to the University of Michigan as an undergraduate engineering student I took the advanced curriculum of the mathematics majors. Solving math problems was an exciting challenge, but when I discovered computers I found something much more interesting: a machine into which you could put a program that attempted to solve a problem, after which the machine quickly checked the solution. The computer had a clear notion of correct and incorrect, true and false. Were my ideas correct? The machine could tell me. This was very seductive.

Our need to know the truth is extremely strong – it is part of our nature as human beings – and that is why close imitations of truth are so seductive.

I was lucky enough to get a job programming early supercomputers and discovered the amazing power of large machines to numerically simulate advanced designs. When I went to graduate school at UC Berkeley in the mid-1970s, I started staying up late, often all night, inventing new worlds inside the machines. Solving problems. Writing the code that argued so strongly to be written.

In The Agony and the Ecstasy, Irving Stone's biographical novel of Michelangelo, Stone described vividly how Michelangelo released the statues from the stone, "breaking the marble spell," carving from the images in his mind. In my most ecstatic moments, the software in the computer emerged in the same way. Once I had imagined it in my mind I felt that it was already there in the machine, waiting to be released. Staying up all night seemed a small price to pay to free it - to give the ideas concrete form.

Yes! Engineering is art. Real, functional art. Not the imitations of reality funded by the NEA. What is sad is that so many people do not understand the feeling of creation, and have never experienced it. Poor souls.

After a few years at Berkeley I started to send out some of the software I had written - an instructional Pascal system, Unix utilities, and a text editor called vi (which is still, to my surprise, widely used more than 20 years later) - to others who had similar small PDP-11 and VAX minicomputers. These adventures in software eventually turned into the Berkeley version of the Unix operating system, which became a personal "success disaster" - so many people wanted it that I never finished my PhD. Instead I got a job working for Darpa putting Berkeley Unix on the Internet and fixing it to be reliable and to run large research applications well. This was all great fun and very rewarding. And, frankly, I saw no robots here, or anywhere near.

Still, by the early 1980s, I was drowning. The Unix releases were very successful, and my little project of one soon had money and some staff, but the problem at Berkeley was always office space rather than money - there wasn't room for the help the project needed, so when the other founders of Sun Microsystems showed up I jumped at the chance to join them. At Sun, the long hours continued into the early days of workstations and personal computers, and I have enjoyed participating in the creation of advanced microprocessor technologies and Internet technologies such as Java and Jini.

From all this, I trust it is clear that I am not a Luddite. I have always, rather, had a strong belief in the value of the scientific search for truth and in the ability of great engineering to bring material progress.

Hmm. Only the scientific search for truth? Only material progress? Perhaps Bill Joy needs to expand his interests into less certain and more mysterious areas that are no less true.

The Industrial Revolution has immeasurably improved everyone's life over the last couple hundred years, and I always expected my career to involve the building of worthwhile solutions to real problems, one problem at a time.

I have not been disappointed. My work has had more impact than I had ever hoped for and has been more widely used than I could have reasonably expected. I have spent the last 20 years still trying to figure out how to make computers as reliable as I want them to be (they are not nearly there yet) and how to make them simple to use (a goal that has met with even less relative success). Despite some progress, the problems that remain seem even more daunting.

But while I was aware of the moral dilemmas surrounding technology's consequences in fields like weapons research, I did not expect that I would confront such issues in my own field, or at least not so soon.

Surprise! Reality raises her ugly head. But as Socrates said, "the unexamined life is not worth living." Bill Joy is very lucky because his passion brought him face to face with himself while he was still relatively young. He was so busy working on technology that he never learned the underpinnings of morality and ethics. If Josh Hall’s hypothesis on ethics is correct, most people just pick up what they need by osmosis. If it’s any consolation, most of us computer jocks don’t bother learning the essentials of being human until we’re forced to. My social simulator ran for weeks at a time without crashing… :-)

Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science's quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own.

I have long realized that the big advances in information technology come not from the work of computer scientists, computer architects, or electrical engineers, but from that of physical scientists. The physicists Stephen Wolfram and Brosl Hasslacher introduced me, in the early 1980s, to chaos theory and nonlinear systems. In the 1990s, I learned about complex systems from conversations with Danny Hillis, the biologist Stuart Kauffman, the Nobel-laureate physicist Murray Gell-Mann, and others. Most recently, Hasslacher and the electrical engineer and device physicist Mark Reed have been giving me insight into the incredible possibilities of molecular electronics.

In my own work, as codesigner of three microprocessor architectures - SPARC, picoJava, and MAJC - and as the designer of several implementations thereof, I've been afforded a deep and firsthand acquaintance with Moore's law. For decades, Moore's law has correctly predicted the exponential rate of improvement of semiconductor technology. Until last year I believed that the rate of advances predicted by Moore's law might continue only until roughly 2010, when some physical limits would begin to be reached. It was not obvious to me that a new technology would arrive in time to keep performance advancing smoothly.

But because of the recent rapid and radical progress in molecular electronics - where individual atoms and molecules replace lithographically drawn transistors - and related nanoscale technologies, we should be able to meet or exceed the Moore's law rate of progress for another 30 years. By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today - sufficient to implement the dreams of Kurzweil and Moravec.

In the engineering sense, probably. In the scientific sense? We cannot predict. What is intelligence? What is consciousness? We cannot program anything for which we do not have an algorithm, and we can’t program anything that we don’t understand. Perhaps there will be new science some day, or new metaphysics, that will enable us to do such things, but we cannot make predictions about that.
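Joy’s "million times as powerful" figure, at least, is just Moore’s law compounded, and easy to check: one doubling every 18 months for the 30 years from 2000 to 2030 is 20 doublings, and 2^20 is about a million.

```python
# Joy's arithmetic: performance doubling every 18 months, 2000-2030.
years = 30
months_per_doubling = 18
doublings = years * 12 / months_per_doubling   # 20 doublings
print(2 ** doublings)                          # 1048576.0 -- about a million
```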

As this enormous computing power is combined with the manipulative advances of the physical sciences and the new, deep understandings in genetics, enormous transformative power is being unleashed. These combinations open up the opportunity to completely redesign the world, for better or worse: The replicating and evolving processes that have been confined to the natural world are about to become realms of human endeavor.

Yep!

In designing software and microprocessors, I have never had the feeling that I was designing an intelligent machine. The software and hardware is so fragile and the capabilities of the machine to "think" so clearly absent that, even as a possibility, this has always seemed very far in the future.

But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of the technology that may replace our species. How do I feel about this? Very uncomfortable. Having struggled my entire career to build reliable software systems, it seems to me more than likely that this future will not work out as well as some people may imagine. My personal experience suggests we tend to overestimate our design abilities.

Given the incredible power of these new technologies, shouldn't we be asking how we can best coexist with them? And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn't we proceed with great caution?

Certainly, and we should start with Socrates’ advice: "Know thyself". What does it mean to be human?

The dream of robotics is, first, that intelligent machines can do our work for us, allowing us lives of leisure, restoring us to Eden. Yet in his history of such ideas, Darwin Among the Machines, George Dyson warns: "In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines." As we have seen, Moravec agrees, believing we may well not survive the encounter with the superior robot species.

How soon could such an intelligent robot be built? The coming advances in computing power seem to make it possible by 2030. And once an intelligent robot exists, it is only a small step to a robot species - to an intelligent robot that can make evolved copies of itself.

A second dream of robotics is that we will gradually replace ourselves with our robotic technology, achieving near immortality by downloading our consciousnesses; it is this process that Danny Hillis thinks we will gradually get used to and that Ray Kurzweil elegantly details in The Age of Spiritual Machines. (We are beginning to see intimations of this in the implantation of computer devices into the human body, as illustrated on the cover of Wired 8.02.)

That’s upload (into a more powerful machine), not download (into a less capable one). Call me Amish, call me old-fashioned, but if someone copies my brain into silicon, what happens to the old one? Uploading is not an engineering problem, which could be solved with time and money. It is not even a scientific one, which would be less amenable to prediction. It is a metaphysical problem.

But if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human? It seems to me far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.

What? Are we human because of carbon and DNA? Does Bill Joy believe in that vitalism stuff? Are we human because we have ten fingers, or because we thrill in wonderment at the beautiful elegance of mathematics? Are we human because we have arms, or because we embrace each other in love? Are we human because we have two eyes, or because we recognize the truth when we see it? How could we not have any sense of how robotic persons would be human, since we are building them in our own image?

Genetic engineering promises to revolutionize agriculture by increasing crop yields while reducing the use of pesticides; to create tens of thousands of novel species of bacteria, plants, viruses, and animals; to replace reproduction, or supplement it, with cloning; to create cures for many diseases, increasing our life span and our quality of life; and much, much more. We now know with certainty that these profound changes in the biological sciences are imminent and will challenge all our notions of what life is.

Only if you have quaint notions of what life is. I thought that Bill Joy read science fiction. Life is any system with boundaries and subcomponents that processes energy, materials, and information in order to maintain homeostasis and self-replicate. On the other hand, these profound changes will challenge our ideas of our purpose in life. But then again, any 9-to-5 job plus spouse and children will do that almost as well, if you pay close attention.

Technologies such as human cloning have in particular raised our awareness of the profound ethical and moral issues we face. If, for example, we were to reengineer ourselves into several separate and unequal species using the power of genetic engineering, then we would threaten the notion of equality that is the very cornerstone of our democracy.

Bill Joy has apparently not programmed in Lisp, where there are many forms of "equal". Human beings are definitely not equal in abilities. But we are equal in our essence of humanness. Technologies will change the abilities of human beings – there is no doubt of that. But will they change our inherent dignity as human beings? Does Bill Joy, with all his intelligence and technical abilities, have more of a right to vote than some poor black woman working the counter at McDonald's? Of course not. Does he have different capabilities? Of course. Does he have intrinsically different worth? No.

Given the incredible power of genetic engineering, it's no surprise that there are significant safety issues in its use. My friend Amory Lovins recently cowrote, along with Hunter Lovins, an editorial that provides an ecological view of some of these dangers. Among their concerns: that "the new botany aligns the development of plants with their economic, not evolutionary, success." (See "A Tale of Two Botanies," page 247.)

Is there really much difference between physical energy and money? As Michael Rothschild showed in Bionomics, there really isn’t. Both of them provide an ordering mechanism for a very complex ecosystem.

Amory's long career has been focused on energy and resource efficiency by taking a whole-system view of human-made systems; such a whole-system view often finds simple, smart solutions to otherwise seemingly difficult problems, and is usefully applied here as well.

After reading the Lovins' editorial, I saw an op-ed by Gregg Easterbrook in The New York Times (November 19, 1999) about genetically engineered crops, under the headline: "Food for the Future: Someday, rice will have built-in vitamin A. Unless the Luddites win."

Are Amory and Hunter Lovins Luddites? Certainly not. I believe we all would agree that golden rice, with its built-in vitamin A, is probably a good thing, if developed with proper care and respect for the likely dangers in moving genes across species boundaries.

Awareness of the dangers inherent in genetic engineering is beginning to grow, as reflected in the Lovins' editorial. The general public is aware of, and uneasy about, genetically modified foods, and seems to be rejecting the notion that such foods should be permitted to be unlabeled.

Labeling increases the amount of true and useful information available, and is a good thing. Think of it as evolution in action. If genetically modified food enhances our survival and our essential humanity, then it is a good thing. If not, then the demand for it will die away with the people who believe in it.

But genetic engineering technology is already very far along. As the Lovins note, the USDA has already approved about 50 genetically engineered crops for unlimited release; more than half of the world's soybeans and a third of its corn now contain genes spliced in from other forms of life.

Bacteria swap genes all the time – that is one reason that they become resistant to antibiotics so quickly.

While there are many important issues here, my own major concern with genetic engineering is narrower: that it gives the power - whether militarily, accidentally, or in a deliberate terrorist act - to create a White Plague.

The deliberate acts should be our major concern. And that is why it is so important to understand human evil. Unfortunately, some philosophies and religions do not really understand it correctly (see my summary above of Peck’s discoveries about evil).

The many wonders of nanotechnology were first imagined by the Nobel-laureate physicist Richard Feynman in a speech he gave in 1959, subsequently published under the title "There's Plenty of Room at the Bottom." The book that made a big impression on me, in the mid-'80s, was Eric Drexler's Engines of Creation, in which he described beautifully how manipulation of matter at the atomic level could create a utopian future of abundance, where just about everything could be made cheaply, and almost any imaginable disease or physical problem could be solved using nanotechnology and artificial intelligences.

A subsequent book, Unbounding the Future: The Nanotechnology Revolution, which Drexler cowrote, imagines some of the changes that might take place in a world where we had molecular-level "assemblers." Assemblers could make possible incredibly low-cost solar power, cures for cancer and the common cold by augmentation of the human immune system, essentially complete cleanup of the environment, incredibly inexpensive pocket supercomputers - in fact, any product would be manufacturable by assemblers at a cost no greater than that of wood - spaceflight more accessible than transoceanic travel today, and restoration of extinct species.

I remember feeling good about nanotechnology after reading Engines of Creation.

Bill Joy and I had very different reactions. I was depressed for months. Keith Henson pulled me out of it eventually, but those visions haunted me for a long time. They still do. At the same time, we must sharpen our vision, and his paper was a great stimulus for starting that process.

As a technologist, it gave me a sense of calm - that is, nanotechnology showed us that incredible progress was possible, and indeed perhaps inevitable. If nanotechnology was our future, then I didn't feel pressed to solve so many problems in the present. I would get to Drexler's utopian future in due time; I might as well enjoy life more in the here and now. It didn't make sense, given his vision, to stay up all night, all the time.

Drexler's vision also led to a lot of good fun. I would occasionally get to describe the wonders of nanotechnology to others who had not heard of it. After teasing them with all the things Drexler described I would give a homework assignment of my own: "Use nanotechnology to create a vampire; for extra credit create an antidote."

With these wonders came clear dangers, of which I was acutely aware. As I said at a nanotechnology conference in 1989, "We can't simply do our science and not worry about these ethical issues."5 But my subsequent conversations with physicists convinced me that nanotechnology might not even work - or, at least, it wouldn't work anytime soon. Shortly thereafter I moved to Colorado, to a skunk works I had set up, and the focus of my work shifted to software for the Internet, specifically on ideas that became Java and Jini.

Then, last summer, Brosl Hasslacher told me that nanoscale molecular electronics was now practical. This was new news, at least to me, and I think to many people - and it radically changed my opinion about nanotechnology. It sent me back to Engines of Creation. Rereading Drexler's work after more than 10 years, I was dismayed to realize how little I had remembered of its lengthy section called "Dangers and Hopes," including a discussion of how nanotechnologies can become "engines of destruction." Indeed, in my rereading of this cautionary material today, I am struck by how naive some of Drexler's safeguard proposals seem, and how much greater I judge the dangers to be now than even he seemed to then. (Having anticipated and described many technical and political problems with nanotechnology, Drexler started the Foresight Institute in the late 1980s "to help prepare society for anticipated advanced technologies" - most important, nanotechnology.)

The enabling breakthrough to assemblers seems quite likely within the next 20 years.

With the work that is going on at Zyvex, and with Clinton’s National Nanotechnology Initiative, it looks like less than that for simple assemblers. Some are expecting the Feynman Prize to be won in six years.

Molecular electronics - the new subfield of nanotechnology where individual molecules are circuit elements - should mature quickly and become enormously lucrative within this decade, causing a large incremental investment in all nanotechnologies.

Unfortunately, as with nuclear technology, it is far easier to create destructive uses for nanotechnology than constructive ones. Nanotechnology has clear military and terrorist uses, and you need not be suicidal to release a massively destructive nanotechnological device - such devices can be built to be selectively destructive, affecting, for example, only a certain geographical area or a group of people who are genetically distinct.

An immediate consequence of the Faustian bargain in obtaining the great power of nanotechnology is that we run a grave risk - the risk that we might destroy the biosphere on which all life depends.

As Drexler explained:

"Plants" with "leaves" no more efficient than today's solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous "bacteria" could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop - at least if we make no preparation. We have trouble enough controlling viruses and fruit flies.

Among the cognoscenti of nanotechnology, this threat has become known as the "gray goo problem." Though masses of uncontrolled replicators need not be gray or gooey, the term "gray goo" emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be superior in an evolutionary sense, but this need not make them valuable.

Valuable? Valuable to whom? How does the value of someone or something get determined? What is Drexler’s objective metric? Protagoras (the sophist who said that "Man is the measure of all things") was obviously wrong.

The gray goo threat makes one thing perfectly clear: We cannot afford certain kinds of accidents with replicating assemblers.

Gray goo would surely be a depressing ending to our human adventure on Earth, far worse than mere fire or ice, and one that could stem from a simple laboratory accident.6 Oops.

Building an ecophage (gray goo) is more difficult than it seems -- we would have to build one on purpose. And it would be straightforward to design self-replicators that remain controllable, for example by keeping the instruction set outside the replicator.
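To make the externalized-instruction idea concrete, here is a toy sketch in Python (the names and structure are mine, purely illustrative of the "broadcast architecture" flavor of control -- not anyone's actual nanosystem design). The replicator keeps no copy of its own blueprint, so cutting the broadcast halts replication:

    # Toy "broadcast architecture": the replicator stores no blueprint of its
    # own and can act only on instructions from an external, central source.

    class BroadcastSource:
        """Central store holding the blueprint; replicators never keep it."""
        def __init__(self, blueprint, active=True):
            self.blueprint = blueprint
            self.active = active

        def next_instruction(self):
            return self.blueprint if self.active else None

    class Replicator:
        def __init__(self, source):
            self.source = source  # link to the external instruction stream

        def replicate(self):
            blueprint = self.source.next_instruction()
            if blueprint is None:
                return None  # no broadcast -> no replication
            return Replicator(self.source)  # build a child per the blueprint

    source = BroadcastSource(blueprint="assembly steps...")
    parent = Replicator(source)
    child = parent.replicate()  # succeeds while the broadcast is active
    source.active = False
    print(parent.replicate())  # None: withdraw the broadcast, replication stops

A machine built on this pattern cannot "go wild" on its own, because the design it needs in order to copy itself lives outside every copy.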

It is most of all the power of destructive self-replication in genetics, nanotechnology, and robotics (GNR) that should give us pause. Self-replication is the modus operandi of genetic engineering, which uses the machinery of the cell to replicate its designs, and the prime danger underlying gray goo in nanotechnology. Stories of run-amok robots like the Borg, replicating or mutating to escape from the ethical constraints imposed on them by their creators, are well established in our science fiction books and movies.

But are those stories true? We've got hundreds of thousands of robots and other automated devices in use today. How often does one "run amok"? Especially the more complicated ones (i.e., computers and networks)? Yes, we have email viruses, forwarded by ignorant newbies. In addition, our understanding of how our immune system works is lacking, and that ignorance is apparent in the way we design computer systems.

It is even possible that self-replication may be more fundamental than we thought, and hence harder - or even impossible - to control.

Impossible? If self-replication were more fundamental, don't you think that in two billion years, Mother Nature would have stumbled on other, more efficient paradigms, especially in extreme environments?

A recent article by Stuart Kauffman in Nature titled "Self-Replication: Even Peptides Do It" discusses the discovery that a 32-amino-acid peptide can "autocatalyse its own synthesis." We don't know how widespread this ability is, but Kauffman notes that it may hint at "a route to self-reproducing molecular systems on a basis far wider than Watson-Crick base-pairing."7

Autocatalysis is similar to crystallization -- it requires an adequately prepared substrate and has no provision for Von Neumann's notion of a universal constructor (which is what a ribosome is). Nor does autocatalysis make use of Von Neumann’s GRP (Genotype, Ribotype, Phenotype) conceptualization.

In truth, we have had in hand for years clear warnings of the dangers inherent in widespread knowledge of GNR technologies - of the possibility of knowledge alone enabling mass destruction. But these warnings haven't been widely publicized; the public discussions have been clearly inadequate. There is no profit in publicizing the dangers.

The nuclear, biological, and chemical (NBC) technologies used in 20th-century weapons of mass destruction were and are largely military, developed in government laboratories. In sharp contrast, the 21st-century GNR technologies have clear commercial uses and are being developed almost exclusively by corporate enterprises. In this age of triumphant commercialism, technology - with science as its handmaiden - is delivering a series of almost magical inventions that are the most phenomenally lucrative ever seen. We are aggressively pursuing the promises of these new technologies within the now-unchallenged system of global capitalism and its manifold financial incentives and competitive pressures.

This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself - as well as to vast numbers of others.

It might be a familiar progression, transpiring on many worlds - a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.

If Fermi's Paradox is any guide, either they all perish, or they never get started in the first place.

That is Carl Sagan, writing in 1994, in Pale Blue Dot, a book describing his vision of the human future in space. I am only now realizing how deep his insight was, and how sorely I miss, and will miss, his voice. For all its eloquence, Sagan's contribution was not least that of simple common sense - an attribute that, along with humility, many of the leading advocates of the 21st-century technologies seem to lack.

Carl Sagan preferred to believe in billions and billions of extraterrestrial civilizations, for which there is no scrap of evidence. If there were even just one, its presence would be as obvious as contrails in a blue sky, or skyscrapers in a jungle. Actually, its presence might be even more obvious – it would be written into every star, and into every cell of our bodies.

Sagan? Humility? In the same sentence? He was a brilliant scientist, a great writer, and a very valuable popularizer of science, but humble? I would agree that our 21st century needs humility desperately, but if I were looking for powerful examples of humility, I would rather look to Mahatma Gandhi, Jimmy Carter, or Mother Teresa.

 

I remember from my childhood that my grandmother was strongly against the overuse of antibiotics. She had worked since before the first World War as a nurse and had a commonsense attitude that taking antibiotics, unless they were absolutely necessary, was bad for you.

It is not that she was an enemy of progress. She saw much progress in an almost 70-year nursing career; my grandfather, a diabetic, benefited greatly from the improved treatments that became available in his lifetime. But she, like many levelheaded people, would probably think it greatly arrogant for us, now, to be designing a robotic "replacement species," when we obviously have so much trouble making relatively simple things work, and so much trouble managing - or even understanding - ourselves.

In that case, it seems obvious to me that we need better tools to help us understand. Some of these are simple – small-world theory, for example, shows us in a clearly analytical way how interconnected we are to each stranger we pass on the street. Other tools, like the Web, are a bit more complex, but may help us devise tools (e.g. CritSuite) with which we may better understand controversial issues.
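For readers who would rather compute the interconnection than take it on faith, a few lines of Python make the point (a sketch using the networkx library and the Watts-Strogatz small-world model; the population and acquaintance counts are arbitrary assumptions, not data about any real social network):

    # Small-world effect: a mostly local acquaintance network with a little
    # random rewiring yields very short paths between any two members.
    import networkx as nx

    # 1000 people, each tied to ~10 neighbors, 10% of ties rewired at random
    G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1)
    print(nx.average_shortest_path_length(G))  # typically ~4: a handful of hops

Scaled up to billions of people, the same mechanism is what puts every stranger on the street only a few handshakes away from you.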

I realize now that she had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order. With this respect comes a necessary humility that we, with our early-21st-century chutzpah, lack at our peril. The commonsense view, grounded in this respect, is often right, in advance of the scientific evidence. The clear fragility and inefficiencies of the human-made systems we have built should give us all pause; the fragility of the systems I have worked on certainly humbles me.

I can hardly disagree with Bill Joy’s grandmother. Heck, I keep a compost heap and recycle, plus I don’t take antibiotics unless absolutely necessary. At the same time, I wonder about some of the specifics of the "awareness of the nature of the order of life." We agree that there is such a thing, but the devil is in the details. What exactly is this "order of life"? From whence does it derive? What is its essence? We must find out the answers to these questions, for we stand at a threshold similar not to the invention of fire, but to the invention of chlorophyll. We can’t go back to Kansas any more. You can’t return to the safety of the womb, or the security of the Garden of Eden. Say goodbye to the old order, and embrace the new order, because you -- and especially the essence of who you are -- will determine it. That is why the future needs Bill Joy. That is why the future needs each one of us. And if we do it right, the future will embrace us too. As portrayed so clearly in Terminator II, we are creating the future now.

We should have learned a lesson from the making of the first atomic bomb and the resulting arms race. We didn't do well then, and the parallels to our current situation are troubling.

The effort to build the first atomic bomb was led by the brilliant physicist J. Robert Oppenheimer. Oppenheimer was not naturally interested in politics but became painfully aware of what he perceived as the grave threat to Western civilization from the Third Reich, a threat surely grave because of the possibility that Hitler might obtain nuclear weapons. Energized by this concern, he brought his strong intellect, passion for physics, and charismatic leadership skills to Los Alamos and led a rapid and successful effort by an incredible collection of great minds to quickly invent the bomb.

What is striking is how this effort continued so naturally after the initial impetus was removed. In a meeting shortly after V-E Day with some physicists who felt that perhaps the effort should stop, Oppenheimer argued to continue. His stated reason seems a bit strange: not because of the fear of large casualties from an invasion of Japan, but because the United Nations, which was soon to be formed, should have foreknowledge of atomic weapons. A more likely reason the project continued is the momentum that had built up - the first atomic test, Trinity, was nearly at hand.

We know that in preparing this first atomic test the physicists proceeded despite a large number of possible dangers. They were initially worried, based on a calculation by Edward Teller, that an atomic explosion might set fire to the atmosphere. A revised calculation reduced the danger of destroying the world to a three-in-a-million chance. (Teller says he was later able to dismiss the prospect of atmospheric ignition entirely.) Oppenheimer, though, was sufficiently concerned about the result of Trinity that he arranged for a possible evacuation of the southwest part of the state of New Mexico. And, of course, there was the clear danger of starting a nuclear arms race.

Within a month of that first, successful test, two atomic bombs destroyed Hiroshima and Nagasaki. Some scientists had suggested that the bomb simply be demonstrated, rather than dropped on Japanese cities - saying that this would greatly improve the chances for arms control after the war - but to no avail. With the tragedy of Pearl Harbor still fresh in Americans' minds, it would have been very difficult for President Truman to order a demonstration of the weapons rather than use them as he did - the desire to quickly end the war and save the lives that would have been lost in any invasion of Japan was very strong. Yet the overriding truth was probably very simple: As the physicist Freeman Dyson later said, "The reason that it was dropped was just that nobody had the courage or the foresight to say no."

It's important to realize how shocked the physicists were in the aftermath of the bombing of Hiroshima, on August 6, 1945. They describe a series of waves of emotion: first, a sense of fulfillment that the bomb worked, then horror at all the people that had been killed, and then a convincing feeling that on no account should another bomb be dropped. Yet of course another bomb was dropped, on Nagasaki, only three days after the bombing of Hiroshima.

In November 1945, three months after the atomic bombings, Oppenheimer stood firmly behind the scientific attitude, saying, "It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences."

I would go farther than that. It is not possible to be human without having a sense of wonder, without being curious, and without being who we are – tool makers and tool users. If you try to extinguish that essence, you will be attempting a social reconstruction task that makes B.F. Skinner and Stalin look like amateur garden club busybodies. Knowledge of the world does give us power. Knowledge is also a thing of intrinsic value, for while ignorance may be bliss, it is eventually fatal. The correct answer to the truism that "a little knowledge is dangerous" is "gain more complete knowledge". In this case, this means the knowledge of how to live with this power, and the self-control to make our desires our servant, not our master.

Oppenheimer went on to work, with others, on the Acheson-Lilienthal report, which, as Richard Rhodes says in his recent book Visions of Technology, "found a way to prevent a clandestine nuclear arms race without resorting to armed world government"; their suggestion was a form of relinquishment of nuclear weapons work by nation-states to an international agency.

This proposal led to the Baruch Plan, which was submitted to the United Nations in June 1946 but never adopted (perhaps because, as Rhodes suggests, Bernard Baruch had "insisted on burdening the plan with conventional sanctions," thereby inevitably dooming it, even though it would "almost certainly have been rejected by Stalinist Russia anyway"). Other efforts to promote sensible steps toward internationalizing nuclear power to prevent an arms race ran afoul either of US politics and internal distrust, or distrust by the Soviets. The opportunity to avoid the arms race was lost, and very quickly.

Two years later, in 1948, Oppenheimer seemed to have reached another stage in his thinking, saying, "In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge they cannot lose."

Oppenheimer spoke correctly. But we got ourselves kicked out of the Garden of Eden a long time ago. Deal with it. How do we climb back in the womb? How do we keep from being born? We can’t. We have eaten from the tree of knowledge, and the fruit of the tree of ignorance is poison. The only alternative is to find out what important knowledge we are missing. In this case, we must begin with knowing ourselves.

In 1949, the Soviets exploded an atom bomb. By 1955, both the US and the Soviet Union had tested hydrogen bombs suitable for delivery by aircraft. And so the nuclear arms race began.

Nearly 20 years ago, in the documentary The Day After Trinity, Freeman Dyson summarized the scientific attitudes that brought us to the nuclear precipice:

"I have felt it myself. The glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it's there in your hands, to release this energy that fuels the stars, to let it do your bidding. To perform these miracles, to lift a million tons of rock into the sky. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles - this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds."8

Now, as then, we are creators of new technologies and stars of the imagined future, driven - this time by great financial rewards and global competition - despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.

Don’t forget the lure of immortality and godhood. What are money and competition next to those enticements? Who can resist the temptation to which Adam and Eve succumbed? Or the seduction of eternal youth? You’re going to stand between the people and those temptations? Ha! There won’t be enough left of you for forensic analysis.

In 1947, The Bulletin of the Atomic Scientists began putting a Doomsday Clock on its cover. For more than 50 years, it has shown an estimate of the relative nuclear danger we have faced, reflecting the changing international conditions. The hands on the clock have moved 15 times and today, standing at nine minutes to midnight, reflect continuing and real danger from nuclear weapons. The recent addition of India and Pakistan to the list of nuclear powers has increased the threat of failure of the nonproliferation goal, and this danger was reflected by moving the hands closer to midnight in 1998.

In our time, how much danger do we face, not just from nuclear weapons, but from all of these technologies? How high are the extinction risks?

The philosopher John Leslie has studied this question and concluded that the risk of human extinction is at least 30 percent,9 while Ray Kurzweil believes we have "a better than even chance of making it through," with the caveat that he has "always been accused of being an optimist." Not only are these estimates not encouraging, but they do not include the probability of many horrid outcomes that lie short of extinction.

Faced with such assessments, some serious people are already suggesting that we simply move beyond Earth as quickly as possible. We would colonize the galaxy using von Neumann probes, which hop from star system to star system, replicating as they go. This step will almost certainly be necessary 5 billion years from now (or sooner if our solar system is disastrously impacted by the impending collision of our galaxy with the Andromeda galaxy within the next 3 billion years), but if we take Kurzweil and Moravec at their word it might be necessary by the middle of this century.

Undoubtedly. But as Chris Peterson pointed out, "Let’s say that we somehow scraped together $100 billion to build something that could travel 1% of lightspeed. Assuming that after the assembler breakthrough Moore’s law will hold true for any engineering field, that means that 18 months after you launch, a $50 billion ship follows you at 2% of lightspeed, catching up to you 18 months later. And 36 months later a $25 billion ship, etc. There is no way to escape the nanotech shockwave."
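Peterson's catch-up arithmetic is easy to verify (a quick check of her stated figures, nothing more). Ship 1 leaves at time zero at 0.01c; ship 2 leaves 18 months later at 0.02c; they draw level when their distances match:

\[
  0.01c \cdot t \;=\; 0.02c \,(t - 18\ \text{mo})
  \quad\Longrightarrow\quad
  t \;=\; \frac{0.02}{0.02 - 0.01} \times 18\ \text{mo} \;=\; 36\ \text{mo},
\]

that is, 18 months after the second launch, exactly as quoted -- and each Moore's-law generation repeats the pattern with a cheaper, faster ship.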

What are the moral implications here? If we must move beyond Earth this quickly in order for the species to survive, who accepts the responsibility for the fate of those (most of us, after all) who are left behind?

They are adults. They can take care of themselves. If not, they are responsible for the consequences of their own actions.

And even if we scatter to the stars, isn't it likely that we may take our problems with us or find, later, that they have followed us?

Definitely. Pogo said, "There is no problem so big and so complicated that it can’t be run away from." But you can’t run away from yourself. At the same time, there are many people in this country who ran away from powerful forces (dictatorships, monopolies, etc.) that were too big to fight. The United States began as an experimental model of economic and governmental design, and it was made possible by the power vacuum of the New World. Scattering to the far stars offers the same opportunity for experimenting with new social forms that, if successful, can be copied and implemented here on Earth, and elsewhere.

The fate of our species on Earth and our fate in the galaxy seem inextricably linked.

Well, you got that right, but that link runs in both directions. See the joint position paper of the National Space Society and the Foresight Institute (http://www.islandone.org/MMSG/NSSNanoPosition.html).

Another idea is to erect a series of shields to defend against each of the dangerous technologies. The Strategic Defense Initiative, proposed by the Reagan administration, was an attempt to design such a shield against the threat of a nuclear attack from the Soviet Union. But as Arthur C. Clarke, who was privy to discussions about the project, observed: "Though it might be possible, at vast expense, to construct local defense systems that would 'only' let through a few percent of ballistic missiles, the much touted idea of a national umbrella was nonsense. Luis Alvarez, perhaps the greatest experimental physicist of this century, remarked to me that the advocates of such schemes were 'very bright guys with no common sense.'"

Clarke is forgetting that the inefficient socialist economy of the USSR simply could not afford an SDI. And the Soviets knew Clarke’s Law well enough to know they couldn’t afford not to attempt one. All it took was Reagan’s saber-rattling over SDI, and the Pope’s visits to Poland, to topple the rotting evil empire. So SDI did work.

Clarke continued: "Looking into my often cloudy crystal ball, I suspect that a total defense might indeed be possible in a century or so. But the technology involved would produce, as a by-product, weapons so terrible that no one would bother with anything as primitive as ballistic missiles." 10

A century? Here I would quote Clarke’s Law, though the target is especially ironic in this case. At the same time, Arthur is correct that nanotech makes ballistic missiles primitive.

In Engines of Creation, Eric Drexler proposed that we build an active nanotechnological shield - a form of immune system for the biosphere - to defend against dangerous replicators of all kinds that might escape from laboratories or otherwise be maliciously created. But the shield he proposed would itself be extremely dangerous - nothing could prevent it from developing autoimmune problems and attacking the biosphere itself. 11

I wouldn’t go as far as to say nothing, but maliciously created replicators are difficult to defend against. Robert Freitas has investigated some ways of detecting ecophages, which is the first step. One of our problems is that our understanding of mammalian immune systems is still very primitive.

Similar difficulties apply to the construction of shields against robotics and genetic engineering. These technologies are too powerful to be shielded against in the time frame of interest; even if it were possible to implement defensive shields, the side effects of their development would be at least as dangerous as the technologies we are trying to protect against.

Are you saying that Norton and McAfee are not protecting a sizable portion of the web?

These possibilities are all thus either undesirable or unachievable or both. The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.

The only realistic alternative? Is ignorance bliss? No and no. Ignorance is fatal, and the only realistic alternative is to grow up, using the web as a tool for understanding. Though CritSuite is not catching on as we would like, the web is quickly permeating the social fabric in such a way that it may be used as a mediating structure.

Yes, I know, knowledge is good, as is the search for new truths. We have been seeking knowledge since ancient times. Aristotle opened his Metaphysics with the simple statement: "All men by nature desire to know." We have, as a bedrock value in our society, long agreed on the value of open access to information, and recognize the problems that arise with attempts to restrict access to and development of knowledge. In recent times, we have come to revere scientific knowledge.

But despite the strong historical precedents, if open access to and unlimited development of knowledge henceforth puts us all in clear danger of extinction, then common sense demands that we reexamine even these basic, long-held beliefs.

Yes, these basic long-held beliefs should be re-examined by every generation. But then ask yourself if you want to redesign human nature until we have no desire to know. Cripes, give me alien invasion or gray goo any day, rather than these human-shaped drones that have no desire to know. The problem is not too much knowledge, but too little, and too little self-control. Besides, as I mentioned before, the problem with knowledge is not that it gives us power, but that unequal distribution tempts the powerful to abuse the powerless.

It was Nietzsche who warned us, at the end of the 19th century, not only that God is dead but that "faith in science, which after all exists undeniably, cannot owe its origin to a calculus of utility; it must have originated in spite of the fact that the disutility and dangerousness of the 'will to truth,' of 'truth at any price' is proved to it constantly."

Nietzsche made a lot of mistakes, and his assumptions about reality should be closely examined. He attempted to build an objective moral and ethical system that denied the objective existence of good and evil, and he failed miserably. He did not know, as we do, about the Big Bang, or of the sensitivity to initial conditions that fine-tuned the fundamental forces to make our existence possible. He dismissed the God hypothesis much too easily.

At the same time, he correctly claimed that "faith in science" is an act of faith, based on an unprovable assumption that the universe is ordered (though he never asked why it is ordered). He was also correct that "truth at any price" can be dangerous. For some reason, Nietzsche did not seem to grasp that the mediating principle for truth is love. Someone should do a close examination of his personal life to find out why.

It is this further danger that we now fully face - the consequences of our truth-seeking. The truth that science seeks can certainly be considered a dangerous substitute for God if it is likely to lead to our extinction.

Substituting anything for God is dangerous no matter what. People make false gods out of fame, money, power, a job, luck, the Bible, or a church. False gods make false promises, and leave you empty. And you thought that the first commandment was written to protect Yahweh’s fragile ego. Ha!

If we could agree, as a species, what we wanted, where we were headed, and why, then we would make our future much less dangerous - then we might understand what we can and should relinquish. Otherwise, we can easily imagine an arms race developing over GNR technologies, as it did with the NBC technologies in the 20th century. This is perhaps the greatest risk, for once such a race begins, it's very hard to end it. This time - unlike during the Manhattan Project - we aren't in a war, facing an implacable enemy that is threatening our civilization; we are driven, instead, by our habits, our desires, our economic system, and our competitive need to know.

You keep forgetting the lure of eternal youth, which is driven by our fear of death.

I believe that we all wish our course could be determined by our collective values, ethics, and morals. If we had gained more collective wisdom over the past few thousand years, then a dialogue to this end would be more practical, and the incredible powers we are about to unleash would not be nearly so troubling.

Don’t you think that an enhanced web will help everyone realize which collective values are true? We have gained substantial collective wisdom in the past few thousand years – the problem is that we’ve also gained a lot of bogus garbage, including a lot of self-contradictory lies, and sifting through all that is difficult.

One would think we might be driven to such a dialogue by our instinct for self-preservation. Individuals clearly have this desire, yet as a species our behavior seems to be not in our favor.

Oh, we certainly have an instinct for self-preservation. But even more powerful drives push us into self-destructive behavior. For example, in this day of AIDS, STDs, and school-taught contraception, why do so many young women get pregnant from unprotected sex? For what are they willing to risk death? What do they really need that they are trying (unsuccessfully) to get? Bill Joy and I are like every other human being – we need to love and be loved.

In dealing with the nuclear threat, we often spoke dishonestly to ourselves and to each other, thereby greatly increasing the risks. Whether this was politically motivated, or because we chose not to think ahead, or because when faced with such grave threats we acted irrationally out of fear, I do not know, but it does not bode well.

That is unfortunately true -- we lie to ourselves and to each other, even when our survival depends on us telling the truth, and listening to it.

The new Pandora's boxes of genetics, nanotechnology, and robotics are almost open, yet we seem hardly to have noticed. Ideas can't be put back in a box; unlike uranium or plutonium, they don't need to be mined and refined, and they can be freely copied. Once they are out, they are out. Churchill remarked, in a famous left-handed compliment, that the American people and their leaders "invariably do the right thing, after they have examined every other alternative." In this case, however, we must act more presciently, as to do the right thing only at last may be to lose the chance to do it at all.

True. But what is the right thing? Or, to quote a famous politician, "What is truth?"

As Thoreau said, "We do not ride on the railroad; it rides upon us"; and this is what we must fight, in our time. The question is, indeed, Which is to be master? Will we survive our technologies?

Some of us will not. But it is not the technologies we need to fear, but their evil use. This means that we do not have to worry about technology being the master. We do need to worry about becoming slaves to greed, power, money, or pleasure. And we need to worry about our friends succumbing to the same temptations.

We are being propelled into this new century with no plan, no control, no brakes.

Bill Joy seems to be saying that we need to slam on the nonexistent brakes. Good luck.

Have we already gone too far down the path to alter course? I don't believe so, but we aren't trying yet, and the last chance to assert control - the fail-safe point - is rapidly approaching. We have our first pet robots, as well as commercially available genetic engineering techniques, and our nanoscale techniques are advancing rapidly. While the development of these technologies proceeds through a number of steps, it isn't necessarily the case - as happened in the Manhattan Project and the Trinity test - that the last step in proving a technology is large and hard. The breakthrough to wild self-replication in robotics, genetic engineering, or nanotechnology could come suddenly, reprising the surprise we felt when we learned of the cloning of a mammal.

And yet I believe we do have a strong and solid basis for hope. Our attempts to deal with weapons of mass destruction in the last century provide a shining example of relinquishment for us to consider: the unilateral US abandonment, without preconditions, of the development of biological weapons. This relinquishment stemmed from the realization that while it would take an enormous effort to create these terrible weapons, they could from then on easily be duplicated and fall into the hands of rogue nations or terrorist groups.

The clear conclusion was that we would create additional threats to ourselves by pursuing these weapons, and that we would be more secure if we did not pursue them. We have embodied our relinquishment of biological and chemical weapons in the 1972 Biological Weapons Convention (BWC) and the 1993 Chemical Weapons Convention (CWC).12

As for the continuing sizable threat from nuclear weapons, which we have lived with now for more than 50 years, the US Senate's recent rejection of the Comprehensive Test Ban Treaty makes it clear relinquishing nuclear weapons will not be politically easy. But we have a unique opportunity, with the end of the Cold War, to avert a multipolar arms race. Building on the BWC and CWC relinquishments, successful abolition of nuclear weapons could help us build toward a habit of relinquishing dangerous technologies. (Actually, by getting rid of all but 100 nuclear weapons worldwide - roughly the total destructive power of World War II and a considerably easier task - we could eliminate this extinction threat. 13)

Verifying relinquishment will be a difficult problem, but not an unsolvable one. We are fortunate to have already done a lot of relevant work in the context of the BWC and other treaties. Our major task will be to apply this to technologies that are naturally much more commercial than military. The substantial need here is for transparency, as difficulty of verification is directly proportional to the difficulty of distinguishing relinquished from legitimate activities.

An essential book that addresses this problem is David Brin’s "The Transparent Society". He has some great insights on verification. A lot of people won’t like what he describes, though…

I frankly believe that the situation in 1945 was simpler than the one we now face: The nuclear technologies were reasonably separable into commercial and military uses, and monitoring was aided by the nature of atomic tests and the ease with which radioactivity could be measured. Research on military applications could be performed at national laboratories such as Los Alamos, with the results kept secret as long as possible.

The GNR technologies do not divide clearly into commercial and military uses; given their potential in the market, it's hard to imagine pursuing them only in national laboratories. With their widespread commercial pursuit, enforcing relinquishment will require a verification regime similar to that for biological weapons, but on an unprecedented scale. This, inevitably, will raise tensions between our individual privacy and desire for proprietary information, and the need for verification to protect us all. We will undoubtedly encounter strong resistance to this loss of privacy and freedom of action.

No kidding! :-) Should we pay for privacy? Not in terms of dollars, but perhaps in terms of trust?

Verifying the relinquishment of certain GNR technologies will have to occur in cyberspace as well as at physical facilities. The critical issue will be to make the necessary transparency acceptable in a world of proprietary information, presumably by providing new forms of protection for intellectual property.

Transparency is the word, as David Brin proposes.

Verifying compliance will also require that scientists and engineers adopt a strong code of ethical conduct, resembling the Hippocratic oath, and that they have the courage to whistleblow as necessary, even at high personal cost. This would answer the call - 50 years after Hiroshima - by the Nobel laureate Hans Bethe, one of the most senior of the surviving members of the Manhattan Project, that all scientists "cease and desist from work creating, developing, improving, and manufacturing nuclear weapons and other weapons of potential mass destruction."14 In the 21st century, this requires vigilance and personal responsibility by those who would work on both NBC and GNR technologies to avoid implementing weapons of mass destruction and knowledge-enabled mass destruction.

We need to have the infrastructure to quickly disseminate antibodies and anti-virus mechanisms.

Thoreau also said that we will be "rich in proportion to the number of things which we can afford to let alone." We each seek to be happy, but it would seem worthwhile to question whether we need to take such a high risk of total destruction to gain yet more knowledge and yet more things; common sense says that there is a limit to our material needs - and that certain knowledge is too dangerous and is best forgone.

Is ignorance preferable? Let’s face it, evil exists. Do you want to make yourself into sheep, to be driven before the wolves? You might forsake nanorobots, but how would you guarantee that nobody else would pursue them? Without turning into a wolf yourself?

Neither should we pursue near immortality without considering the costs, without considering the commensurate increase in the risk of extinction. Immortality, while perhaps the original, is certainly not the only possible utopian dream.

But once everyone has eternal youth, the other dreams become more likely.

I recently had the good fortune to meet the distinguished author and scholar Jacques Attali, whose book Lignes d'horizons (Millennium, in the English translation) helped inspire the Java and Jini approach to the coming age of pervasive computing, as previously described in this magazine. In his new book Fraternités, Attali describes how our dreams of utopia have changed over time:

"At the dawn of societies, men saw their passage on Earth as nothing more than a labyrinth of pain, at the end of which stood a door leading, via their death, to the company of gods and to Eternity. With the Hebrews and then the Greeks, some men dared free themselves from theological demands and dream of an ideal City where Liberty would flourish. Others, noting the evolution of the market society, understood that the liberty of some would entail the alienation of others, and they sought Equality."

Jacques helped me understand how these three different utopian goals exist in tension in our society today. He goes on to describe a fourth utopia, Fraternity, whose foundation is altruism.

Interestingly enough, Axelrod discovered in his iterated prisoner’s dilemma tournaments that reciprocal altruism (tit-for-tat) has survival value, even for non-biological entities.
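For the curious, the core of Axelrod's result fits in a few lines of Python (a sketch with the standard payoff matrix; the two strategies shown and the 200-round match are simplifications, not Axelrod's full tournament roster):

    # Iterated prisoner's dilemma: C = cooperate, D = defect.
    # PAYOFF maps (my move, their move) to (my points, their points).
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(opponent_history):
        # Cooperate first, then mirror the opponent's last move.
        return opponent_history[-1] if opponent_history else 'C'

    def always_defect(opponent_history):
        return 'D'

    def play(strat_a, strat_b, rounds=200):
        score_a = score_b = 0
        hist_a, hist_b = [], []
        for _ in range(rounds):
            a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's past
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (600, 600): cooperation compounds
    print(play(tit_for_tat, always_defect))  # (199, 204): defection barely profits

The reciprocating strategy never "beats" its own partner in a single match, yet it piles up the highest totals over many pairings -- survival value without biology.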

Fraternity alone associates individual happiness with the happiness of others, affording the promise of self-sustainment.

This crystallized for me my problem with Kurzweil's dream. A technological approach to Eternity - near immortality through robotics - may not be the most desirable utopia, and its pursuit brings clear dangers. Maybe we should rethink our utopian choices.

Where can we look for a new ethical basis to set our course? I have found the ideas in the book Ethics for the New Millennium, by the Dalai Lama, to be very helpful.

What is wrong with the old ethical basis? In fact, the Dalai Lama’s ethical basis is pretty old too. If we think of it as evolution in action, any old ethical basis (on average, based on its survival) is going to be better than any new one. By the way, did anyone notice that in Terminator II, John Connor has no explanation for why we’re not supposed to go around killing people, other than "just trust me on this"? Why does Connor not have a stronger basis?

Bill Joy says that the Dalai Lama’s writing is "helpful". Why does he seem to be afraid to say that the Dalai Lama is right? Is Bill Joy suffering from the "loss of nerve" that brought down every civilization now extinct? Pope John Paul II echoes the Dalai Lama, saying that we must build a civilization of love. This convergence is rather interesting, given the widely different origins of Buddhism and Catholicism (Hinduism and Judaism, respectively).

As is perhaps well known but little heeded, the Dalai Lama argues that the most important thing is for us to conduct our lives with love and compassion for others, and that our societies need to develop a stronger notion of universal responsibility and of our interdependency; he proposes a standard of positive ethical conduct for individuals and societies that seems consonant with Attali's Fraternity utopia.

The Dalai Lama further argues that we must understand what it is that makes people happy, and acknowledge the strong evidence that neither material progress nor the pursuit of the power of knowledge is the key - that there are limits to what science and the scientific pursuit alone can do.

Mystics and religious leaders have been saying similar things for millennia.

Our Western notion of happiness seems to come from the Greeks, who defined it as "the exercise of vital powers along lines of excellence in a life affording them scope." 15

We have learned a little in the past two and a half millennia. We are happy when we know the truth, and live it; we are happy when we are fulfilled – when our lives are full of meaning. And our lives are most full of meaning when we give of ourselves.

Perhaps the problem is that as computer geeks, Bill Joy and I have only pursued our "vital powers" along a limited number of "lines of excellence". The Greeks recognized that the entire person must cultivate a wide variety of skills. I don’t know about Bill, but I have certainly neglected my artistic creation, and while I’m still young enough to enjoy working out with college wrestling and gymnastics teams, that won’t last much longer -- unless the nanotech revolution hits soon. But even if the breakthrough happens later rather than earlier, we can still exercise our vital powers in the emotional and spiritual planes. And I’ll bet that such exercise will bring us happiness.

Clearly, we need to find meaningful challenges and sufficient scope in our lives if we are to be happy in whatever is to come. But I believe we must find alternative outlets for our creative forces, beyond the culture of perpetual economic growth;

 

Funny, I’ve heard lots of Catholic saints, Buddhist sages, and Jewish prophets say the same thing. But the nanotech revolution is not driven by money – dollars are simply a way to efficiently distribute the wealth generated by a culture, and to measure demand at a particular supply. What drives the demand?

This growth has largely been a blessing for several hundred years, but it has not brought us unalloyed happiness, and we must now choose between the pursuit of unrestricted and undirected growth through science and technology and the clear accompanying dangers.

So is Bill Joy saying that we should restrict and direct that growth in such a way that the dangers are avoided? What exactly are the choices?

It is now more than a year since my first encounter with Ray Kurzweil and John Searle. I see around me cause for hope in the voices for caution and relinquishment and in those people I have discovered who are as concerned as I am about our current predicament. I feel, too, a deepened sense of personal responsibility - not for the work I have already done, but for the work that I might yet do, at the confluence of the sciences.

But many other people who know about the dangers still seem strangely silent. When pressed, they trot out the "this is nothing new" riposte - as if awareness of what could happen is response enough. They tell me, There are universities filled with bioethicists who study this stuff all day long. They say, All this has been written about before, and by experts. They complain, Your worries and your arguments are already old hat.

Yeah, pro-death advocates like Hardin and Singer. Sheesh. I am truly glad that a generalist like Bill Joy is involved. The topic of our future demands that every human being participate. (By the way, because he acts humanely toward his mother, Singer is starting to realize that his philosophy has holes; see http://www.neopolitique.org/articles/princeton_prof_nov'99.htm.) At any rate, it would be important to make those arguments public, and to discuss the counter-arguments.

I don't know where these people hide their fear. As an architect of complex systems I enter this arena as a generalist. But should this diminish my concerns? I am aware of how much has been written about, talked about, and lectured about so authoritatively.

Most people are not. Can we provide URLs?

But does this mean it has reached people? Does this mean we can discount the dangers before us?

We can discount them when we are past them. Sorry, I wish we could give a more comforting answer.

Knowing is not a rationale for not acting. Can we doubt that knowledge has become a weapon we wield against ourselves?

Yes, we can doubt, for ignorance causes doubt. Not ignorance about nanotech, but ignorance about how to handle that knowledge, ignorance about what it means to be human, and ignorance about what gives meaning to our lives.

The experiences of the atomic scientists clearly show the need to take personal responsibility, the danger that things will move too fast, and the way in which a process can take on a life of its own. We can, as they did, create insurmountable problems in almost no time flat. We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions.

Yes, and that is the purpose of the Foresight Institute.

My continuing professional work is on improving the reliability of software. Software is a tool, and as a toolbuilder I must struggle with the uses to which the tools I make are put. I have always believed that making software more reliable, given its many uses, will make the world a safer and better place; if I were to come to believe the opposite, then I would be morally obligated to stop this work. I can now imagine such a day may come.

I’m glad that Bill Joy has the guts to follow his moral obligations, and I fear that many people do not. But is his imagination working correctly?

This all leaves me not angry but at least a bit melancholic. Henceforth, for me, progress will be somewhat bittersweet.

I understand, and I sympathize. But I encourage Bill and everyone else to have hope.

Do you remember the beautiful penultimate scene in Manhattan where Woody Allen is lying on his couch and talking into a tape recorder? He is writing a short story about people who are creating unnecessary, neurotic problems for themselves, because it keeps them from dealing with more unsolvable, terrifying problems about the universe.

He leads himself to the question, "Why is life worth living?" and to consider what makes it worthwhile for him: Groucho Marx, Willie Mays, the second movement of the Jupiter Symphony, Louis Armstrong's recording of "Potato Head Blues," Swedish movies, Flaubert's Sentimental Education, Marlon Brando, Frank Sinatra, the apples and pears by Cézanne, the crabs at Sam Wo's, and, finally, the showstopper: his love Tracy's face.

ARGH! What about Tracy herself?! I really like Woody Allen’s comedies, but taking his philosophy seriously is foolhardy. Does Bill Joy realize what Woody Allen is saying?

Each of us has our precious things, and as we care for them we locate the essence of our humanity. In the end, it is because of our great capacity for caring that I remain optimistic we will confront the dangerous issues now before us.

Is Bill Joy reducing the essence of humanity to the things we like? Argh!!! That’s worse than using a Ming vase as a chamber pot! He desperately needs to read Viktor Frankl's Man’s Search for Meaning. Frankl found purpose and meaning in life when all material possessions were taken from him, his family and friends were killed, and he was constantly surrounded by death and brutality.

My immediate hope is to participate in a much larger discussion of the issues raised here, with people from many different backgrounds, in settings not predisposed to fear or favor technology for its own sake.

CritSuite, the Foresight Institute's software for attaching critical commentary to Web documents, can help here.

As a start, I have twice raised many of these issues at events sponsored by the Aspen Institute and have separately proposed that the American Academy of Arts and Sciences take them up as an extension of its work with the Pugwash Conferences. (These have been held since 1957 to discuss arms control, especially of nuclear weapons, and to formulate workable policies.)

It's unfortunate that the Pugwash meetings started only well after the nuclear genie was out of the bottle - roughly 15 years too late. We are also getting a belated start on seriously addressing the issues around 21st-century technologies - the prevention of knowledge-enabled mass destruction - and further delay seems unacceptable.

So I'm still searching; there are many more things to learn. Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided. I'm up late again - it's almost 6 am. I'm trying to imagine some better answers, to break the spell and free them from the stone.

I’m glad that Bill Joy is searching. I hope that many join this search, and I hope that as technologists, we can come up with better intellectual tools to aid in that search.

1 The passage Kurzweil quotes is from Kaczynski's Unabomber Manifesto, which was published jointly, under duress, by The New York Times and The Washington Post to attempt to bring his campaign of terror to an end. I agree with David Gelernter, who said about their decision:

"It was a tough call for the newspapers. To say yes would be giving in to terrorism, and for all they knew he was lying anyway. On the other hand, to say yes might stop the killing. There was also a chance that someone would read the tract and get a hunch about the author; and that is exactly what happened. The suspect's brother read it, and it rang a bell.

"I would have told them not to publish. I'm glad they didn't ask me. I guess."

(Drawing Life: Surviving the Unabomber. Free Press, 1997: 120.)

2 Garrett, Laurie. The Coming Plague: Newly Emerging Diseases in a World Out of Balance. Penguin, 1994: 47-52, 414, 419, 452.

3 Isaac Asimov described what became the most famous view of ethical rules for robot behavior in his book I, Robot in 1950, in his Three Laws of Robotics:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

4 Michelangelo wrote a sonnet that begins:

Non ha l' ottimo artista alcun concetto
Ch' un marmo solo in sè non circonscriva
Col suo soverchio; e solo a quello arriva
La man che ubbidisce all' intelletto.

Stone translates this as:

The best of artists hath no thought to show
which the rough stone in its superfluous shell
doth not include; to break the marble spell
is all the hand that serves the brain can do.

Stone describes the process: "He was not working from his drawings or clay models; they had all been put away. He was carving from the images in his mind. His eyes and hands knew where every line, curve, mass must emerge, and at what depth in the heart of the stone to create the low relief."

(The Agony and the Ecstasy. Doubleday, 1961: 6, 144.)

5 First Foresight Conference on Nanotechnology in October 1989, a talk titled "The Future of Computation." Published in Crandall, B. C. and James Lewis, editors. Nanotechnology: Research and Perspectives. MIT Press, 1992: 269. See also www.foresight.org/Conferences/MNT01/Nano1.html.

6 In his 1963 novel Cat's Cradle, Kurt Vonnegut imagined a gray-goo-like accident where a form of ice called ice-nine, which becomes solid at a much higher temperature, freezes the oceans.

7 Kauffman, Stuart. "Self-replication: Even Peptides Do It." Nature, 382, August 8, 1996: 496. See www.santafe.edu/sfi/People/kauffman/sak-peptides.html.

8 Else, Jon. The Day After Trinity: J. Robert Oppenheimer and The Atomic Bomb (available at www.pyramiddirect.com).

9 This estimate is in Leslie's book The End of the World: The Science and Ethics of Human Extinction, where he notes that the probability of extinction is substantially higher if we accept Brandon Carter's Doomsday Argument, which is, briefly, that "we ought to have some reluctance to believe that we are very exceptionally early, for instance in the earliest 0.001 percent, among all humans who will ever have lived. This would be some reason for thinking that humankind will not survive for many more centuries, let alone colonize the galaxy. Carter's doomsday argument doesn't generate any risk estimates just by itself. It is an argument for revising the estimates which we generate when we consider various possible dangers." (Routledge, 1996: 1, 3, 145.)
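For the curious, here is a back-of-the-envelope version of the reasoning -- my own illustration, in the style of Richard Gott's frequentist variant rather than Leslie's full Bayesian treatment. Treat your birth rank n as a uniform random draw from all N humans who will ever live. Then with 95 percent confidence you are not among the first 5 percent:

P(n > 0.05 N) = 0.95, which implies N < 20 n.

With roughly n = 6 x 10^10 humans born to date, that gives N < 1.2 x 10^12. The bite comes when comparing futures: a humanity that colonizes the galaxy implies an N so vast that our own birth rank would be freakishly early, so the argument shifts credence away from such long futures -- which is exactly the revision of estimates Leslie describes.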

10 Clarke, Arthur C. "Presidents, Experts, and Asteroids." Science, June 5, 1998. Reprinted as "Science and Society" in Greetings, Carbon-Based Bipeds! Collected Essays, 1934-1998. St. Martin's Press, 1999: 526.

11 And, as David Forrest suggests in his paper "Regulating Nanotechnology Development," available at www.foresight.org/NanoRev/Forrest1989.html, "If we used strict liability as an alternative to regulation it would be impossible for any developer to internalize the cost of the risk (destruction of the biosphere), so theoretically the activity of developing nanotechnology should never be undertaken." Forrest's analysis leaves us with only government regulation to protect us - not a comforting thought.
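Forrest's conclusion follows from one line of expected-value arithmetic (my gloss, not his notation): under strict liability, a rational developer proceeds only if expected profit exceeds expected liability, that is, only if Profit > p x D, where p is the probability of a runaway accident and D is the damage. With D equal to the destruction of the biosphere, D is effectively unbounded, so for any p > 0 no finite profit or insurance premium can cover p x D, and the venture is never rationally undertaken.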

12 Meselson, Matthew. "The Problem of Biological Weapons." Presentation to the 1,818th Stated Meeting of the American Academy of Arts and Sciences, January 13, 1999. (minerva.amacad.org/archive/bulletin4.htm)

13 Doty, Paul. "The Forgotten Menace: Nuclear Weapons Stockpiles Still Represent the Biggest Threat to Civilization." Nature, 402, December 9, 1999: 583.

14 See also Hans Bethe's 1997 letter to President Clinton, at www.fas.org/bethecr.htm.

15 Hamilton, Edith. The Greek Way. W. W. Norton & Co., 1942: 35.

Bill Joy, cofounder and Chief Scientist of Sun Microsystems, was cochair of the presidential commission on the future of IT research, and is coauthor of The Java Language Specification. His work on the Jini pervasive computing technology was featured in Wired 6.08.

