Here are my comments on the commentaries of the discussants.

I want to thank everybody for writing, and Robin Hanson for putting things together. The discussants were insightful and persuasive. In responding, I often made conjectures and scenarios. Sometimes this was in support of a discussant's point, sometimes in opposition. Taken together, my conjectures and scenarios are not consistent (and this is a natural characteristic of scenarios), so perhaps I should point to my starting place:

  1. The creation of superhuman intelligence appears to be a plausible eventuality if our technical progress proceeds for another few years.
  2. The existence of superhuman intelligence would yield forms of progress that are qualitatively less understandable than advances of the past.
Thanks very much,
-- Vernor Vinge


Comment on Gregory Benford's commentary:

It would be interesting to see how the "resistive term" would manifest itself. There may be problems we do not guess, while at least some of the things we regard as immutable barriers may simply be misconceptions on our part.

I certainly agree that my version of a singularity does not imply that anything becomes "infinite".

The only point where I have a disagreement with Greg is in his last sentence. It seems plausible to me that the transcendent critters would soon have their own infrastructure, and would not be especially dependent on the stay-behinds.


Comment on David Brin's commentary:

(An aside: David Brin gives the historical context of techno-transcendentalism. I suspect that I got into this area because of my hope that logic and technology would lead to conclusions that fit some kind of inner needs. Whatever the psychological/emotional background of a thinker, it's important to be aware of the background -- or at least to be aware that there is a background. Reason and objectivity are extremely important, but any researcher/thinker who denies internal bias is at least self-deceiving.)

As David does here, I like to put thinking about the future in terms of scenarios. I don't know what the future holds, but playing with plausible scenarios is fun -- and it can give guidance as to symptoms to watch for and safest strategies. David lists three scenarios:

"1. Achieve some form of 'singularity' -- or at least a phase shift, to a higher and more knowledgeable society (one that may have problems of its own that we can't imagine.)"
My version of (1) is the technological Singularity that I've been talking about. It is a limited notion (see my overall comments at the beginning of these responses). Unfortunately, it does not preclude side-effects such as the destruction of humanity.
"2. Self-destruction"
Alas, self-destruction appears to be quite a plausible scenario to me. Many people look at the 1990s as the dawn of a more peaceful age, but in retrospect it may be a short pause ... before weapons of mass destruction become very widely available. Intelligence as a self-lethal adaptation would explain Fermi's Paradox better than the Singularity.
"3. Retreat into some form of more traditional human society. One that discourages the sorts of extravagant exploration that might lead to results 1 or 2."
(3) is very like Gunther Stent's scenario. I suspect such a world would have its ups and downs (intermediate destruction, fallen civilizations). And somehow the prospect of 10000 years of legacy software is almost as intimidating as anything we are talking about :-)

David discusses several models of super-intelligence: Teilhard de Chardin's, David's in Earth. I see additional models from the other discussants in this sequence. I'll try to point to these as I go along. (I've also attached a little essay at the end, describing one of my candidates.)


Comment on Damien Broderick's commentary:

Damien Broderick starts his commentary with a quote from my 1993 essay about the Technological Singularity. That quote does show the focus of my notion of technological singularity, which comes down to two points:

  1. The creation of superhuman intelligence appears to be a plausible eventuality if our technical progress proceeds for another few years.
  2. The existence of superhuman intelligence would yield forms of progress that are qualitatively less understandable than advances of the past.
Point 1 is something that has often been speculated about -- but usually for some era a million years from now. At that remove, it is a comforting goal. The notion that it may happen in the near historical future is unsettling. (And, as Damien says: "It's a pity the timing coincides fairly closely with the dates proposed by the superstitious.")

Damien cites three varieties of singularity (due to Anders Sandberg). The notion of a "wall" or "prediction horizon" is part of what I wanted to convey with the term "singularity" (ie, by metaphor with the term's use in general relativity, which itself no doubt comes from its use in math and mathematical modelling: the idea of a place where a smooth, simple model fails and some more complicated description is necessary). In particular, I don't claim that the technological singularity implies that anything becomes infinite.

Damien writes:

"Yet many people, to my amazement, denounce the idea that we might live indefinitely extended and ceaselessly revised lives in a post-Spike milieu."
I agree with his defense and counter-attack against this point of view. At the same time, I think the argument he discusses here illustrates a fundamental issue: We can now imagine technology so powerful that it may be able to give us extreme goods such as immortality and artificial intelligence. Contemplating these possibilities leads to close inspection of things like self-awareness and mortality. I find that the closer I look at such things, the more analytical the discussion becomes ... the more diffuse and elusive the objects become. There was a time (and it is still the time for most people) when self-awareness was the most concrete thing in life (the one thing "I" know). If we get the tools to attack these issues directly, I think we are in for some big surprises. (See the appended essay for one angle on this.)

Radical optimism has apocalyptic endpoints -- even if there are no hidden "monkey's paw" gotchas. It is interesting that the prospect of immortality leads to many of the same problems as increased intelligence. I could imagine living a thousand years, even ten thousand. But to live a million, a billion? For that, I would have to become something greater, ultimately something much, much greater. I don't object! I'm just saying that the goal is not quite what it might have seemed in the past.


Comment on Nick Bostrom's commentary:

Nick Bostrom brings out the idea that the singularity is just one possible scenario. This is really how I like to look at it (and as he says, I do think The Singularity is one of the more likely scenarios).

To me, what Nick calls "Verticality" is just a plausible side effect of the creation of superintelligence. This is mostly by analogy with past progress:

(Actually, I imagine (a frail and indefensible conjecture) one or two phases beyond this: )

I agree with Nick's point that nanotechnology without superintelligence would not lead to a singularity. Our tools and our design problems would become so complex that progress as a whole would level off (Gunther Stent's golden age).

Nick makes a case for the intelligibility (to us humans) of superhumans. A number of the discussants disagree with the "unknowability" claim I make about superhumanity. Let me make some comments about that:


Comment on Alexander Chislenko's commentary:

Sasha writes:

"It seems quite natural that at every point we pay most attention to the parameters that are just about to peak, as they represent the hottest current trends."
This is true. (The principle also applies to engineering efforts. For instance, we try to alleviate poor progress in some technical areas (battery life, for example) with the big wins we are achieving in other areas (logic hardware).)

On the other hand, it seems to me that the plausible connection between powerful computation and the creation of superhuman intelligence gives some external, objective excuse for giving special importance to hardware development trends. (If we do not achieve superhuman intelligence, we will find other things to be enthusiastic about, but progress will proceed along circumscribed paths. This is the line that I'm playing with in the science-fiction stories that take place in the "Slow Zone". Lots of neat things happen, but all our technical progress saturates in a few hundred years, and humankind looks back at our present time as the "era of failed dreams".)


Comment on Robin Hanson's commentary:

In my past conversations about the technological singularity, the most frequent criticism (or credibility problem) has been about the possibility that we can make artificial, human equivalent minds. It's interesting that among the present discussants the most frequent criticism is about my claim (conjecture) that superhumans would be unknowable. At the very least, I'm claiming that progress and tech in a superhuman world would be qualitatively different (and less knowable) than in the past. I think this mild form of the claim is still very credible.

As for that goldfish...
An important point in Robin's reasoning is the contention that once a critter has a human level of cognition, it could understand higher levels at least in some general way (though perhaps slowly).

I certainly do not have any killer argument to counter this. Let me run through some points that give me an intuition against it.

Of course, several of the points I've just made depend on assumptions about what superhumanity would be like (in particular analogies with distributed systems). Sorry!

Even though my intuition still makes me support the strong version of the unknowability claim, over the last five or ten years I've come to see greater and greater plausibility for a more level-headed view of superhumanity, similar to what Robin discusses here.

There have been two big influences that have caused this change in my thinking:

So maybe the self-aware part of a superhuman would not be a greater than human intellect. This situation is fairly imaginable. It's an extension of our own situation: the part of us that is self-aware is probably a very small part of everything that is going on inside our minds. We depend on our non-conscious facilities for many things. (It's amusing that some of these non-conscious facilities, such as creativity, are also cited as things that differentiate us from the "nonsentient" world of machines.)

So we might get a creature that would be a lot like a human, but with extraordinarily good "intuition". The creature would be coordinating and correlating vast amounts of information, but not in its top-level consciousness. Looking at it from the point of view of Marvin Minsky's Society of Mind, the creature would possess an internal society extraordinarily more complicated than our own. At the top level, there might be something that actually makes decisions on the basis of what comes in from the lower-level agents. That apex agent itself might not appear to be much deeper than a human, but the overall organization that it is coordinating would be more creative and competent than a human.

At the end of his commentary, Robin says:

"Yet, his "unknowable" descriptor has become a mental block preventing a great many smart future-oriented people from thinking seriously beyond a certain point. As Anders Sandberg said, it ends "all further inquiry just as Christians do with `It is the Will of God'." Many of Vinge's fans even regularly rebuke others for considering such analysis. Vinge, would you please rebuke them?"
Despite my intuition about unknowability, I do like to speculate about what superhumans might be like. It is fun, and if we have a soft take-off, it is a matter of life and death that we do so. (With my Zones universe, I've tried to give myself a playing field where I can actually play with the different possibilities.)

Comment on Peter C. McCluskey's commentary:

Yes, probably the greatest weakness of the tag "singularity" is that it implies that something is going infinite, which I certainly did not mean. (See my comment in the Damien Broderick section about what I liked about the term "singularity".)

I like the tag "the transcendence". It certainly avoids the "blowing up to infinity" interpretation. (Perhaps its greatest weakness is that transcendence is sometimes associated with religious transformations.)


Comment on Max More's commentary:

Max states and critiques the following two assumptions:

"Assumption #1: If we can achieve human level intelligence in AI, then superintelligence will follow quickly and almost automatically."

"Assumption #2: Once greater than human intelligence comes into existence, everything will change within hours or days or, at most, a few weeks. All the old rules will cease to apply."

As for Assumption #1, Max finds the possibility of human-level AI relatively plausible, but the possibility of creating superintelligence much more remote. This also came up when he and Natasha More interviewed me last year -- see my comments in Robin Hanson's section. As Max says, my experience has been that most people find the possibility of human-level machine intelligences to be more incredible than "Assumption #1". Since that conservative point of view hasn't had much play in these commentaries, let me take a few words to talk about a scenario where we never even achieve human-equivalent machine intelligence: Pretend it is 2050, and none of the events we've been talking about ever happened, and you are charged with writing an essay about why it was obvious all along that such events could not happen. (In such a situation, I'm sure that most analysts will say it was obvious all along :-)

There are several possibilities for such a technical failure. I think I list them in my singularity essay. Nowadays, the barrier to progress that seems most likely to me is what might be called "Murphy's Counterpoint to Moore's Law":

"The maximum possible effectiveness of a software system increases in direct proportion to the log of the effectiveness (ie, speed, bandwidth, memory capacity) of the underlying hardware."
(Note, I am not claiming this as a true principle. And as stated it is a bungle, since I don't give any hint of a quantitative meaning for "effectiveness of a software system". On the other hand, by 2050, in this singularity-free scenario, I'm sure the analysts can come up with some more acceptable statement of Murphy's Counterpoint.)
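Purely for illustration, here is one toy numerical reading of Murphy's Counterpoint. The proportionality constant K and the 18-month hardware doubling time are my own assumptions, added just to show the shape of the claim, not part of the principle as stated above:

    # Toy illustration of "Murphy's Counterpoint" (an assumed, not established, relation):
    # hardware effectiveness doubles every 18 months, while software effectiveness
    # grows only as the log of hardware effectiveness.
    import math

    K = 1.0  # arbitrary proportionality constant, purely hypothetical

    for years in range(0, 31, 5):
        hardware = 2 ** (years / 1.5)       # one doubling every 18 months
        software = K * math.log2(hardware)  # software gain ~ log of hardware gain
        print(f"year {years:2d}: hardware x{hardware:>12,.1f}  software effectiveness {software:4.1f}")

In this toy model a millionfold hardware improvement over thirty years buys only about a twentyfold rise in software effectiveness, which is the sense in which, under such a principle, progress as a whole would level off.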

In this scenario, the Y2K problem is just the first famous example of the "nibbled to death by rats" problem of trying to coordinate serious distributed computing. In this scenario, every time we try to implement large, complex systems, we go down in flames (add your favorite list of system fiascos here ... it is the prologue to the debacles of the following decades). The wise analysts of 2050 look back on these system failures and explain them in the abstract, as situations where system designers tried to exceed the limit of Murphy's Counterpoint.

Here is a case where comparing scenarios can lead to some concrete symptoms to watch for. Of course, continuing, painful system disasters indicate at least the possibility that we are heading into the Murphy's Counterpoint scenario. But some such disasters will no doubt happen even if we are headed toward the Singularity. A better thing to watch might be theoretical progress in programming methodologies. (Here follows more-than-usually-errant speculation, guided by my intuition about what endpoints look like:) If the creation of systems continues to be regarded as "software engineering", if our methodologies continue to be deterministic, then I'd bet on Murphy's Counterpoint. If, for very powerful hardware systems, we begin to see non-deterministic, biological approaches to getting desired performance, then we are likely headed for The Singularity ... and ten or fifteen years from now the most powerful systems may best be described in animal (or at least biological) terms. "But what about critical systems?" Would anyone want nondeterminism, animal unpredictability, in a life-critical system? I think the answer is "yes", if the system is to be extremely large and complex. For such large systems, the biological paradigm will probably be a much more reliable engineering paradigm. And in fact, we have depended on animal unreliability for life-critical applications in the past (cf Lassie).

So the rise of biological paradigms in large system development would be a striking and important symptom, a powerful clue as to the direction that things are headed.

In connection with Max's discussion of Assumption #2 (the speed of transition): Earlier in this sequence I gave my reason for thinking the transition will be superfast. A slow transition (soft takeoff) would be ever so much safer, and I hope for one.


Comment on Michael Nielsen's commentary:

Michael Nielsen makes several good points for the conclusion that Dominant AI might not occur. His analogy of the relationship of humans with bacteria is especially interesting. Bacteria really are undominated (in fact, it could certainly be argued that the Bacteria Rule!). It seems plausible that humans and superhumans could coexist in the same area. In fact, it's even possible that we might not recognize that the creation was an intelligence at all, but just another force of nature (eg, the Zones in A Fire Upon the Deep). There is also the possibility that the most desirable real estate is different for superintelligences than for humans. Maybe they prefer the higher data rates possible on the surface of a neutron star. (Unfortunately, there might still be second- and third-rate superhumans stuck in our domains.)

Hans Moravec has a soft-takeoff scenario in which old-style humans get the earth, but hardball competition rules elsewhere. (Hopefully, this scenario will be described in Hans's new book Robot: Mere Machine to Transcendent Mind, Oxford University Press, Fall 1998.)

Somewhere in the past I've said (or maybe I put the claim in the mouth of a character (which is always safer)) that a characteristic of the post-Singularity world might be that any human-statable goal whose success can be objectively measured would be attainable. (The next step in the argument was the rhetorical question: "So, what do you do after that, Human?") I think Michael gives some good examples to show that there are clear goals that are unattainable. On the other hand, extreme successes are possible:


Comment on Mitchell Porter's commentary:

Mitchell Porter suggests a systematic approach to scenario building: the identification of a number of assumptions and uncertainties that would generate a scenario matrix.

In some scenario work there is power in concentrating on a small number (3 or 4, with some wildcard scenarios). For something like the Singularity (or non-Singularity, as Mitchell says), the "spreadsheet" approach is attractive. (And for a science-fiction writer, almost every cell would have stories in it :-).
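As a toy illustration of the mechanics of that "spreadsheet" approach (the particular assumption axes below are my own placeholders, not Mitchell's actual list), crossing even a few binary uncertainties already generates a respectable matrix of scenario cells:

    # Toy scenario matrix: cross a few assumption axes and enumerate the cells.
    # The axes here are hypothetical placeholders, not Mitchell Porter's list.
    from itertools import product

    axes = {
        "human-equivalent AI": ("achieved", "never achieved"),
        "takeoff": ("hard", "soft"),
        "superhumans": ("knowable", "unknowable"),
    }

    for cell in product(*axes.values()):
        print(dict(zip(axes, cell)))  # one scenario per cell: 2 x 2 x 2 = 8 in all

Each additional axis multiplies the number of cells, which is part of why scenario work usually prunes back down to the three or four most instructive combinations.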

More errant speculation about superintelligence:


Comment on Anders Sandberg's commentary:

Anders Sandberg makes a good point that the term "singularity" can reasonably be used in a number of different ways related to technical progress. (The first such reference I know of was by von Neumann.) My use of the term is limited to one particular form of progress and the consequences of that progress.

About technological diffusion:
Anders shows a number of different aspects and possibilities of this. My intuition is that geographical third-world considerations are going to become a good deal less important. We might even find that we end up with something like a "third-world" but spread all through the world, perhaps even city block by city block! (Of course, that implies acute social strains.)

In the past, some of the most advanced geographical areas were at a disadvantage because they had an investment in obsolete infrastructure that they were reluctant to trash. In such a situation, those who were behind could adopt the new, best infrastructure without additional pain. I doubt if this aspect of diffusion is going to be a major discriminator in the progress of the next thirty years, partly because I believe we are moving into an era of "instant" or "throw-away" infrastructure. The new technologies will be so superior and so easy to install that adoption will be forced even where there is mega-investment in old ways. (Of course, this may be seen as an enormous disaster by the owners of the old infrastructure.)

There are symptoms for my version of the singularity. I pointed to one above, in connection with software complexity. Another symptom that the Singularity is truly developing would be that the most effective research entities do not suffer crippling shortages of (in Anders' words) "the scientists, engineers, businessmen and other people who will make growth even faster". And if we begin to see true, persistent technological unemployment (not just job slots moving to new sectors), that would be a similar symptom. In such a slide into the Singularity, the familiar complaint about "technology for the sake of technology" takes on new meaning; limitation arguments based on economic return would have to be made with an eye to just who (or what) is getting the economic return.

Anders talks briefly about the possibility of technological loss and physical disasters. In recent years, many people seem to feel nuclear disaster is unlikely. I fear that the Nineties may be regarded as a short, blissfully ignorant period. The weapons of mass destruction remain with us, but now we may see them used by smaller groups (and superstate confrontations are still very plausible). But I agree with Anders' analysis that such destruction would probably have little effect on the tech progress we are seeing. In fact, one of the few really deadly developments would be a hard take-off during an arms race. (Imagine two nation states with military super-intellect programs, both sides convinced that terminal victory comes from being the first to achieve super-intelligence. There's a grim little story here, each side spying on the other with better and better technology, the arms race telescoping down to final hours where very big chances are taken. That might be a real extinction event.)


Comment on Damien Sullivan's commentary:

I don't have an expectation that superhumans could do things like figure out first causes or compute effectively with infinite numbers. (The notion that they might figure a way of "leaking out" is in the same general realm. To me, such things are in a different category of progress -- though they no longer seem starkly impossible to me. Some of Greg Egan's stories about quantum mechanics have that aspect of being orthogonal to previous notions and measures of progress. I was brought up short a few years ago (partly from reading Egan's stuff) to realize that my form of optimism might be homey and conservative.)


Comment on Eliezer S. Yudkowsky's commentary:

I felt great emotional resonance with Eliezer Yudkowsky's comments. The knowledge of my bias should make me very careful in evaluating the conclusions. For instance, Eliezer makes an eloquent case that not only is the Singularity a Good Thing, but it is also darn near inevitable. Yes, yes! I want to say. But I also notice the power of pairing the "Good Thing" analysis with the "Inevitable Tide of Time" conclusion. This classic combo has been the meme-template driving many of the most successful persuasive ideologies -- independent of the truth of the underlying claims.

But ... sometimes the combo assertion is true! We must watch events as they actually unfold, comparing them with the scenarios we have imagined, and participate as constructively as we can in further developments.


I've attached a post-Singularity scenario I did for the 1996 British national science fiction convention (Evolution). This gives one "possible flavor of unknowability" [sic :-] and also has some of the optimism of Eliezer's comments.


Nature, Bloody in Tooth and Claw?

(c) 1996 by Vernor Vinge

The notion of evolution has frightening undertones. The benevolent view of Mother Nature in many children's nature films often seems a thin facade over an unending story of pain and death and betrayal. For many, the basic idea behind evolution is that one creature succeeds at the expense of another, and that death without offspring is the price of failure. In the human realm, this is often the explanation for the most egregious personal and national behavior. This view percolates even into our humor. When someone commits an extreme folly and is fatally thumped for it, we sometimes say, "Hey, just think of it as evolution in action."

In fact, these views of evolution are very limited ones. At best they capture one small aspect of the enormous field of emergent phenomena. They miss a paradigm for evolution that predates Lord Tennyson's "red in tooth and claw" by thousands of million years. And they miss a paradigm that has appeared in just the last three centuries, one that may become spectacularly central to our world.

Long before humankind, before the higher animals and even the lower ones, there were humbler creatures ... the bacteria. These are far too small to see, smaller than even the single-celled eukaryotes like amoebas and paramecia. When most people think of bacteria at all, they think of rot and disease. More dispassionately, people think of bacteria as utterly primitive: "they don't have sex", "they don't have external organization", "they don't have cellular nuclei".

Certainly, I am happy to be a human and not a bacterium! And yet, in the bacteria we have a novelty and a power that are awesome. At the same time most folk proclaim the bacteria's primitive nature, they also complain of the bacteria's ability to evolve around our antibiotics. (And alas, this ability is so effective that what was in the 1950s and 1960s a medical inconvenience is becoming an intense struggle to sustain our antibiotic advantage, to avoid what Science magazine has called the "post anti-microbial era".) The bacteria have a different paradigm for evolution than the one we naively see in the murderous behavior of metazoans.

The bacteria do not have sex as we know it, but they do have something much more efficient: the ability to exchange genetic material among themselves -- across an immensely broad range of bacterial types. Bacteria compete and consume one another, but just as often both losers and winners contribute genetic information to later solutions. Though bacteria are correctly called a Kingdom of Life, the boundary between their "species" is nearly invisible. One might better regard their Kingdom as a library, containing some 4000 million years of solutions. Some of the solutions have not been dominant for a very long time. The strictly anaerobic bacteria were driven from the open surface almost 2000 million years ago, when free oxygen poisoned their atmosphere. The thermophilic bacteria survive in near-boiling water. Millions of less successful (or currently unsuccessful) solutions hide in niches around the planet. The Kingdom's Library has some very musty, unlit corners, but the lore is not forgotten: the Kingdom is a vast search and retrieval engine, creating new solutions from the bacteria's ability for direct transfer of genetic information. This is the engine which we with our tiny computers and laboratories are up against when we talk airily of "acquired antibiotic resistance". For the bacteria, evolution is a competition in which little is ever lost, and yet solutions are found. (I recommend the books of Lynn Margulis for a knowledgeable discussion of this point of view. Margulis is a world-class microbiologist whose writing is both clear and eloquent.)

For the most part, we metazoans have a strong sense of self. More, we have a very strong sense of boundary -- where our Self ends and the Otherness begins. It is this sense of self and of boundary that makes the process of evolution so unpleasant to many.

The bacterial Kingdom continues today. It has been stable for a very long time, and will probably be so for a long time to come. It has its limits, ones it seems unlikely ever to transcend. Nevertheless, I find some comfort in it as an alternative to the conflict and pain and death we see in evolution among the metazoans. And many of the bacteria's good features I see reflected in a second paradigm, one that has risen only in the last few centuries: the paradigm of the human business corporation.

Corporations do compete. Some win and some lose (not always for reasons that any sensible person would relate to quality!), and eventually things change, often in a very big way. Unlike bacteria, corporations exist across an immense range of sizes and can be hierarchical. As such, they have a capacity for complexity that does not exist in the bacterial model. And yet, like bacteria, their competition is mainly a matter of knowledge, and knowledge need never be lost. Very few participants actually die in their competition: the knowledge and insight of the losers can often continue. As with the bacterial paradigm, the corporate model maintains only low thresholds between Selves. Very much unlike the bacterial paradigm, the corporate one admits of constant change (up and down) in the size of the Self.

At present, the notion of corporations as living creatures is a whimsy or a legal contrivance (or a grim, Hobbesian excuse for tyranny), but we are entering an era where the model may be one to look at in a very practical sense. Our computers are becoming more and more powerful. I have argued elsewhere that computers will probably attain superhuman power within the next thirty years. At the same time, we are networking computers into a worldwide system. We humans are part of that system, the dominant and most important feature in its success. But what will the world be like when the machines move beyond our grasp and we enter the Post-Human era? In a sense that is beyond human knowing, since the major players will be as gods compared to us. Yet we see hints of what might come by considering our past, and that is why many people are frightened of the Post-Human era: they reason by analogy with our human treatment of the dumb animals -- and from that they have much to fear.

Instead, I think the other paradigms for competition and evolution will be much more appropriate in the Post-Human era. Imagine a worldwide, distributed reasoning system in which there are thousands of millions of nodes, many of superhuman power. Some will have knowable identity -- say the ones that are currently separated by low bandwidth links from the rest -- but these separations are constantly changing, as are the identities themselves. With lower thresholds between Self and Others, the bacterial paradigm returns. Competition is not for life and death, but is more a sharing in which the losers continue to participate. And as with the corporate paradigm, this new situation is one in which very large organisms can come into existence, can work for a time at some extremely complex problem -- and then may find it more efficient to break down into smaller souls (perhaps of merely human size) to work on tasks involving greater mobility or more restricted communication resources. This is a world that is frightening still, since its nature undermines what is for most of us the bedrock of our existence, the notion of persistent self. But it need not be a cruel world, and it need not be one of cold extinction. It may in fact be the transcendent nature dreamed of by many brands of philosopher throughout history.