Genie nanotech, space colonies, Turing-test A.I., a local singularity, crypto credentials, and private law are all dreams of a future where some parts of the world economy and society have an unusually low level of dependence on the rest of the world. But it is the worldwide division of labor that has made us humans rich, and I suspect we won't let it go for a long time to come.
How can we come to terms with this disagreement? We can of course simply repeat the arguments that persuade us, and point to the flaws we see in counter-arguments. But how sure can we be that we have understood those counter-arguments? How sure can we be that others have not seen flaws in our arguments that we do not see, flaws they have trouble (or do not bother) articulating to us? After all, we certainly have trouble articulating our arguments to them.
One favorite response to this predicament is to invoke common biases in human reasoning. One might say that wishful thinking convinces people that future change will be modest, to justify their habit of not thinking about such change. Or one might say that people have trouble really believing in the fundamentally mechanical nature of the universe, including their minds, and so neglect the implications of such facts. Critics, however, can in turn explain many of our beliefs as due to common human biases. Belief in cryonics, for example, may be attributed to wishful thinking.
Since we are human, we must acknowledge that we too suffer from any common human biases. Of course we might happen to be better than our critics at overcoming the most important biases on this range of topics, but we should not jump to this conclusion too easily. After all, our critics probably tell themselves something similar. So before we can justifiably conclude that we are less biased, we should try very hard to look for biases in our own reasoning.
Which brings me to the topic of this paper. I believe I have identified an important common bias on "our" side, i.e., among those who expect specific very large changes. Once we have corrected for this bias, we may still expect many forms of dramatic change; it is just that in some ways these changes are not quite as dramatic as we might otherwise have expected.
Specifically, my claim is that futurists tend to expect an unrealistic degree of autarky, or independence, within future technological and social systems.
The cells in our bodies are largely-autonomous devices and manufacturing plants, producing most of what they need internally. Our biological bodies are as wholes even more autonomous, requiring only water, air, food, and minimal heat to maintain and reproduce themselves under a wide variety of circumstances. Furthermore, our distant human ancestors acquired tools that made them even more general, i.e., able to survive and thrive in an unusually diverse range of environments. And the minds our ancestors acquired were built to function largely autonomously, with only minor inputs from other minds.
The generality and autonomy of human tribes did come in part at the expense of the autonomy of individual humans within a tribe. Within a group of a few tens of people, there arose a division of labor that made the mind and body of each person more dependent on other group members. But for most humans until a few centuries ago, small tribes themselves were quite autonomous.
Within the last few millennia, and especially in the last few centuries, however, people have become vastly more dependent on one another. In larger towns and cities, people filled very specialized production roles, and exchanged their specialized goods with many others across town. More recently, towns and cities have themselves specialized, exchanging goods first across regions, and now across the world. Goods sold in one place are typically made far away, from parts made in other far away places, from materials found in yet other far away places. The cost of this transportation seems extravagant, but that cost is vastly outweighed by the benefits of having localities specialize in particular items. Specialists who are grouped together are far better at improving their specialty. In fact, these benefits of specialization are so enormous that they have made the modern world rich beyond our ancestors' dreams.
This new interdependence is quite alien to our minds, however, which evolved under very different circumstances. Most people are not very aware of, and so have not fully come to terms with, their new inter-dependence. For example, people are surprisingly willing to restrict trade between nations, not realizing how much their wealth depends on such trade. I thus propose that futurists commonly neglect this interdependence when considering the consequences of future technologies. Perhaps unconsciously, they picture their future political and economic unit to be the largely self-sufficient small tribe of our evolutionary heritage.
Let me illustrate with some examples.
Dreams of space colonization often draw on the analogy of Europe colonizing the Americas. For the foreseeable future, however, the relation between space and Earth will be nothing like the relation between Europe and the Americas a few centuries ago. Farming communities back then were largely self-sufficient, and the American environment was similar enough to the European environment to allow farming technologies to be easily transferred. And substantial colonization didn't occur until the cost of moving from Europe to the new world was comparable to the cost of moving within Europe.
Today the costs of transporting material to and from orbit are much larger than the costs of moving material around on Earth, and few Earth technologies transfer easily to space. Those who imagine space colonization anytime soon have thus had to imagine space economies almost entirely self-sufficient in mass and energy. While bits could be exchanged in great numbers with Earth, a space colony could only import a tiny fraction of its physical inputs from Earth. This stands in sharp contrast to even the most isolated existing Earth economies, which share an atmosphere and biosphere with the rest of us, and import and export much more mass. It would be easier to create self-sufficient colonies under the sea, or in Antarctica, yet there seems to be little prospect of or interest in doing so anytime soon. Spacecraft meant to create colonies around other stars would require far more self-sufficiency, packaged into a much smaller mass.
In Engines of Creation, Eric Drexler's initial descriptions of nanotechnology described tiny self-reproducing manufacturing plants, much like bacteria, able to wander the landscape and convert a wide variety of materials into food. Drexler later backed off from this scenario, instead favoring refrigerator-sized general-purpose factories, such as those described in the science fiction novel The Diamond Age. Analogous to the way a general-purpose computer can run any software, general-purpose nanofactories would produce all standard consumer items from software designs and a few standard chemical feedstocks.
Even this scenario, however, imagines manufacturing plants that are far more independent than in our familiar economy. Instead of our current worldwide economic and transportation web connecting plants, mines, dumps, and consumers, there would instead be only a movement of feedstocks, bits, and consumers themselves between small "genie" factories. These factories would even be capable of reproducing themselves.
At present we spend about 15% of GDP on manufacturing overall, and about 5% of GDP on physical capital for manufacturing. Nanotechnology only directly promises to make this physical capital for manufacturing very cheap. But genie nanofactories promise much more, namely to make cheap all of manufacturing besides raw materials and basic product design. To achieve this we need not just nanotechnology, i.e., control of matter at the atomic level, but also the complete automation of the manufacturing process, all embodied in a single device. The generic genie factory would be able to take any design specifying which atoms went where, and from that construct a manufacturing plant capable of producing such items, complete with quality control, waste management, and error recovery. This requires "artificial intelligence" far more advanced than we presently possess. Atom-level control of matter is far from sufficient to produce genie nanotech.
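The cost shares above can be made concrete with some toy arithmetic. The 15% and 5% figures come from the text; the assumption that raw materials and design would retain about a third of manufacturing costs under a genie scenario is purely hypothetical, chosen only to illustrate the gap between what nanotech alone promises and what genie factories would promise.

```python
# Toy arithmetic contrasting two promises (illustrative, not a forecast):
# making manufacturing *capital* cheap vs. making nearly *all* of
# manufacturing cheap.
manufacturing_share = 0.15   # ~15% of GDP spent on manufacturing overall
capital_share = 0.05         # ~5% of GDP on physical capital for manufacturing

# Nanotech alone directly promises only to make the physical capital cheap.
savings_nanotech_only = capital_share

# A genie factory would make cheap all manufacturing beyond raw materials
# and basic design; assume (hypothetically) those retained costs are a
# third of the manufacturing share.
retained_share = manufacturing_share / 3
savings_genie = manufacturing_share - retained_share

print(f"nanotech alone frees ~{savings_nanotech_only:.0%} of GDP")
print(f"genie factories free ~{savings_genie:.0%} of GDP")
```

Even under these generous toy numbers, the genie scenario promises roughly twice the savings of atom-level control alone, which is why it demands so much more than nanotechnology proper.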
People have for centuries imagined creating mechanical devices able to reproduce the important behaviors of the human mind. Besides space colonization, this is probably the most common idea explored in science fiction. Soon after the development of general-purpose computers a half-century ago, the academic field of "artificial intelligence" was created to pursue the goal of creating a computer able to pass the "Turing test," i.e., able to fool someone talking to it from a distance into thinking it was a human. The founders of this field predicted success by now, but we are still a long way away. While this academic field deserves credit for many advances in computer automation, it has also long given up on attempting to construct a general intelligence anytime soon.
In today's economy, knowledge is embodied in human-created software and hardware, and in human workers trained for specific tasks. Knowledge embodied in small software and hardware modules tends to be cheap to use and copy, but is expensive to adapt to changes, since it depends greatly on its context. Human-embodied knowledge is, in contrast, expensive to use but greatly valued for its adaptability, which comes from the broad "common sense" knowledge we each contain. It has so far been difficult to usefully embody human style breadth of knowledge in software, although attempts have been made, such as the CYC project. In practice it has usually been cheaper to leave the CPU and communication intensive tasks to machines, and leave the tasks requiring general knowledge to people.
Turing-test artificial intelligence instead imagines a future with many large human-created software modules, each with broad common sense knowledge and mental abilities, combining the adaptability of human minds with the low cost of machines. It thus imagines software that is far more independent, i.e., less dependent on context, than existing human-created software. We may achieve this goal by directly creating machine copies of human minds, i.e., by creating "uploads." The prospects for success by other approaches anytime soon, however, are not encouraging.
Our familiar world economy grows together, with innovations and advances in each part of the world depending on advances made in all other parts of the world. The problem of designing smaller chips, for example, keeps getting harder, but a richer world can afford to spend more and more solving this problem. In times past, when different regions were isolated from one another, smaller regions grew more slowly; there are clearly huge advantages to being integrated with other large economies.
Visions of a local singularity, in contrast, imagine that sudden technological advances in one small group essentially allow that group to suddenly grow big enough to take over everything. A small country or large corporation would suddenly and unexpectedly make dramatic advances in some area, advances which would substantially improve their ability to make more advances in this area. If this process continued many times, and if it happened quickly and at-first-quietly enough, such a group might grow strong enough to essentially take over everything before anyone else could stop them.
Local singularity scenarios vary, depending on whether the sudden advances are imagined to be in genie-style nanotech, artificial intelligence, space colonization, or something else. For example, some imagine that a computer program will learn how to improve itself, and do so at an ever increasing rate until it soon can think circles around any human. The key common assumption is that of a very powerful but autonomous area of technology. Overall progress in that area must depend only on advances in this area, advances that a small group of researchers can continue to produce at will. And great progress in this area alone must be sufficient to let a small group essentially take over the world. The world has not yet seen a technology fitting this description.
At present, your relationships with different people are linked to one another through your common identity. That is, when deciding how and whether to interact with you, others can talk to each other about you by referring to your "true name." In this way, the reputations we build in one area can benefit or hurt us in other areas. And to the extent we want to keep some relationships private and hence independent of other relationships, we must typically forgo the benefits of being able to refer to those other reputations.
Advances in surveillance technologies may make it much harder to keep our relationships independent. Ubiquitous video cameras and face recognition software, for example, may lead to the creation of detailed databases tracking your activities and dealings. Thus potential employers may soon be able to hear what your third grade teacher thought of you, and what street you walked on last Saturday night. Your next date may be able to look up a review and summary of your last date.
While privacy is becoming harder in the physical domain, there have been great advances in the technologies of digital privacy. Dreams of crypto credentials hope to harness these technologies to improve our privacy. These dreams imagine that many of our relationships will be exclusively digital, and that we can keep these relationships independent by separating our identity into relationship-specific identities. All your bookseller might know about you is that you are a good credit risk. And if you only show a potential employer a credential that says you "went to a good school," maybe they can't find out what your third grade teacher thought of you.
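One simple ingredient of this idea can be sketched in code. This is a minimal illustration, not a real credential system (real ones rely on much richer machinery, such as blind signatures and zero-knowledge proofs): a single master secret deterministically yields a distinct pseudonym per relationship, so the bookseller and the employer cannot link their records of you by comparing names.

```python
# A minimal sketch of relationship-specific identities: derive a stable,
# unlinkable pseudonym for each counterparty from one master secret.
import hmac
import hashlib

def relationship_id(master_secret: bytes, counterparty: str) -> str:
    """Derive a stable per-relationship pseudonym via HMAC-SHA256."""
    digest = hmac.new(master_secret, counterparty.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# Hypothetical secret; a real system would use a strong random key.
secret = b"my-master-secret"
bookseller_id = relationship_id(secret, "bookseller")
employer_id = relationship_id(secret, "employer")

print(bookseller_id != employer_id)   # identities differ per relationship
print(bookseller_id == relationship_id(secret, "bookseller"))  # yet stay stable
```

Without the master secret, no one can compute that the two pseudonyms belong to the same person; that unlinkability is the whole point of splitting identity per relationship.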
It is hard to imagine potential employers not asking to know more about you, however. People like to interact physically, and like to tell their friends and relations about each other. And any small information leak can be enough to allow others to connect your different identities. Thus while crypto credentials may help us preserve privacy in a few areas, they seem unlikely to do more than that.
Visions of private law imagine granting pairs of people far more freedom to choose the laws that govern their interactions. Individuals would choose among competing private legal services, and those services would then negotiate "treaties" between themselves. These treaties would determine all aspects of law and law enforcement. Pairs of legal services without a treaty would be at "war" with one another, and only the law of the jungle would govern member interactions. The fear of this prospect is thought to encourage people to choose legal services with treaties, and thus encourage those services to agree to treaties.
There would be little point to private choice of tort or criminal law, which influence the care with which we treat strangers, if we did not know which law covered the strangers we come into contact with. Thus visions of private law often imagine people wearing pins or something similar to show their legal association. It is not clear how far this sort of thing can go, however, to make the interaction between two people depend only on the treaty between their legal services. To the extent that this interaction also depends on other treaties, this pair of people may want to have a say in those other treaties. But if everyone has a say in everyone's law, private law is no more.
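The structure described above can be sketched as a toy data model. Everything here is hypothetical (the service names, the treaty contents): each person subscribes to one legal service, pairs of services may sign a treaty, and an interaction between two people is governed by whatever their services have agreed to, if anything.

```python
# A toy model of private law: subscriptions map people to legal services,
# and treaties map (unordered) pairs of services to agreed legal terms.
subscriptions = {"alice": "AcmeLaw", "bob": "AcmeLaw",
                 "carol": "ZenLaw", "dave": "FreeLaw"}
treaties = {frozenset({"AcmeLaw", "ZenLaw"}): "arbitration, restitution-based torts"}

def governing_law(a: str, b: str) -> str:
    """Return the law governing an interaction between persons a and b."""
    sa, sb = subscriptions[a], subscriptions[b]
    if sa == sb:
        return f"{sa} internal rules"          # same service: its own law applies
    return treaties.get(frozenset({sa, sb}),   # different services: their treaty
                        "no treaty: law of the jungle")

print(governing_law("alice", "bob"))    # AcmeLaw internal rules
print(governing_law("alice", "carol"))  # arbitration, restitution-based torts
print(governing_law("bob", "dave"))     # no treaty: law of the jungle
```

Note what the lookup requires: each party must be able to identify the other's service before acting, which is exactly why these visions need pins or similar public markers of legal association.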
Today, people seem very reluctant to allow individual pairs of people to negotiate around standard tort and criminal law, and even to violate common features of contract law. This may be due to simple prejudice and misunderstanding, but it may also be due to a recognition of our substantial legal interdependence. It may also be due to a reasonable fear that legal services would collude instead of competing, breaking up territory into exclusive areas to exploit, as has often happened with private "protection agencies."
In this sense of positing an extreme independence (autarky, autonomy, etc.), these dreams bear a striking resemblance to many other dreams of the future, such as those of anarchists and greens seeking small communities which are biologically, politically, economically, or culturally self-sufficient. And come to think of it, most utopias have been described as isolated islands or valleys where folks can do things right.
These dreams of autarky also seem related to complaints about the great specialization in modern academic and intellectual life. People complain that ordinary folks should know more science, so they can judge simple science arguments for themselves, rather than relying on expert opinion. Similarly, many want policy debates to focus on intrinsic merits, rather than on appeals to authority. Many people wish students would study a wider range of subjects, and so be better able to see the big picture. And they wish researchers weren't so penalized for working between disciplines, or for failing to cite every last paper someone might think is related somehow.
It seems to me plausible to attribute all of these dreams of autarky to people not yet coming fully to terms with our newly heightened interdependence. Biology created relatively-autonomous creatures, and only recently has a worldwide division of labor arisen among us. So our cells are largely-autonomous manufacturing plants, our mental software is general and broadly capable, and we picture our ideal political unit and future home to be the largely self-sufficient small tribe of our evolutionary heritage.
High levels of autarky are not physically impossible, and hence all of the dreams of autarky described do seem to be possible at some abstract level. The question, however, is whether people will choose to create them, i.e., whether their benefits will be thought to outweigh their costs.
I suspect that future software, manufacturing plants, and colonies will typically be much more dependent on everyone else than dreams of autonomy imagine. Yes, small isolated entities are getting more capable, but so are small non-isolated entities, and the latter remain far more capable than the former. The riches that come from a worldwide division of labor have rightly seduced us away from many of our dreams of autarky. We may fantasize about dropping out of the rat race and living a life of ease on some tropical island. But very few of us ever do.
So academic specialists may dominate intellectual progress, and world culture may continue to overwhelm local variations. Private law and crypto credentials may remain as marginalized as utopian communities have always been. Manufacturing plants may slowly get more efficient, precise, and automated without a sudden genie nanotech revolution. Nearby space may stay uncolonized until we can cheaply send lots of mass up there, while distant stars may remain uncolonized for a long, long time. And software may slowly get smarter, and be collectively much smarter than people, long before anyone bothers to make a single module that can pass a Turing test.
Interdependence doesn't seem romantic, and so naturally isn't the basis for many science fiction novels. But it may be the future we need to come to terms with.