Chalmers is right: we should expect our civilization to, within centuries, have vastly increased mental capacities, surely in total and probably also for individual creatures and devices. We should also expect to see the conflicts he describes between creatures and devices with more versus less capacity. But Chalmers’ main prediction follows simply by extrapolating historical trends, and the conflicts he identifies are common between differing generations. There is value in highlighting these issues, but once one knows of such simple extrapolations and standard conflicts, it is hard to see much value in Chalmers’ added analysis.

Introduction
David Chalmers says academia neglects the huge potential of an intelligence explosion:
An intelligence explosion has enormous potential benefits: a cure for all known diseases, an end to poverty, extraordinary scientific advances, and much more. It also has enormous potential dangers: an end to the human race, an arms race of warring machines, the power to destroy the planet. So if there is even a small chance that there will be a singularity, we would do well to think about [it]. (p.10)

Apparently trying to avoid describing the scenario that interests him in too much speculative detail, Chalmers goes far in the other direction, offering rather weak descriptions of his key scenario and the issues that concern him about it. Such weak descriptions do help highlight important issues to wider audiences, but they are too weak to offer much added insight to expert understanding.
That is, we should expect that a simple continuation of historical trends will eventually end up satisfying his description of an “intelligence explosion” scenario. So there is little need to consider his more specific arguments for such a scenario. And the inter-generational conflicts that concern Chalmers in this scenario are generic conflicts that arise in a wide range of past, present, and future scenarios. Yes, these are conflicts worth pondering, but Chalmers offers no reasons why they are interestingly different in a “singularity” context.
To make my point clear, I will first review what we should expect as the simple future continuation of prior historical trends, and then review some of the conflicts that commonly arise between generations, which should also arise in this default future scenario. Finally, I will show that both Chalmers’ singularity concept and the conflicts he identifies are already contained within this default scenario.
Our clearest and most dramatic long-term historical trend has been a vast and broad growth in our capacities. Primates evolved into humans who developed farming and then industry. Over this long development, we have accumulated capacity-enhancing innovations first in our genes, and later also in our culture and social and physical environments. For almost any task one can imagine, humanity is now collectively far more able to achieve that task. This vast increase in our capacity has enabled a vast increase in the range of our environments and activities, in our population, and lately in our individual lifespan and consumption.
Many of our increased capacities are mental (or computational). Such capacities help us to choose, calculate, infer, talk, answer, summarize, compose, etc. We generally say that machines, animals, people, teams, organizations, cities, or nations are “intelligent” when they display relatively high capabilities across a wide range of mental tasks. In this sense humanity and the civilization it has spawned have clearly become far more “intelligent” than before.
Even when we devise measures like IQ, intended to describe the shared correlation among the mental capacities of individuals, when outside assistance is excluded, we still find that we are getting smarter. For roughly a century we have seen in rich nations a “Flynn effect” of IQ scores increasing each decade by an average of three points, relative to a one hundred point average. But while individual humans are improving, for the last few centuries the capacities of our machines have been improving at an even faster pace.
Not only have our capacities greatly increased, we have also seen large increases in our rate of capacity growth. For example, the human population grew mainly because it was able to exploit more ecological niches more thoroughly. It took roughly a million years for the human population of foragers to grow from ten thousand to ten million, doubling roughly every hundred thousand years. Then during the farming era, from about five thousand to three hundred years ago, the population of farmers (and the economic capacity to support them) doubled roughly every thousand years. And over the last century of industry the world economy has doubled about every fifteen years.
If a new growth era were to grow as fast compared to industry as industry grew compared to farming, or farming compared to foraging, the new economy might double every few months or less. And since the industry era seems already to have seen more capacity doublings than either the forager or farming eras, then if the number of doublings per era is at all comparable, any new growth era should begin within a century or so.
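This extrapolation can be made concrete with a quick back-of-envelope calculation, using only the rough doubling times quoted above. This is a sketch, not an estimate: the inputs are the text's own round numbers.

```python
# Doubling times quoted in the text, in years per doubling.
# (Round figures from the text, not precise estimates.)
forager_doubling = 100_000   # forager-era population doubling time
farming_doubling = 1_000     # farming-era doubling time
industry_doubling = 15       # industry-era economic doubling time

# Speed-up factor between successive eras.
farming_speedup = forager_doubling / farming_doubling    # 100x faster
industry_speedup = farming_doubling / industry_doubling  # ~67x faster

# If a hypothetical new era sped up by a comparable factor, its
# doubling time in years would fall in roughly this range:
fast = industry_doubling / farming_speedup    # 0.15 years, about 2 months
slow = industry_doubling / industry_speedup   # 0.225 years, under 3 months

print(fast, slow)
```

Either speed-up factor yields a doubling time of a few months or less, which is the figure the paragraph above cites.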
Thus if historical trends continue, we should expect our civilization to continue to gain in capabilities across a wide range of tasks. Our civilization as a whole, and probably also many individual machines or creatures within it, will become much more “intelligent.” We should also expect, if perhaps more weakly, that our machines will improve more rapidly than our biology, and that the rate at which such capabilities improve will also increase. Yes, historical trends need not continue; there may be fundamental limits to some capacities, and the price we pay to increase some capacities may be the reduction of others. But overall we should expect the future to see more intelligence, perhaps especially in machines, and perhaps increasing faster.
Those born in different historical eras have many common causes to unite them, and many ways to assist one another. Even so, there are also some ways in which generations often find themselves in conflict. Generations that overlap in time can conflict over resources, and generations can have preferences about the behavior of distant generations.
The simplest conflict is that generations can exist nearby in time, and so compete for natural and social resources. For example, when creatures begin a life cycle with a very low capacity, as human children do, then new generations are initially at the mercy of older generations. At the other end of a life cycle, generations can also find themselves in weak negotiating positions, both because capacity falls at the end of life and because later generations tend to have greater capacity than earlier ones. This can allow new generations to treat old ones with less deference than the old prefer. In the extreme, new generations might exterminate old ones.
Natural selection endowed humans with sufficient feelings of empathy for vulnerable infant and elderly kin to perpetuate the species, though this does not prevent the killing of infants, the elderly, or non-kin in severe resource shortages. In modern market economies, people can save during their high-capacity midlife in order to gain promised resources during their later low-capacity “retirement.” In rich nations non-kin usually keep such retirement promises, though they are sometimes diluted by crime, fraud, taxation, hyper-inflation, etc.
The second common conflict is when older generations have preferences over the behavior of younger generations. Older generations may want younger ones to remember and honor them, or to remain loyal to their clans, religions, nations, customs, or social/moral norms. However, the genetically and culturally embodied behaviors of younger generations may change via random drift or adaptation to changing circumstances. Cultural changes have been especially rapid recently and most of our ancestors of a few centuries ago would probably dislike many of the ways that we have changed our loyalties since then.
When two generations are alive at the same time, many of their value conflicts can be dealt with in ways similar to resource conflicts. But once an older generation is dead, other approaches may be required. Older generations often try to indoctrinate younger generations into desired loyalties, but this effect can decay quickly. In modern market economies, the terms of bequests could allow older generations to pay younger generations to preserve desired loyalties, and typical interest rates give older generations enormous amounts to pay with. In practice, however, our law enforces few such terms, out of a distaste for “dead hands” controlling future generations.
Chalmers argues that, absent a huge disaster or a concerted effort to prevent change, “within centuries” we will see “superintelligence,” i.e., artificial intelligence “at least as far beyond the most intelligent human as the most intelligent human is beyond a mouse” (pp.11-13). Conscious that “intelligence” can be ambiguous, he clarifies that he means “capacities that far exceed human levels” in both 1) “capacity to create systems with” a “cognitive capacity that we care about” such as “some specific reasoning ability,” and 2) some correlated “self-amplifying cognitive capacity” (p.23).
But what is meant by intelligence “as far beyond the most intelligent human as the most intelligent human is beyond a mouse”? For the vast majority of mental tasks, the mental capacity of our civilization has already increased greatly over human history – for many mental tasks we are already as far beyond our ancestors of a hundred thousand years ago as they were beyond a mouse.
Perhaps Chalmers has in mind an IQ-like concept for individuals, intended to exclude outside assistance. But it is civilization’s total capacity to accomplish tasks, not the capacity of an isolated individual, that most matters for the consequences he highlights, such as “a cure for all known diseases, an end to poverty, [and] extraordinary scientific advances.” And if our growth rates speed up again, centuries of Flynn-effect-like IQ growth could well be sufficient to meet even the to-us-as-we-are-to-mice standard.
Thus we should expect the simple continuation of long-term historical trends to lead, within a few centuries, to a future containing “superintelligence”; we need no further assumptions.
Loosely speaking, things that originate from us, and whose form and details come from and echo us, are our “descendants.” And unless everyone and everything in the future is equally superintelligent, the future should contain both newer “generations” that are more superintelligent and older “generations” that are less so. We don’t know the mixture of biology and machines that future superintelligences will inhabit, but we can be sure that much of their design and content will descend from and echo us, with different of our capacities improved to differing degrees.
Chalmers has two main concerns about future superintelligence, both of which can be seen as standard inter-generational concerns. First, he fears “they” might not treat “us” well:
Care will be needed to avoid an outcome in which we are competing [with them] over objects of value. … Systems that have evolved by maximizing the chances of their own reproduction are not especially likely to defer to [us] (p.34).

These concerns seem to echo standard concerns of older generations that younger generations treat them well during the times when generations overlap. Chalmers says his older generation doesn’t want to go extinct or even compete for resources. Rather than suffer the indignity of being segregated into a retirement community, or of living closely among a higher capacity younger generation, Chalmers prefers his older generation to always have the highest capacity of any generation present.
What is our place within that world? There seem to be four options: extinction, isolation, inferiority, or integration. … The second option … would be akin to a kind of cultural and technological isolationism that blinds itself to progress. ... The third option … threatens to greatly diminish the significance of our lives. … This leaves the fourth option: … we become superintelligent systems ourselves… If we are to match the speed and capacity of nonbiological systems, we will probably have to dispense with our biological core entirely (p.41).
Chalmers’ other concern is that these younger generations might act on values that differ from those of his generation:
It makes sense to ensure that an AI values human survival and well-being and that it values obeying human commands. … [and] that AIs value much of what we value (scientific progress, peace, justice, and many more specific values). … If at any point there is a powerful AI+ or AI++ with the wrong value system, we can expect disaster (relative to our values) to ensue. … If the AI+ value system is merely neutral with respect to some of our values, then in the long run we cannot expect the world to conform to those values. (pp.34-35)

This also seems to echo standard desires of older generations for younger generations to continue to preserve old generation loyalties to clan, religion, customs, social/moral norms, etc.
Chalmers is concerned about future inter-generational conflicts, and he raises such conflict issues in the context of possible future “superintelligences.” I have argued that we should expect a simple continuation of historical trends to lead to a future civilization with greatly increased capacity across a wide range of mental tasks, and that the inter-generational conflicts that Chalmers highlights are generic, the sort of conflicts that arise in a wide range of future scenarios.
I don’t mean to imply that there is nothing new or interesting one might say about future inter-generational conflicts. But interesting contributions should focus on ways in which inter-generational conflicts are interestingly different in some imagined future context.
There do seem to be several promising candidates for plausible differences that might interestingly change the nature of intergenerational conflicts. For example, longer lives or faster growth rates may increase the number of different generations alive at any one time, and the degree to which such generations differ in peak capacity. Direct copying of minds, instead of growing new minds up from baby minds, might ensure that new generation values are more similar to old generation values. Explicit and transparent encoding of values might make indoctrination easier and more reliable.
It seems to me that the most robust and promising route to low-cost and mutually beneficial mitigation of these conflicts is strong legal enforcement of retirement and bequest contracts. Such contracts could let older generations directly save for their later years, and cheaply pay younger generations to preserve old loyalties. Simple, consistent, and broad-based enforcement of these and related contracts seems our best chance to entrench such enforcement deep in legal practice. Our descendants should then be reluctant to violate deeply entrenched practices of contract law, for fear that violations would further unravel contract practice, threatening the larger social orders built on contract enforcement.
As Chalmers notes in footnote 19, this approach is not guaranteed to work in all possible scenarios. Nevertheless, compare it to the ideal Chalmers favors:
AI systems such that we can prove they will always have certain benign values, and such that we can prove that any systems they will create will also have those values, and so on … represents a sort of ideal that we might aim for (p.35).

Compared to the strong and strict controls and regimentation required to even attempt to prove that values disliked by older generations could never arise in any future descendant, enforcing contracts in which older generations pay younger generations to preserve specific loyalties seems to me a far easier, safer, and more workable approach, with many successful historical analogies on which to build.