March 8, 1994
This article originally appeared in Extropy 6:2 (1994).
Reprinted with permission.
The future is hard to predict. We may feel confident that eventually space will be colonized, or that eventually we'll make stuff by putting each atom just where we want it. But so many other changes may happen before and during those changes that it is hard to say with much confidence how space travel or nanotechnology may affect the ordinary person. Our vision seems to fade into a fog of possibilities.
The scenario I am about to describe excites me because it seems an exception to this general rule -- more like a crack of dawn than a fog, like a sharp transition with sharp implications regardless of the night that went before. Or like a sight on the horizon much clearer than the terrain in between. And, as scenarios go, this one seems rather likely. Here it is.
The human brain is one of the most complex systems we know, and so progress in understanding the brain may be slow, relative to other forms of technological and scientific progress. What if artificial intelligence (A.I.), the problem of designing intelligent systems from scratch, turns out to be similarly hard, one of the hardest design tasks we confront? 
If so, it may well be that technological progress and economic growth give us computers with roughly the computational power of the human brain well before we know how to directly program such computers with human-equivalent intelligence. After all, we make progress in software as well as hardware; we could now make much better use of a thirty year old computer than folks could the day it was built, and similar progress should continue after we get human-equivalent hardware. We don't know just how good human brain software is, but it might well be beyond our abilities when we have good enough hardware. 
Not having human-level A.I. would not mean computers and robots couldn't do better than us on many specific tasks, or that computer-aided humans wouldn't be many times more productive than unaided humans. We might even realize extreme "cyborg" visions, with biological brains and bodies wrapped in lots of artificial extras -- imagine heavy use of computer agents, visual pre-processors, local information banks, etc.
But not having human-level A.I. could mean that human intelligence continues to be very productive -- that on average the amount of valued stuff that can be produced decreases by a substantial fraction when the amount of human labor used to produce that stuff decreases by a substantial fraction. Cyborg add-ons, without that brain inside, couldn't do nearly as much.
Thus, as today, and as standard economic models predict, most folks would still spend much, perhaps most, of their time working. And most wealth would remain in the form of people's abilities to work, even if the median worker is incredibly wealthy by today's standards. We are, after all, incredibly wealthy by the standards of the ancients, yet we still work. In contrast, having loyal human-level A.I.s could be more like owning a hundred human slaves, each as skilled as yourself -- in this case there is hardly any point in working, unless for the pleasure of it.
A limited understanding of the brain and biology in general would also suggest that humans would not be highly modified -- that whatever we would have added on the outside, inside we would be basically the same sort of people with the same sort of motivations and cognitive abilities. And we would likely still be mortal as well. After all, even biology has evolved the brain largely by leaving old complex systems alone; new functionality is mainly added by wrapping old systems in new add-on modules.
Imagine that before we figure out how to write human-level software, but after we have human-level hardware, our understanding of the brain progresses to the point where we have a reasonable model of local brain processes. That is, while still ignorant about larger brain organization, we learn to identify small brain units (such as synapses, brain cells, or clusters of cells) with limited interaction modes and internal states, and have a "good enough" model of how the state of each unit changes as a function of its interactions. The finiteness and locality of ordinary physics and biochemistry, and the stability of brain states against small perturbations, should ensure that such a model exists, though it may be hard to find. 
Imagine further that we learn how to take apart a real brain and to build a total model of that brain -- by identifying each unit, its internal state, and the connections between units.  A "good enough" model for each unit should induce in the total brain model the same general high-level external behavior as in the real brain, even if it doesn't reproduce every detail. That is, if we implement this model in some computer, that computer will "act" just like the original brain, responding to given brain inputs with the same sort of outputs.
That model would be what we call an "upload" -- software with human-level intelligence, yet created using little understanding of how the brain works, on anything but the lowest levels of organization. In software terminology, this is like "porting" software to a new language or platform, rather than rewriting a new version from scratch (more the A.I. approach). One can port software without understanding it, if one understands the language it was written in.
Of course some will doubt that such a brain model would "feel" the same on the inside, or even feel anything at all. But it must act just as if it feels, since it must act like the original brain, and so many people will believe that it does so feel.
Now without some sort of connection to the world, such an upload would likely go crazy or attempt suicide, as would most proto-uploads -- not-quite-good-enough brain models that fail on important details like hormonal regulation of emotions. But with even very crude fingers and eyes or ears, uploads might not only find life worth living but become productive workers in trades where crude interaction can be good enough, such as writing novels, doing math, etc. And with more advanced android bodies or virtual reality, uploads might eventually become productive in most trades, and miss their original bodies much less.
Thus some people should be willing to become uploads, even if their old brains were destroyed in the process. And since, without A.I., uploads should be productive workers, there should be big money to be made in funding the creation of such uploads. The day such money starts to flow, uploads should begin to be created in significant quantity. This day would be the "dawn" I referred to above, a sharp transition with clear and dramatic consequences.
The consequences for the uploads themselves are the most immediate. They would live in synthetic bodies and brains, which could vary much more from each other than ordinary bodies and brains. Upload brain models could be run at speeds many times that of ordinary human brains, and speed variations could induce great variations in uploads' subjective ages and experience. And upload bodies could also vary in size, reliability, energy drain, maintenance costs, extra body features, etc. Strong social hierarchies might develop; some might even be "gods" in comparison to others.
To a fast (meaning accelerated) upload, the world would seem more sluggish. Computers would seem slower, and so fast uploads would find less value in them; computers would be used less, though still much used. Communication delays would make the Earth feel bigger, and space colonization would seem a slower and more forbidding prospect (all else equal). Interest rates would seem smaller, making investing in the future less attractive for a given set of values.
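To make the interest-rate point concrete, here is a small illustrative sketch. The 5% annual rate and the speedup factors are my own assumed numbers, not claims from the text: an upload running S times faster lives through one subjective year in 1/S objective years, so the return it sees per subjective year shrinks roughly to r/S.

```python
# Subjective interest rate seen by an upload running S times faster than
# real time.  All numbers here are illustrative assumptions.
def subjective_rate(annual_rate: float, speedup: float) -> float:
    """Return earned per SUBJECTIVE year for an upload sped up by `speedup`.

    One subjective year passes in 1/speedup objective years, so the
    compounded return over that interval is (1 + r)**(1/speedup) - 1,
    which is approximately r/speedup for small rates.
    """
    return (1.0 + annual_rate) ** (1.0 / speedup) - 1.0

r = 0.05  # assumed 5% real annual interest rate
for s in (1, 10, 260):
    print(f"speedup {s:4d}x -> {subjective_rate(r, s):.5%} per subjective year")
```

At a 260x speedup, a 5% objective rate feels like roughly 0.02% per subjective year -- hence the reduced subjective appeal of investing.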
Fast uploads who want physical bodies that can keep up with their faster brains might use proportionally smaller bodies. For example, assume it takes 10^15 instructions per second and 10^15 fast memory bits to run a brain model at familiar speeds, and that upload brains could be built using nanomechanical computers and memory registers, as described in [Drexler]. If so, an approx. 7 mm. tall human-shaped body could have a brain that fits in its brain cavity, keeps up with its approx. 260 times faster body motions, and consumes approx. 16 W of power. Such uploads would glow like Tinkerbell in air, or might live underwater to keep cool. Bigger slower bodies could run much cooler by using reversible computers [Hanson].
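The scaling arithmetic above can be checked with a short back-of-the-envelope script. The 1.8 m reference height is my own assumption, and the 16 W figure is taken from the text's Drexler-based estimate; characteristic motion times scale roughly with body length, so a body smaller by a factor k can move about k times faster.

```python
# Back-of-the-envelope check of the tiny-body arithmetic in the text.
# Inputs are assumptions, not measured values.
human_height_m = 1.8       # assumed ordinary human height
upload_height_m = 0.007    # approx. 7 mm body from the text
power_per_upload_w = 16.0  # assumed nanocomputer brain power draw

# Motion timescales shrink in proportion to body length.
speedup = human_height_m / upload_height_m
print(f"speedup ~ {speedup:.0f}x")  # roughly the approx. 260x in the text

# A billion such uploads sharing one building would draw on the order of:
total_power_gw = 1e9 * power_per_upload_w / 1e9
print(f"power for 1e9 uploads ~ {total_power_gw:.0f} GW")
```

The ~16 GW total also suggests why "if enough power and cooling were available" is the binding caveat for housing billions of uploads in one high-rise.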
Billions of such uploads could live and work in a single high-rise building, with roomy accommodations for all, if enough power and cooling were available. To avoid alienation, many uploads might find comfort by living among tiny familiar-looking trees, houses, etc., and living under an artificial sun that rises and sets approx. 260 times a day. Other uploads may reject the familiar and aggressively explore the new possibilities. For such tiny uploads, gravity would seem much weaker, higher sound pitches would be needed, and visual resolution of ordinary light might decline (in both angular and intensity terms).
Alternatively, uploads seeking familiarity might withdraw more into virtual realities, if such simulations were not overly expensive. For relaxing and having fun, virtual realities could be anything uploads wanted them to be. But for getting real work done, "virtual" realities could not be arbitrary; they would have to reflect the underlying realities of the physical, software, knowledge, or social worlds they represent. Since, compared with software we write, the human brain seems especially good at dealing with the physical world, and since dealing with physical objects and processes should remain a big part of useful work for a long time to come, many uploads should remain familiar with the physical world for a long time to come.
An intermediate approach between tiny bodies and virtual reality would be to separate brains from bodies. Brains might be relatively fixed in location, and use high-bandwidth connections to "tele-operate" remote bodies. Of course such separation would not be economical at distances where communications costs were too high relative to brain hardware costs.
Uploads might need to find better ways to trust each other. While ordinary humans can often find unconscious signs of deception in facial expressions, upload faces may be under more direct conscious control. And upload minds could be tortured without leaving any direct physical evidence of the event.
If, as seems reasonable, upload brains are given extra wiring to allow the current brain state to be cheaply "read out" and "written in", then uploads could change bodies or brains relatively often, and could be transported long distances by ordinary communication lines. "Backups" could be saved, allowing near immortality for those who could afford it; if your current brain and body is unexpectedly destroyed, your latest backup can be installed in a new brain and body.
The most dramatic consequences for both uploads and everyone else come, I think, from the fact that uploads can be copied as well as backed-up. The state of one upload brain might be read out and written into a new upload brain, while that state still remained in the original brain. At the moment of creation, there would be two identical upload minds, minds which would then diverge with their differing experiences.
Uploads who copy themselves at many different times would produce a zoo of entities of varying degrees of similarity to each other. Richer concepts of identity would be needed to deal with this zoo, and social custom and law would face many new questions, ranging from "Which copies do I send Christmas cards to?" to "Which copies should be punished for the crimes of any one of them?". 
New forms of social organization might be useful for families of copies of the same original mind; some families of copies might be very loyal, while others might fight constantly. Teams of people who work well together might even be copied together, creating "team families". Political institutions like "one man, one vote" might require substantial modification, though large copy families could find obvious candidates to represent them in legislatures.
Perhaps the most dramatic consequence of upload copying is the potential for a huge population explosion. If copying is fast, cheap, and painless, and if enough uploads desire to, can afford to, and are allowed to make such copies, the upload population could grow at a rate far exceeding the rate at which their total wealth grows, triggering a rapid reduction in per-capita (meaning per-copy) wealth.
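A toy calculation makes the per-copy wealth point vivid; the growth rates below are invented for illustration only.

```python
# Toy model: if the copy population grows faster than total wealth,
# per-copy wealth falls exponentially.  Growth rates are made-up examples.
import math

pop_growth = 1.00     # assumed: population grows 100% per period
wealth_growth = 0.10  # assumed: total wealth grows 10% per period

def per_capita_wealth(t: float, w0: float = 1.0) -> float:
    """Per-copy wealth at time t, starting from w0 per copy."""
    return w0 * math.exp((wealth_growth - pop_growth) * t)

for t in (0, 1, 5):
    print(f"t={t}: per-copy wealth = {per_capita_wealth(t):.4f}")
```

With these (hypothetical) rates, per-copy wealth falls by roughly a factor of 90 after five periods, even while total wealth rises throughout.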
Would an upload population explode? For a little perspective, let's review ordinary human population growth. In the short term one might take people's values as given. In that case reproduction rates depend on values and per-capita wealth, and per-capita wealth depends on values and reproduction rates.
People choose to have more or fewer babies depending on their values and culture, how much such babies will cost them, the wealth they have to give, how much payback they expect to get from their children later, and on how their children's lifestyle will depend on family size. Technology and wealth also influence contraception and the number of babies who survive to adulthood.
Changes in per capita wealth, on the other hand, depend not only on reproduction rates, but also on how much folks value current consumption over future consumption, and on the rates of growth possible in physical, human, and knowledge capital. And knowledge capital growth rates seem to grow with the size of the human population [Simon].
The net result of all these factors is not clear from theory, but since we have observed rising per-capita wealth for the last few centuries, we might suppose the net tradeoff, given current values, favors rising per-capita wealth.
A few centuries is only about a dozen generations, however. And Darwinian arguments suggest that if values can be inherited, then after enough generations the values in a species should evolve to favor the maximum sustainable population for any given technology, and the maximum sustainable growth rate as technology improves [Hansson & Stuart].
This Darwinian view holds that our familiar human values, for resources, health, comfort, leisure, adventure, friendship, etc., were well suited for promoting maximal population and growth in the sort of environments our ancestors faced long ago. And this view suggests that any current conflict between values and maximal growth, such as that suggested by declining populations in Europe, is a temporary aberration due to "recent" rapid changes in the human environment.
Thus, given enough generations, human values should evolve to promote maximal growth in our new sorts of environments -- one may still worry, for example, that small minorities who value exceptionally large families will eventually come to dominate the population.
Of course a complete story of how human values evolve must include the evolution of idea and value elements as "memes", entities in their own right and not just as properties passed from human parent to child through a combination of genetic and cultural evolution. But if our receptivity to accepting non-parental values can be genetically or culturally modulated, it is hard to see how human values could consistently resist human Darwinian pressures over the long term, even with memetic evolution. Overall, these Darwinian arguments suggesting maximal growth seem roughly right.
Fortunately, however, this Darwinian process seems slow, and if economic growth rates continue their historical acceleration, they should soon exceed the maximum rates at which ordinary humans can have babies. From then on, per-capita wealth would have to increase, at least until artificial wombs were created, or until raw materials or knowledge progress started to "run out", and could no longer expand exponentially with the population as they have so far. For now though, the world seems to be changing too fast for Darwinian evolution to catch up.
How do uploads change all this? An upload considering making a copy is much like a parent considering making a child. An upload would consider the cost to create a copy, the lifestyle that copy could afford, and how much they would value having another entity like themselves. Uploads may value having copies of themselves more or less than ordinary folks now value having children somewhat like them -- this is hard to predict.
But what is clearer is that upload reproduction rates can be very fast -- the upload population could grow as fast as factories could generate new upload brains and bodies, if funds could be found to pay these factories. Upload copies, after all, do not need to be raised to adulthood and then trained in some profession; they are immediately ready to become productive members of society. Thus the main limitations on reproduction, and hence on Darwinian evolution of values, would become economic and political. Who would want to pay how much to make upload copies? And who would try how hard to stop them?
To separate some issues, let us first imagine an upload, a contract lawyer by trade, who is neutral on the subject of whether she would like more entities like herself around, but who is considering an offer from someone else to pay for the creation of a copy. For simplicity, imagine that the original would keep all unique possessions and exclusive associations, such as a painting, spouse, or job, and that the copy will have to start from scratch.
Such an upload might plausibly agree to this copy if she decided such a copy would consider their life "worth living", better to have existed than not. And since this copy could earn wages as a contract lawyer, she might consider life worth living if those wages, plus interest on some initial wealth endowment, were enough to cover some minimum standard of living.
Note, however, that if an upload expects wages to be high enough above their minimum required income, they might agree to a copy even with a negative initial endowment. That is, if a copy were to be loaned enough money to buy their new brain and body, that copy might still find life worth living even under the burden of paying back this loan. 
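The loan logic in the last two paragraphs amounts to a present-value test: a copy with a negative initial endowment is viable if the present value of its wages above subsistence covers the cost of its new brain and body. Here is a minimal illustration; the wage, subsistence level, hardware cost, interest rate, and working horizon are all hypothetical numbers.

```python
# Sketch of the copy-with-a-loan calculation from the text.
# All dollar amounts, rates, and horizons are illustrative assumptions.
def copy_is_viable(wage: float, subsistence: float,
                   hardware_cost: float, rate: float, years: int) -> bool:
    """True if the discounted surplus of wages over subsistence can
    repay a loan covering the cost of a new brain and body."""
    surplus_pv = sum((wage - subsistence) / (1 + rate) ** t
                     for t in range(1, years + 1))
    return surplus_pv >= hardware_cost

# Example: $60k wage, $20k subsistence, $300k hardware, 5% rate, 30 years.
print(copy_is_viable(60_000, 20_000, 300_000, 0.05, 30))
```

Under these assumed numbers the copy easily services its loan; cut the wage close enough to subsistence and the test fails, which is one way to see where the "minimum income threshold" of the next paragraphs comes from.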
If we now add in the original upload's values for having copies around, presumably positive for having more company but negative for the added wage competition, we should find that such an upload has some minimum expected income at which she would be willing to spin off copies. And given that this upload has decided to make a copy, she may or may not prefer to transfer some of the original's wealth to that copy.
Of course some uploads, perhaps even most, might not accept this line of reasoning. But those that do would, if not forcibly prevented, keep making copies until their minimum income threshold is reached. Thus if there are even a few such uploads, wages for contract lawyers should quickly fall to near the lowest wage any one such upload contract lawyer is willing to work for. At this point many previous contract lawyers would find themselves displaced, even though the total number of contract lawyers has risen. And a large fraction of all contract lawyers should be copies of that one upload!
Of course abilities vary, and the lack of an ordinary body could be a disadvantage for early uploads competing with ordinary workers, limiting the number of ordinary workers uploads could initially displace. And reduced diversity of thought among a large family of copies may put them at a disadvantage in trades which place a premium on creativity. But in many trades, like contract law, a large number of standardized workers might have special advantages, especially in reputation-building.
It also takes time for a labor market to absorb new workers; each job is somewhat different, and it takes time for people to learn each new job. Uploads running faster than ordinary humans might quickly master the relevant book-learning, but for most jobs most learning comes from watching and working with co-workers. At first, most co-workers will not be uploads, and most physical processes being managed would be tuned for ordinary human speeds, so being very much faster than usual may not be worth the cost of the faster hardware.
But as uploads became a larger part of the economy, upload communities which standardize on faster speeds would become more economical. If the rate at which faster uploads can grow wealth increases to match their faster speeds, then market interest rates should grow with the speed of such uploads. Slower individuals would then be much more tempted to save instead of consuming their wealth.
Falling wages should mean that, on the margin, labor is substituted for other forms of capital. So lower wage uploads should use fewer computer and other productivity aids, and hence seem less "cyborgish".
What about professions where no upload has prior training? Even if the cost to upload people were very high, or the number of volunteers very low, upload workers should still displace other workers, though at a slower rate. If the wage in some trade were above an upload's minimum, even considering the costs of learning that trade, and if loans could be arranged, copies would be created intending to master that trade.
The economics of training uploads could be much like the current economics of software. For example, labor "products" might be sold at substantially above marginal cost in order to recoup a large initial training cost. To control prices, some families might want to formally centralize their decisions about how many copies they make, so that each copy is no longer free to make more copies. In other families, informal mechanisms might be sufficient.
As with other software, uploads might reach capacity limits; after a few hundred or thousand years of subjective experience, uploads might go crazy in some now unknown way, or simply be less and less able to learn new skills and information. If this happens, then investments in training might be limited to backups made and saved when uploads are below some critical subjective age.
Also as with software now, illicit copying of uploads might be a big problem. An upload who loses even one copy to pirates might end up with millions of illicit copies tortured into working as slaves in various hidden corners. To prevent such a fate, uploads may be somewhat paranoid about security. They may prefer the added security of physical bodies, with "skulls" rigged to self-destruct on penetration or command. And without strong cryptography, they may be wary of traveling by just sending bits.
The analysis above suggests that, at least at first, the upload population should expand as fast as people can arrange loans, build brains and bodies, learn new jobs and professions, and as fast as the economy can absorb these new workers. Per-capita wages seem likely to fall in this period, for ordinary humans as well as uploads, though total wealth should rise.
This population explosion should continue until it reaches limits, such as those of values or of subsistence. Values limits would be reached if almost no capable, versatile upload found copies worth making at the prevailing low wages. Subsistence limits would be reached if uploads simply couldn't make ends meet on a lower income; lowering their standard of living any more would lower their productivity, and hence wages, by so much that they could not afford even that lower standard.
Would values limit this explosion? Yes, of course, if typical values were held constant; few people now who would make productive uploads would be willing to work at subsistence levels. It seems, however, that values will not be held constant. With upload copying, the potential rate and selectivity of reproduction could once again be comparable to the rate at which the world changes; Darwinian evolution (this time asexual) would have caught up with a changing world, and be once again a powerful force in human history. And since the transmission of values from "parent" to "child" is so much more reliable with upload copying, the direct evolution of "memes" should have even less room to modify our basic Darwinian story.
As wages dropped, upload population growth would be highly selective, selecting capable people willing to work for low wages, who value life even when life is hard. Soon the dominant upload values would be those of the few initial uploads with the most extreme values, willing to work for the lowest wages. From this point on, value evolution would be limited by the rate at which people's values could drift with age, or could adjust to extreme circumstances.
Investors with foresight should be able to make this evolution of upload values even faster than ordinary "blind" biological evolution. Investors seeking upload candidates, or upload copies, to whom to loan money, would likely seek out the few capable people with the most extreme and pliable values. After all, these candidates would, all else equal, have the best chances of repaying their loans.
Values might evolve even faster by combining crude modification techniques, like the equivalent of neuroactive drugs or even torture, with the ability to rerun experiments from identical starting points. Of course I do not advocate such experiments, but if they were effective, someone somewhere would likely use them. Fortunately, I suspect ordinary human values are varied and flexible enough to accommodate demand without resorting to such techniques. For example, identical twins who live together are much more different from each other than those reared apart. Similarly, an upload in a million-copy family should try all the harder to be different somehow, including in their values. Thus, given all these factors, the evolution of upload values might be very fast indeed.
What would values evolve to? Would wages hit subsistence level limits? I expect that over many generations (i.e., times copied) Darwinian selection should favor maximum long-term generation of "wealth" that can be used to buy new copies. That is, since upload reproduction can be so directly bought, we expect evolution to favor uploads whose values induce them to take actions which give their copy lineage the maximum long-term financial return on their investments, including their investments in new copies, new skills, or in "leisure."
Uploads who are overly shy about copying would lose out, holding less of the total wealth (as a group), measured by market value of assets, and constituting less of the population. Similarly, uploads who go wild in copying, just because they like the idea of having lots of copies, would become more numerous in the short term but lose out in the long term, both in total wealth and population. Thus we don't expect uploads to become as poor as possible, though we do expect them to eliminate consumption of "frills" which don't proportionally contribute to maximum long term productivity.
We should also expect an evolution of values regarding death and risk. Imagine situations in which making a copy might pay off big, but most likely the copy would fail, run out of money, and have to be "evicted" from its brain and body. Many people might decline such opportunities, because they so dislike the prospect of such "death". Others might consider this not much bigger a deal than forgetting what happened at a party because they were too drunk; "they" would only lose their experiences since the last copy event. I expect evolution to prefer the latter set of values over the former.
Perhaps the hardest values to change in uploads will be our deeply-ingrained values for having children. Early upload technology would likely not be able to create a baby's brain from scratch, or even to upload a child's brain and then correctly model brain development processes. And even when such technology is available, children would likely be a poor investment, from a long-term growth point of view. New children may offer new perspectives, but with enough adult uploads, these benefits should only rarely exceed their high costs. Adults can offer new perspectives as well, and can do so cheaply.
Eventually, human-level artificial intelligence may be achieved at competitive hardware costs, or we may learn enough about the high-level organization of our brains to modify them substantially, perhaps merging distinct copies or splitting off "partials" of minds. The upload era would have ended, and many of the consequences of uploads described above may no longer apply; it seems particularly hard to project beyond this point.
But before then the upload era may last a long time, at least subjectively to uploads running at the dominant upload speed. If many uploads are fast, history will be told from the fast uploads' point of view; history chronicles wars and revolutions, triumphs and disasters, innovations and discoveries, and cares little about how many times the earth spins.
If voters and politicians lose their composure at the mere prospect of genetic modification of humans, or of wage competition by foreign workers, imagine the potential reaction against strong wage competition by "machine-people" with strange values. Uploading might be forbidden, or upload copying might be highly restricted or forbidden. Of course without world government or strong multilateral agreements, uploads would eventually be developed in some country, and the transition would just have been delayed. And even with world government, covert uploading and copying might happen, perhaps using cryptography to hide.
If level heads can be found, however, they should be told that if uploading and copying are allowed, it is possible to make almost everyone better off. While an upload transition might reduce the market value of ordinary people's human capital, their training and ability to earn wages, it should increase total wealth, the total market value of all capital, including human capital of uploads and others, real estate, company stock, etc. Thus it can potentially make each person better off.
For example, if most non-uploads had about the same fraction of their wealth in each form of capital, including owning shares in firms that make loans to uploads, and if a large enough fraction of upload wages went to pay off such loans, then most non-uploads would get richer from the transition. Even if you weren't one of the highly-copied uploads, your reduced wage-earning ability would be more than compensated for by your increased income from other sources. You could stop working, yet get richer and richer. By uploading and resisting copying, you could become effectively immortal.
The per-capita wealth of highly-copied uploads might decline, but that would not be a bad thing from their point of view. Their choice would indicate that they prefer many poorer copies to a single richer copy, just as parents today prefer the expense of children to the rich life of leisure possible without them.
Could a big fraction of upload wages go to paying loans? Yes, if there is enough competition between uploads, and if investors are not overly restricted by law. For example, refusing to loan to an upload if any other copy in their family has purposely defaulted on a loan might discourage such behavior. Alternatively, loans might be made to a copy family as a whole. But these options would have to be allowed by law.
Could most non-uploads sufficiently diversify their assets? Yes, if we develop financial institutions which allow this, such as allowing people to trade fractions of their future wages for shares in mutual funds. But tax laws like those that now encourage highly undiversified real estate holdings could cause problems. And even if people are able to so diversify their assets, they may not choose to do so, yet later demand that politicians fix their mistake.
If forced to act by their constituents, politicians would do better to tax uploads and copies, rather than forbidding them, and give the proceeds to those who would otherwise lose out. Total wealth would grow more slowly than it otherwise would, but faster than without uploads. Of course there remains the problem of identifying the losers; political systems have often failed to find such win-win deals in the past, and could well fail again.
What about those who have values and abilities compatible with becoming part of the few highly-copied uploads? Would there be great inequality here, with some lucky few beating out the just-as-qualified rest?
If the cost to create an upload brain model from an ordinary brain were very high relative to the cost of creating a copy of an upload, or if computer hardware were so cheap that even the earliest uploads were run very fast, the first few uploads might have a strong advantage over late-comers; early uploads may have lots more experience, lower costs, and may be a proven commodity relative to new uploads.  Billions of copies of the first few dozen uploads might then fill almost all the labor niches.
Computer technology should keep improving even if work on uploading is delayed by politics, lowering the cost of copying and the cost to run fast. Thus the early-adopter advantage would increase the longer uploading is delayed; delaying uploading should induce more, not less, inequality. So, if anything, one might prefer to speed up progress on uploading technology, to help make an uploading transition more equitable.
Similar arguments suggest that a delayed transition might be more sudden, since supporting technologies should be more mature. Sudden transitions should risk inducing more military and other social instabilities. All of these points argue against trying to delay an upload transition. 
Contrary to some fears, however, there seem to be no clear military implications from an upload transition, beyond the issue of transition speed and general risks from change. Yes, recently backed-up upload soldiers needn't fear death, and their commanders need only fear the loss of their bodies and brains, not of their experience and skills. But this is really just the standard upload trend toward cheaper labor translated into the military domain. It says little about fundamental military issues such as the relative expense of offense vs. defense, or feasible military buildup speeds vs. economic growth rates.
What if uploads decide to take over by force, refusing to pay back their loans and grabbing other forms of capital? Well for comparison, consider the question: What if our children take over, refusing to pay back their student loans or to pay for Social Security? Or consider: What if short people revolt tonight, and kill all the tall people?
In general, most societies have many potential subgroups who could plausibly take over by force, if they could coordinate among themselves. But such revolt is rare in practice; short people know that if they kill all the tall folks tonight, all the blond people might go next week, and who knows where it would all end? And short people are highly integrated into society; some of their best friends are tall people.
In contrast, violence is more common between geographic and culturally separated subgroups. Neighboring nations have gone to war, ethnic minorities have revolted against governments run by other ethnicities, and slaves and other sharply segregated economic classes have rebelled.
Thus the best way to keep the peace with uploads would be to let them integrate as fully as possible with the rest of society. Let them live and work with ordinary people, and let them loan and sell to each other through the same institutions they use to deal with ordinary humans. Banishing uploads to space, the seas, or the attic so as not to shock other folks might be ill-advised. Imposing especially heavy upload taxes, or treating uploads as property, as just software someone owns or as non-human slaves like dogs, might be especially unwise.
Because understanding and designing intelligence is so hard, we may learn how to model small brain units before we learn how to make human-level A.I. Much will have changed by that time, but an upload transition would be so fundamental that we can still foresee some clear consequences. Subjective lifespans could be longer, minds could run faster, and reproduction could be cheaper, faster, and more precise. With human labor still in demand, an upload population should explode, and Darwinian evolution of values should once again become a powerful force in human history. Most uploads should quickly come to value life even when life is hard or short, and wages should fall dramatically.
What does this all mean for you now? If you expect that you or people you care about might live to see an upload transition, you might want to start to teach yourself and your children some new habits. Learn to diversify your assets, so they are less at risk from a large drop in wages; invest in mutual funds, real estate, etc., and consider ways in which you might sell fractions of your future wages for other forms of wealth. If you can't so diversify, consider saving more. 
Those who might want to be one of the few highly-copied uploads should carefully consider whether their values and skills are appropriate. How much do you value life when it is hard and alien? Can you quickly learn many new skills? Can you get along with people like yourself? Such people might also consider how they might become one of the first uploads. Those who don't want to be highly-copied uploads should get used to the idea of their descendants becoming a declining fraction of total wealth and population, of leaving a rich but marginalized lineage.
If you participate in political or social reform, you might consider sowing seeds of acceptance of an upload transition, and of the benefits of an integrated society, and might consider helping to develop institutions to make it a win-win outcome for everyone. And if you research or develop technology, consider helping to speed the development of upload technology, so that the transition is less sudden when it comes.
2. We might well have good enough hardware now for a slow A.I. that doesn't deal much with the physical world -- say an A.I. contract lawyer.
3. Consider a model where utility is roughly a product of powers of leisure and consumption, and amount produced is roughly a product of powers of labor and other capital. Such a model can explain why leisure time has not changed much as per capita wealth has increased dramatically over the last few centuries, can explain high leisure among slave owners, and explains why leisure is higher in places and times with high income taxes. One can explain seasonal high leisure among foraging tribes as due to seasonal limits on foraging productivity.
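The claims in this footnote can be checked in a minimal version of the model (notation mine, not the author's): Cobb-Douglas utility over leisure and consumption, with a possible unearned (non-wage) income term.

```latex
% Utility over leisure l and consumption c, with wage w, time budget T,
% and unearned income y (capital income, slaves' output, etc.):
\[
  \max_{l}\; U = l^{\alpha}\, c^{\,1-\alpha}
  \quad\text{s.t.}\quad c = w(T - l) + y .
\]
% The first-order condition \alpha/l = (1-\alpha)\,w / (w(T-l)+y) gives
\[
  l^{*} = \alpha\!\left(T + \frac{y}{w}\right).
\]
```

With no unearned income ($y=0$), leisure $l^{*}=\alpha T$ is independent of the wage, matching the observation that leisure changed little while wages grew dramatically. Large unearned income raises $l^{*}$ (high leisure among slave owners), and an income tax that scales the net wage $w$ down raises $y/w$ and hence leisure.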
4. Roger Penrose, in The Emperor's New Mind, suggests that non-local corrections to quantum gravity may play an important role in the brain; I find this extremely unlikely.
5. See [Merkle] for an exploration of the near-term feasibility of this, and [Platt] for a fictional account.
6. A viable, though perhaps not optimal, alternative is to hold all copies responsible for the actions of any one of them. If punishment is by fine when possible, then copy families could use insurance to contract away this interdependence.
7. By "values", I mean all preferences, desires, moral convictions, etc.
8. The Hutterites, a U.S. religious group, have averaged 9 kids per family for a century.
9. Such a loan might come from the original upload or any other source, and might involve more risk-sharing than a simple loan -- more like a joint investment.
10. Meaning enough so that they can't effectively conspire to keep their wages high.
11. Thus janitorial jobs should be safer longer than programmer jobs.
12. These wages are per product produced, not per time spent.
13. It seems that evolution should favor values that are roughly risk-neutral over the long term, with utility linear up to near the point of total world wealth. This seems to imply values roughly logarithmic in returns to short independent periods.
14. Note that such a tax would be a tax on the poor, paid to the relatively rich, if one counted per upload copy.
15. Many initial uploads might well be cryonics patients, if legal permission to dissect and experiment with their brains were easier to obtain.
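Footnote 13's inference has a standard justification (a sketch I am supplying, not from the original): when wealth compounds multiplicatively over many independent periods, the long-run growth rate is governed by the expected log of each period's return, so selection for long-run wealth favors roughly logarithmic utility over short-period returns.

```latex
% Wealth after n independent periods with multiplicative returns R_1,...,R_n:
\[
  W_n = W_0 \prod_{i=1}^{n} R_i,
  \qquad
  \frac{1}{n}\ln\frac{W_n}{W_0}
    = \frac{1}{n}\sum_{i=1}^{n} \ln R_i
    \;\longrightarrow\; \mathbb{E}[\ln R]
    \quad (n \to \infty).
\]
% Maximizing long-run wealth thus means maximizing E[ln R]:
% utility roughly logarithmic in each short period's return.
```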
16. Note that, in contrast, a delayed nanotechnology assembler transition seems likely to be less sudden, since pre-transition manufacturing abilities would not be as far behind the new nanotech abilities. Efforts to "design-ahead" nanotech devices, however, might make for a more sudden transition.
17. A similar argument applies to A.I.s capable of wanting to revolt.
18. This is, by the way, the same strategy that you should use to prepare for the possibility that A.I. is developed before uploads.
19. Cryonics patients might want to grant explicit permission to become uploads.
K. Eric Drexler, Nanosystems, John Wiley & Sons, Inc., New York, 1992.
Robin Hanson, "Reversible Agents: Need Robots Waste Bits to See, Talk, and Achieve?", Proc. 2nd Physics of Computation Workshop, 1992.
Ingemar Hansson, Charles Stuart, "Malthusian Selection of Preferences", American Economic Review, June 1990, V.80 No. 3, pp. 529-544.
Alan R. Rogers, "Evolution of Time Preference by Natural Selection", American Economic Review, June 1994, V.84 No. 3, pp. 460-481.
Ralph Merkle, "Large Scale Analysis of Neural Structures", Tech Report CSL-89-10, Xerox PARC, 3333 Coyote Hill Road, Palo Alto, CA 94304, 1989.
Charles Platt, The Silicon Man, Tafford Publishing, Houston, 1991.
Julian Simon, The Ultimate Resource, Princeton University Press, 1981.
This paper is better for the thoughtful comments on earlier drafts by Stuart Card, Hal Finney, Daniel Green, Josh Storrs Hall, Nancy Lebovitz, Hans Moravec, Max More, Jay Prime Positive, Mike Price, Marc Ringuette, Nick Szabo, and Vernor Vinge, and because of prior discussions of related issues with David Friedman, Keith Henson, Richard Kennaway, David Krieger, Tim May, Ralph Merkle, Perry Metzger, Mark Miller, Ravi Pandya, and Steve Witham. Many of these discussions took place on the Extropians mailing list (email@example.com).
Converted to HTML by Joe Strout on June 21, 1995.