(Version presented at Western Economics Association meeting, 6/93)
by Robin Hanson
When several mechanisms might fund some particular type of basic research, the choice between them can be subtle. For example, prizes can induce wasteful races to be first with results, and patent monopolies can induce underuse of research results. But for many research funding situations, no known mechanism seems particularly attractive. Thus this paper contents itself with proposing a novel mechanism for funding some types of basic research, and with describing qualitatively how this mechanism might plausibly address common agency problems. No claim is made about optimality, and detailed models comparing welfare under this mechanism and alternatives are left to another day.
The mechanism proposed here is an "information prize", and can be considered a variation on ordinary accomplishment-based prizes. Ordinary prizes seem attractive when research patrons can reasonably estimate the value to them of a particular accomplishment, "done" primarily by one identifiable person or group. For example, the U.S. Human Genome Project might have been funded by offering a prize per base-pair, given to the first group to sequence each part of the genome.
However, research patrons can often better estimate the value to them of answering certain questions. For example, a patron might want to know how much sea levels would rise given various rates of greenhouse gas emissions, yet have little idea which accomplishments would contribute how much toward this goal. Research intended to help illuminate public policy, as in this greenhouse example, is often of this answer-oriented type.
"Information prizes" are suggested for situations where research patrons can reasonably estimate the value to them of "answering" some question they expect may eventually be relatively "objectively" answered. For example, a patron might offer an information prize directly on the question of future sea levels given greenhouse emissions. "Answering" just means changing the patron's estimated probabilities regarding various possible answers to such a question.
Like a first-to-file patent, an information prize can be thought of as being awarded to the first person who files a claim saying they know the answer, who pays a substantial filing fee, and whose answer is validated by some later judging process. The differences are that filed claims actually state the probability of some answer, that many filed claims can be rewarded, and that the whole process is much more flexible.
To fund an information prize, a patron simply subsidizes a certain automatic broker which trades in assets contingent on the various possible answers to a question. This creates a market in those assets with a given thickness, and offers an inducement for researchers to become well-informed about those possible answers. Informed traders can acquire risk relative to the question, and eventually expect to profit thereby when the question and assets are "settled", or when others become convinced enough of their research results to hold this risk.
Like an accomplishment-based prize, an information prize does not require that the research patron have special knowledge about who would be good to do some research, what methods would be appropriate, or when the subject is ripe for investigation. Unlike an accomplishment-based prize, an information prize also does not require that the patron identify who contributed how much to the eventual result. And in addition to serving as a patronage institution, information prizes can also serve as a consensus institution, generating temporary consensus on policy relevant claims, a consensus resistant to direct partisan influence.
A major problem with the information prize is that it is in general illegal to trade in contingent assets. General market incompleteness is not, however, a substantial problem here; through their subsidized brokers, patrons create markets where none may have existed. As with ordinary prizes, information prizes can function with only one patron and one person working to win the prize. Various search and other transaction costs, however, would limit how small an information prize could effectively be offered.
This paper will first qualitatively review a wide range of mechanisms for funding basic research, review prizes in more detail, and then focus specifically on the information prize, describing each of its aspects in some detail.
Early in the scientific revolution, there were a few individually wealthy researchers, such as Tycho Brahe (~1575), who combined the roles of patron and researcher. But more often a scientist like Galileo (~1600) would have a personal patron, who held "bragging" rights about that scientist and could dictate topics of research. For example, Galileo's patron had a friend whose scientist had theorized that comets were like planets. So Galileo, though sick in bed, was directed to come up with a competing theory. Galileo then published a book claiming that comets were in the atmosphere.
The Catholic Church also funded Jesuit researchers around this time, researchers a good deal less doctrinaire than the church supposed. Also around this time, a British monarch in need of cash decided to sell monopolies, called "patents", on all different types of production. Their use has slowly narrowed to today's focus on state-granted monopolies on new types of production.
The scientific revolution came under full steam with the introduction of "academies" (~1675), which offered "brand names" for patrons' donations, spent initially on meeting places, instruments, journals, and some administrative salaries. For example, King Charles II of England was the founding patron of the Royal Society of London, and was fond of laying wagers on the outcome of the Society's experiments.
Submissions to academy journals were "peer-reviewed", though this didn't quite mean the same thing as it does today. Experiments, to be accepted, had to be demonstrated at a meeting of the academy, and theoretical analyses tended to be checked in detail by academy members.
Soon after the first academies (~1700), prizes, administered by academies, became a major form of science funding [He]. A donor would agree to sponsor a specific prize, to be judged by a specific academy. The prize would be for a specific accomplishment, for the best essay on a particular subject, or sometimes for any discovery the prize administrators deemed worthy. Many researchers also made money giving public lectures and demonstrations.
Academies and prizes dominated for about a century, but then (~1800) universities began to respond to declining enrollment and prestige by seeking faculty who had made names for themselves in research, and by expecting research from their professors. Thus science came to be funded in part through a tax on students. Until 1830, public lotteries also funded many U.S. universities, such as Columbia, Harvard, and Yale.
Soon after (~1830), there was also a broader amateur science movement in Britain. Many non-wealthy folks did research in their spare time, and paid many professionals to give popular lectures.
Credential inflation didn't produce the doctorate degree until ~1900, and around this time in the U.S. the captains of industry began patronage under the corporate model. Patrons created foundations, which used strong managers to monitor and direct the research of those they funded.
It wasn't until around mid-century that our familiar academic world fell into place. While most universities had long retained most of their faculty until retirement, our modern concept of "tenure" as an absolute right to such faculty positions did not appear in the U.S. until about World War II. Soon after, the U.S. began massive federal funding, using the proposal peer-review method, wherein research proposals are reviewed by panels of other researchers doing similar work.
In recent decades states have offered tax credits for industrial research, and similar tax exemptions (or matching funds) for intellectuals have been offered for centuries.
In recent years many people have expressed dissatisfaction with current funding methods. People complain that peer review is now just a popularity contest ruled by an old boy network, resulting in ruling fashions and the rejection of the truly new. Journals are said to neglect interdisciplinary work, and encourage co-authorship of smallest publishable units. Proposal peer-review is said to depend too much on promises, rather than on track-records. And university based researchers are said to neglect teaching.
Recent suggestions for change range from giving out money randomly to "qualified" applicants [Gi], to just giving $1M a year to each of the top thousand scientists, chosen by a recursive popularity poll [By]. Folks have suggested that universities and private labs be funded in proportion to their publication or citation counts [Ro], that we abolish tenure or government funding, or that we return to previous funding mechanisms like prizes.
The basic problem of science patronage is this. How can a relatively ignorant patron spend money to induce the most progress toward that patron's research goal, and avoid paying for incompetence or laziness? This is a form of the familiar principal/agent problem, similar to the problem of holding lawyers, doctors, politicians, or any other type of representative accountable to their clients.
The ideal form of accountability is when the principal and the agent, in this case the patron and the researcher, are one and the same person, either a dedicated amateur or a wealthy professional. This arrangement should offer the most productivity toward the patron's goals.
Beyond this, the two standard categories of accountability approaches are contracting and monitoring [Mi]. Contracting relationships are also called price or market or arms-length relationships, while monitoring relationships are also called command or management relationships. While most real relationships are mixtures of these two approaches, it can help to think in terms of the limiting cases. A contract tries to hold people accountable by offering clear incentives, paying only for results rather than effort or qualifications, and by open competition for contracts. A monitor, in contrast, tries to keep people accountable by watching what those people do, judging quality and checking for various kinds of "cheating".
Both approaches have their advantages, and how much one should rely on contracting vs. monitoring depends on the situation. The effectiveness of both approaches depends on the quality of the indicators used to estimate research quality and effort, and on the cost of obtaining those indicators.
The patent system is more like formal contracts, though the state does some managing of what is and isn't patentable. Patents induce monopoly losses in the use of ideas, but have the nice property that political patrons who authorize the patent process need not estimate the potential value of each patent. Patents seem hard to apply to the results of basic research, however, as it seems hard to identify or define the use of general insights.
Prizes are also contract-like, especially prizes for well-defined accomplishments which leave little room for discretion by judges, such as the prize for the first human-powered flight or for sequencing the human genome. Contests of skill, such as between robots or statistical techniques, similarly allow patrons to encourage the development of relevant techniques and abilities without substantial judging discretion. Information prizes, proposed in this paper, are also in this category.
Researchers with personal patrons (and there are still a few) usually have a close personal relationship; the patron has a personal interest in the subjects researched, and may monitor research progress in some detail. Academies and foundations can extend this monitoring approach through more levels of indirection. Corporations manage research divisions, though management of research is considered to be particularly difficult.
Direct state management of research labs is common and forms a long chain of indirection in monitoring. Citizens are supposed to monitor their elected representatives, who monitor heads of government agencies, who monitor subordinates on down to the lowly researcher. Each extra level of indirection, however, introduces new risks of accountability and communication failures. (Do you know what your NASA researchers are doing today?)
A third general approach to accountability is to "piggyback" on some other accountability mechanism. Most social institutions rely to some extent on support from their wider social context. For example, a bank may rely in part on a wider legal system to prevent thefts, in addition to more direct monitoring and security precautions. And law may rely on wider moral attitudes or desires for revenge. Thus in practice most institutions rely on some mixture of contract, monitoring, and piggyback accountability.
Regarding research, if the commercial market can keep corporations honest about what research to do, then governments might try to encourage research by offering tax breaks for whatever accountants label as "research". People who prefer to donate to "big name" universities are similarly giving "matching funds", matching money from other funding sources.
Piggyback accountability can fail, however, if relied on too heavily. For example, piggyback research funding should fail if it becomes too large a fraction of total funding. At some point the system should reorient itself to respond directly to the piggyback incentives. For example, given a large enough research tax break, accountants might try to label everything their company does as "research".
Imagine a field of research dominated by dedicated amateurs and wealthy professionals, each with a strong "genuine" love of science, and each evaluating other researchers mainly in terms of how those other researchers contribute to progress in their own fields. A patron who respected and identified with the values of some such researchers might well want to use their evaluations as a basis for some small extra funding. University professors, if paid poorly and held in low regard in the larger society, might also fit this model.
Taken too far, however, peer-review can fail like any other piggyback funding mechanism. In the limit, for example, "insider" reviewers might give high evaluations to other insiders regardless of what they submitted for review. Insider topics of interest might displace patron interests, and insiders might not even be the best people to research these topics. Worse, if it is not obvious who is or is not an insider, substantial resources might be wasted in efforts to signal insidership. Jargon, fast moving research fashions, and excessive use of difficult to master techniques might all result from such signaling pressures. Less extreme failure modes include reviewers who spend too little time considering their reviews, or who rely on high variance indicators of research quality.
The chance of suffering such a popularity-dominated equilibrium, or related undesirable equilibria, should depend on many factors. It should depend on the percentage of funding and status allocated through peer review, and on the percentage of people who would be doing this research even if they were paid little or nothing. It should depend on the ease with which the ultimate patrons can monitor peer reviewers, an ability reduced by long chains of command.
Biased peer review should be easier for patrons to monitor when reviews are strongly constrained, with explicit evaluation criteria leaving little room for judgement calls. For example, though judges of a specific accomplishment prize are in some sense peers reviewing, these judges may find it hard to deny a prize to someone who appears to have met specific official criteria, even if that winner is not a favored insider.
There should also be less judging flexibility and room for bias when more reliable indicators of research quality are available, such as when more time has passed and when evaluations are made further "downstream" from the research process. Thus paying per citation should beat paying per publication, which should beat paying for research proposals, which should beat paying according to a direct popularity poll. There may also be less flexibility in estimating the value of larger contributions to science than of the more frequent small contributions. If so, things like the Nobel prize might be safer funding mechanisms.
So where do we stand now? Federally funded basic research rarely uses contract accountability, and the many levels of indirection discourage accountability by monitoring, leaving a heavy reliance on piggyback accountability through proposal peer-review. Proposal peer-review offers reviewers more flexibility than perhaps any other historical variation on peer-review, with evaluations made far "upstream", at a relatively fine granularity, and using few explicit judging criteria. And relative to the total population, we now have more researchers than ever before, researchers who are paid and regarded relatively well, so the fraction of scientists who would be researching even if they weren't paid to do so is probably at an all-time low. And a very high percentage of rewards to basic researchers now come through peer review.
Thus we seem very far from the ideal peer-review scenario sketched above, and may suffer an all-time high risk of piggyback funding failure. While we've had heavy state funding through proposal peer review for many decades now, the transition to a popularity equilibrium might require an academic generation or two. Perhaps even now, research in many fields has largely lost touch with the interests of the citizens who patronize that research through their taxes.
If we are unwilling to substantially reduce direct federal funding of basic research, perhaps we should consider relying more on contractual approaches to accountability. And even private research patrons may wish to reconsider their reliance on piggyback accountability. Thus we may want to reconsider prizes.
Prizes require researchers to risk their own time and capital in their attempt to win, rewarding only successful researchers after their work is done. While some researchers might now have the capital and risk-tolerance to fund their own attempts, most would probably become employees of private research labs, selling most of their rights to prize winnings in return for a stabler income. (In fact, one of the first joint-stock court cases involved a corporation formed for the purpose of winning a prize.) Universities might also serve this function for their researching professors. Private labs and universities would then evolve and compete to best judge which people and projects held the best potential to win offered prizes.
Relative to other forms of patronage, prizes offer patrons a unique ability to tie the rewards offered to quality judgements made at a rather distant future date. Research labs might even compete to win prizes to be awarded well after the death of their current researchers, especially if they could expect to sell their prize rights to speculators as soon as they have a plausible case for being the winner. Other forms of patronage typically require validation by more contemporary peers.
Through prizes, research patrons can also reduce, but not eliminate, their reliance on accountability by monitoring and piggyback. Patrons would still need to choose, or have agents choose, which prizes to offer, and who to have judge those prizes. While patrons might often choose prizes without consulting potential winners, they might also solicit and consider suggestions by researchers regarding possible prizes. And the more flexibility that prize judges are given, the more they should be monitored for bias and laziness.
Organizations that compete for prizes would also require monitoring internally, presumably often including review by peers within the organization. This fact, however, does not make a prize system equivalent to current peer review any more than the existence of monitoring within corporations makes competing car manufacturers equivalent to direct state production of cars. The open market for cars, or prizes, imposes a non-trivial discipline on internal corporate peer review.
Patrons should be able to reasonably estimate their value for many specific accomplishments, and therefore feel comfortable creating prizes for such accomplishments. Many future observations and measurements can be foreseen, but are as yet too difficult to do, and many current theoretical puzzles are expected to eventually be solved. Note that patrons need not estimate well when or if a prize may be won, since a prize not won costs them only the effort to define and advertise that prize, and some tying up of assets.
Often, however, patrons with some research goal in mind may have difficulty identifying either a specific relevant accomplishment they would feel good about funding, or a contest of skill they think would be relevant enough. Eighteenth century patrons in this situation often offered prizes for the best paper on a certain topic by a certain date. This allowed more room for judging bias, and judges often played favorites, for example by recognizing the handwriting of supposedly anonymous authors. But such paper prizes were still not the same as rewarding per publication, as there were far fewer prizes than papers published. For example, all of France had about ten prizes a year, while the Paris Academy alone published one hundred and fifty papers per year [He].
Prizes for the best of a certain thing typically require choosing a deadline. This can be a specific date, but can also be triggered by any other event the patrons choose to define, such as when a certain measurement precision has become standard, or when some other prize has been won.
As with any other funding method, patrons would have to make choices about funding granularity, here between many small prizes or fewer large ones. For example, one might imagine a minimal variation on existing proposal peer-review, approving a number of prizes similar to the current number of approved proposals, each judged by a similar peer-review effort.
Less minimally, patrons might prefer fewer large prizes, as they did in the eighteenth century. It may be easier for patrons to estimate their value for bigger accomplishments, prize judges can be paid to be more thorough, and judges can be monitored more carefully by patrons and others. Fewer, larger prizes also reduce the overhead costs for patrons to create prizes and for research labs to search for prizes they might want to compete for.
Limiting prizes to only major advances, however, may create commons problems regarding the many small accomplishments required before a big one is possible. Smaller accomplishments for which it is difficult to subcontract may be done in secret redundantly by many researchers, or researchers might rely more on informal quid-pro-quo information exchanges. Of course similar granularity issues arise with all funding mechanisms.
If the expense of judging becomes a limit on prize granularity, then a simple prize variation can effectively make judging much cheaper, at a cost of increased but random risk to research labs. For example, ten thousand dollars might be set aside to fund a prize on some subject, and then after efforts to win have likely been made, that money could be gambled, resulting in a one percent chance of a million dollar prize. If the gamble is lost, no prize judging is needed, as there is no prize, and if the gamble is won, a small fraction of the million dollar prize may pay for adequate judging. For research labs expecting to win hundreds of such prizes, the added risk from this approach may be minimal.
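To make the arithmetic concrete, consider the following sketch (in Python); the pot size, odds, and judging cost here are merely illustrative assumptions, not part of any fixed proposal.

    import random

    def lottery_prize(pot=10_000, win_prob=0.01, judging_cost=50_000):
        """Gamble a small prize pot for a small chance at a large prize.
        Expected prize money is unchanged (pot == win_prob * big_prize),
        but judging costs are incurred only when a prize actually exists."""
        big_prize = pot / win_prob           # here, one million dollars
        if random.random() < win_prob:
            return big_prize - judging_cost  # judging paid from the prize
        return 0.0                           # no prize, so no judging needed

    # A lab expecting to win hundreds of such prizes sees roughly the
    # average payout, so the added risk from gambling the pot is minimal.

The expected judging cost per prize thus falls from the full judging cost to that cost times the win probability, at the price of added but random risk to competitors.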
Prizes funded by nations intending to improve their internal economy, but wary of international free riders, might disqualify candidates from other nations, perhaps canceling a prize if clearly "won" by a foreign candidate. Prizes can induce too many competitors if the prize amount is set too high, but there should exist some prize amount without this problem.
Simple economic models suggest prizes have many unappreciated virtues. Comparing prizes to patents and directly contracting for research, one model [Wr] suggests "prizes are best for research areas with intermediate success probabilities, and for all areas where supply is inelastic, as is likely if there are essential research-specific inputs, unless the overall research venture is almost certain to succeed." This analysis also suggests that no prize should be awarded if more than one group would qualify to win a prize, and if it were possible to determine this fact.
If prizes were to again dominate the funding of basic research, researchers might again become more accountable to their patrons. Patrons using prizes and patrons relying on piggyback peer-review should both benefit.
To allow more direct patronage of such answer-oriented questions, this paper suggests an "information prize". A patron, or their agent, would approve a specific question to be answered, and a set of possible answers to that question. For example, "Is the spatial curvature of the universe on the scale of ten billion light years positive (closed), negative (open), or within error E of zero?" The patron would then define how decisions are to be made regarding who might judge this question and when, and authorize the creation and sale of assets contingent on the different possible answers to this question.
Finally, the patron would allocate a certain pot of money to subsidize a dumb broker to buy and sell these assets within some open marketplace, and advertise this fact. Researchers would then have an incentive to learn more about, for example, the large scale curvature of space, in order to make money by being the first to trade with the dumb broker. Basically, the patron offers researchers the chance to "bet" with an actor ignorant about this important science question. The market "odds" can then provide a consensus which policy makers can defer to. Each of these aspects is explained in more detail below.
The process by which debates become closed and opinions converge is complex and not entirely understood. But research patrons fairly universally accept the process as valid, at least over the longest time scales and assuming enough diverse and relatively autonomous people spent enough time and effort engaging the question. Given these conditions, most patrons accept an answer backed by such a consensus as "right" and "inevitable", in the sense that their efforts could not much influence which answer is chosen, only whether and when such a solid consensus is formed. Thus patrons who fund research into such questions mainly want "progress"; they want to speed the rate of opinion convergence toward the "right" answer. They may also want to encourage the development of skills in those who contribute to such convergence.
On questions where patrons can foresee convergence being likely and valuable, information prizes can allow them to encourage such convergence. For example, most questions about the chemical and physical properties of common materials or fundamental particles are likely "well-posed", and will eventually be answered, as are most questions about the causes and consequences of biological or geophysical processes. Most currently proposed physics theories should turn out to be very wrong, and a few much less wrong than the rest.
Social theories may be harder to distinguish at the theoretical level. But their policy consequences can be observed, and patrons are arguably more interested in these anyway. For example, while it might never be abstractly clear whether capital punishment deters murder, we could still validate specific predictions of how many murders would be committed in some area conditional on whether or not some type of capital punishment was in place there, and using an information prize a patron could fund efforts to make such predictions. Many historical questions, such as whether some politician knew of or participated in a famous scandal or misstep, are also often answered eventually by various revelations.
Research into estimating which technologies will develop how far by when can also be funded using information prizes. In theory, a large prize might create moral hazard, giving someone an incentive to retard technological progress in order to win a prize. But people can now sell short the stock of public companies developing technology, and harm rarely results. Moral hazard is a concern, but need not be a major hindrance.
Using a judging lottery mechanism to be described below, information prizes can also be created for questions which are unlikely to be answered, but could easily be if enough resources were devoted to them. For example, in remote sensing images of land or ocean, each image pixel makes a claim about the average reflectivity, color, etc. of a particular spatial region, and each such claim could be checked by direct ground observations, given sufficient resources. An information prize could be offered on each pixel value. Astronomical imaging has similar properties. Claims about human sexual behavior might even be similarly validated, if it were possible to induce relative honesty in randomly selected people by offering them large enough payments.
Patrons should of course attempt to have the claims they subsidize worded carefully, making assumptions explicit and using precise and neutral language in order to minimize the chance of the question later being considered too vague to judge. Such efforts to word claims need not be overwhelming; most research proposals even now identify specific questions they intend that research to be relevant for, with many proposals referring to similar questions.
The main risk of judging flexibility, and therefore of possible bias, actually comes from questions where it never becomes clear whether the question will ever become clear enough to judge. If it does become clear that a question can't be judged clearly, the answer chosen can just be "this question is too vague to judge" (if the question definition allowed this answer).
In what follows, let "X" denote some base asset, such as a dollar or a unit of some standard fund, and let "X if A" denote an asset which pays one X if answer A is eventually validated, and nothing otherwise. Note that the number of contingent assets in circulation on a question can rise and fall with interest in the question. Note also that the average return on such a set of contingent assets is just the average return on the base asset, which can be large for assets highly correlated with the total market.
Some simple variations may be worth considering. In addition to allowing "this question is too vague to judge" answers, judges might be authorized to assign a percentage of validity to each of the possible answers. Each asset could then be exchanged in the end for its percentage of X.
For a question about the value of some numerical variable V, one might split an X into range assets such as "X if V<=0", "X if 0<V<=1", and so on across the possible values of V.
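The following sketch illustrates these contingent asset conventions; the answer labels and settlement percentages are merely illustrative assumptions.

    def mint_set(answers):
        """Exchange one X for a complete set: one "X if A" for each answer A.
        Exactly one answer will be validated, so a complete set is risk-free."""
        return {a: 1.0 for a in answers}

    def settle(holdings, validity):
        """Exchange each "X if A" for its percentage of X, per the judges'
        verdict; `validity` maps answers to percentages summing to one, and
        an all-or-nothing verdict is the special case {winner: 1.0}."""
        return sum(units * validity.get(a, 0.0) for a, units in holdings.items())

    answers = ["V<=0", "0<V<=1", "V>1"]           # range assets for a numeric V
    assert settle(mint_set(answers), {"0<V<=1": 1.0}) == 1.0   # sets are safe
    print(settle({"V>1": 10.0}, {"V>1": 0.8, "0<V<=1": 0.2}))  # pays 8.0 X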
Trading in contingent assets is in general prohibited by anti-betting laws through most of the world, though the United Kingdom allows it under heavy regulation. In the U.S., Nevada allows betting on sporting events, but not more generally. Information prizes would therefore require legislative changes before they could be implemented, unless state patrons want to try them, as states are usually exempted from anti-gambling laws.
Funding

An open marketplace is simply a place where offers to trade can be credibly made and taken, without parties needing to be overly concerned about folks reneging on their trade agreements. A continuous market in an asset typically has bid and ask prices at each moment in time, the highest outstanding offer to buy and the lowest outstanding offer to sell the asset. The lower the transaction cost of making and taking an offer, the smaller the spread between the bid and ask prices should be, and the smaller the amounts of those offers.
Amounts offered at each price should also depend on how "thick" the market
is. In a thick market the price will not move far in response to a single
trader buying or selling large amounts. In a thin market the price can
move a great deal in response to small trades, giving much less expected
profit to someone with superior information about the future market price
of the asset.
It turns out that a patron can subsidize a market simply by directly thickening it, ensuring that many substantial offers to buy or sell will be available near any given market price. One way to do this is for a patron to create a price ladder, holding an offer to trade at each possible price "rung" on the ladder. The rung at price p would hold either the asset "X if A", which could be exchanged for the fractional asset "p of X", or it would hold "p of X", which could be exchanged for "X if A". In practice, rungs above the market price would hold "X if A" and rungs below the market price would hold "p of X", with p different for each rung.
The maximum cost of funding such a price ladder, and therefore the total available research subsidy, can be computed by realizing that in the end a fully informed market will leave each ladder rung holding the less valuable of the two possible assets: "X if A" if A is not validated, or "p of X" if it is. By subsidizing markets in all the answers to a question, a patron can end up with no net risk regarding the question. A minimally expensive initial assignment to the ladder rungs might be found by a binary search for the market price, filling in one rung at a time.
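The following sketch illustrates one such ladder for a single answer A, together with its maximum cost; the rung prices and initial market price are merely illustrative assumptions, and reassignment of rungs as the market price moves is ignored.

    class PriceLadder:
        """A dumb broker for one answer A: one standing offer per price rung."""

        def __init__(self, prices, market_price):
            # Rungs above the market price hold "X if A" (offers to sell);
            # rungs at or below it hold "p of X" (offers to buy).
            self.holds = {p: "X if A" if p > market_price else "p of X"
                          for p in prices}

        def take(self, p):
            """A trader gives rung p the asset it lacks, and receives
            whatever the rung held, trading at effective price p."""
            held = self.holds[p]
            self.holds[p] = "p of X" if held == "X if A" else "X if A"
            return held

    def max_cost(prices, market_price, a_validated):
        """Worst-case subsidy paid out: a fully informed market leaves each
        rung holding the less valuable asset, so the patron loses (1 - p)
        on each "X if A" rung if A is validated, and p on each "p of X"
        rung if A is not."""
        if a_validated:
            return sum(1.0 - p for p in prices if p > market_price)
        return sum(p for p in prices if p <= market_price)

    rungs = [i / 10 for i in range(1, 10)]       # prices 0.1 ... 0.9
    ladder = PriceLadder(rungs, market_price=0.5)
    print(ladder.take(0.6))                # an informed trader buys "X if A"
    print(max_cost(rungs, 0.5, True))      # 0.4 + 0.3 + 0.2 + 0.1 = 1.0 X
    print(max_cost(rungs, 0.5, False))     # 0.1 + 0.2 + 0.3 + 0.4 + 0.5 = 1.5 X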
Each subsidy sits at a particular asset price, representing some
probability of a claim being validated, so patrons do need to choose which
probability ranges they are most interested in the claim passing through.
Subsidies at prices far from the current price may not be taken for a while,
though a commitment to those subsidies may influence current research
efforts.
The offers in the price ladder cannot stay different from their initial
assignment unless someone else holds risk regarding the question. This
need not be the research labs when a question has long passed research
interest, if other specialist traders can be convinced to hold this risk
until official judging.
Implicitly, any market with non-zero thickness offers information prizes
regarding whatever might be relevant to estimating the future price in that
market. When scientific or technical results might be relevant to ordinary
commodities and financial markets, this can offer incentives for research;
Hirshleifer, for example, suggested that the inventor of the cotton gin
might have done better to speculate in real estate markets than to try to
enforce his patent [Hi].
The current absence of significant markets in assets contingent on basic
research claims does not, however, imply that it is infeasible to complete
markets more in this direction. Betting is now largely illegal and in
social disrepute, and the political market which makes this choice is not
obviously efficient. Technology continues to lower the market transaction
costs, and patrons have probably not considered directly creating such
markets in order to fund research.
Common rules of thumb for guessing which futures or other financial markets will be regularly traded, and which will not, can be misleading if applied naively to information prizes. A prize has been awarded for the first human-powered flight across the English channel, but ordinary rules of thumb would suggest one wouldn't find active markets selling seats on regularly scheduled flights like this. Similarly, information prizes do not require active or regular trading in their underlying contingent assets.
An information prize is a lot like an ordinary prize given to the first person to pay to post a solution to some hard puzzle, a solution that can bear later scrutiny, or like a European patent, which is awarded on a first-to-file, rather than first-to-invent, basis. As with other prizes, only two actors are needed to make an information prize work: a patron to subsidize a dumb broker on a question, and a researcher who wins the prize by betting right against this dumb broker. These bets are much like first-to-file patent claims or posted puzzle solutions, when contestants must pay to file such claims. The first person to bet in the right direction gets the subsidy at those odds, even if no one else competes for that prize.
While one can imagine prizes so small that no one would bother to search for them or compete for them, it seems clear that most any question would be investigated if only a large enough prize were offered on it. Thus the question isn't so much whether information prizes can work, but rather what granularity optimally trades off the various costs, and how effective and costly the information prize mechanism is at that optimal granularity.
Once this all were legal and respectable, then for science claims relevant
for public policy, such as the greenhouse effect, or which capture the
public imagination, public interest might by itself create substantial
information prizes. Trading in science markets might even allow science
amateurs, or amateur wanna-bes, to feel more involved in the whole process.
And extra speculation activity might provide enough market liquidity to
make feasible new forms of corporate insurance against technological risk.
Such markets should have other virtues as well. For example, arbitrage trading should make the current market consensus approximately self-consistent across a very wide range of questions from many different fields and specialties. And technical traders should make price movements approach the ideal random walk, correcting for common human biases toward over-confidence, anchoring on initial estimates, etc., biases well-documented in existing research.
Trading

The simplest information prize scenario involves one researcher working to win one prize, and obtaining their reward upon a judge's verdict. But markets, particularly in contingent assets, offer many opportunities for specialization not easily supported by other forms of patronage.
As with ordinary prizes, information prizes can allow incentives to be
validated by very distant future judges. Current researchers need only be
able to sell their contingent assets to others who have been convinced to
share estimates of their value. Of course there would be problems for
prizes over time-scales where one might fear for the stability of
organizations who agree to do judging, or of financial institutions who
issue assets.
Improving on ordinary prizes, information prizes can be awarded to many
different contributors, without requiring that these contributors make
prior agreements about how to split the prize, and without these
contributors even being identified. Anyone who has better information
about some question can gain by trading on that question, and trading can
be anonymous.
Researchers can also specialize through conditional trades and offers. For
example, a patron might just subsidize the question of large scale universe
curvature, even though there is a long chain of indirection between this
question and direct observations. Researchers who hold expert beliefs
relating say the Hubble constant, H, to the curvature, but who just rely on
standard estimates of H, might then create conditional offers expressing
their beliefs in curvature conditional on values of H. This would transfer
some of the thickness in the subsidized market to markets in H, and thereby
offer direct incentives for research on H. Estimates of H would depend in
turn on many other issues, so information prizes might be able to support
substantial "subcontracting" for information.
In light of these considerations, patrons might well prefer to fund fewer
large information prizes, on questions most directly related to their
interests, to be judged at great expense in the distant future. They might
expect conditional offers, multiple contributions to the same question, and
perhaps large research labs, to break this down into smaller rewards. But
even if as many information prizes were offered as peer-reviewed proposals
are now funded, the overheads to run the process seem manageable.
With any funding mechanism, researchers have incentives to keep quiet about
special insights until they can establish their reward through the funding
mechanism. For example, researchers may now not publicize their results
until their publication or funding proposal is accepted. Similarly, with
an information prize researchers would not want to publicize results until
they had made all the trades they are willing to risk.
But market trades can be fast, and after they've traded, researchers should want to publicize, if anything, in order to move the market price in the direction of their trades. If, however, researchers remain substantially uncertain about the implications of their results, they may still want to sit on them, similar to the way someone now might sit on information they weren't sure how to package into a publication.
Researchers might also have incentives to start false rumors, similar to the way researchers might now want to mislead their competitors about promising research paths. False rumors in most any context are best dealt with by skeptical listeners who consider the track record of their sources.
Judging

An information prize induces a current consensus estimate regarding the probabilities of the various possible answers to its question, in the market prices. The judging process also produces a consensus, but is not intended to be the engine of knowledge production. The judges are just there as a threat, to keep the market traders honest. So rather than have patrons pay directly for judging, we might prefer incentives for traders to "settle out of court".
Imagine that a patron has arranged with some judging organization to offer a verdict on some question at some future date, if they are given a judging fee "j of X". If the financial institutions who issue the contingent assets cooperate, this fee could come from taking a percentage of the total capital split across a question. So if n units of X have each been split into "X if A" ..., then each "X if A" can be devalued to become only "(n-j)/n of X if A". This creates an incentive to rejoin such assets, and get out of this market, just before such a devaluation. If everyone gets out, so that n falls to zero, there is nothing left to tax and no judging need happen; the traders have in effect settled out of court.
Imagine a question with a judging fee fraction limit of f, and with a
judging group which has convened with a fee "j of X" they are authorized to
spend. What if, after spending part, but not all, of their fee the judges
decide that to be judged clearly the question really requires a much larger
judging fee, or should be judged much later? If so authorized, the judges
could then declare a new judging date and required judging fee. The
remaining unused judging fee "j' of X" could be returned by revaluing the
current assets on the question, and a remaining judging fee fraction f' =
j'f/j declared to limit future judging devaluations.
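The following sketch illustrates this fee accounting; the numbers are merely illustrative assumptions.

    def devalue(n, j):
        """Pay judges "j of X" by devaluing the n split units of X:
        each "X if A" becomes "(n - j)/n of X if A"."""
        return (n - j) / n

    def rebudget(f, j, j_spent):
        """If judges stop early, the unused fee j' = j - j_spent is returned
        by revaluing the assets, and the remaining judging fee fraction
        becomes f' = j' * f / j."""
        return (j - j_spent) * f / j

    print(devalue(n=1000, j=50))               # 0.95 of face value remains
    print(rebudget(f=0.05, j=50, j_spent=20))  # f' = 0.03 remains for later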
A similar approach could be used to clarify a hastily worded question,
after interest in it has greatly increased; judges would convene to do the
rewording, fully expecting to not issue a final verdict. Also, the lottery
approach could be used without judges to coordinate choices of which of
many questions debate might focus on.
As with ordinary prizes, risk tolerances are what ultimately limit the use
of such lotteries. Large research labs might be willing to tolerate the
risk associated with rather expensive judging, perhaps even having large
juries randomly selected from several cultures, trained in certain
specialties, and then deliberating long and carefully. Patrons
particularly concerned about judging bias might even insist on such
expensive judging. Patrons might also subsidize several different assets,
all on the same question but to be judged in different ways. Persistent
differences in the market price of these assets would flag a market
consensus about judging bias that the patron could then investigate more
closely.
If desired, judges might also be kept honest using "appeals" assets, assets
on the same question, but judged by an independent group later and in more
detail. For a limited period after a verdict is announced, the judges
would have to spend up to some fraction of their judging fee trying to move
the price in the appeals market toward the verdict those judges specified.
Judges could end up holding contingent assets saying their verdict would be
upheld in the appeals market.
Consensus

Perhaps the most promising feature of information prizes, and the contingent asset markets they use, is that they can function not only as a patronage institution, but also as a consensus institution. Others have similarly suggested such a use of betting markets [Br,Ho,Le,Ze].
In addition to the various U.S. federal agencies whose primary task is to
manage and fund basic science research, such as NSF and NIH, there are
other science-based federal agencies, such as FDA and EPA, whose primary
task is more that of consensus generation. The FDA is supposed to evaluate
the effectiveness of various drugs, and the EPA is supposed to evaluate the
harm of various pesticides, and both must take actions with great social
consequences based on these evaluations. Many other government agencies
are similarly tasked to find and act on consensus estimates.
Legal courts also must often evaluate scientific claims offered in support of criminal or civil cases, evaluating not only what is "true" but also what people could have been reasonably expected to believe was true. Mass media function to find and summarize scientific consensus for people who must make choices under uncertainty. And many professions, such as the medical profession, function in part to generate and distribute consensus on information relevant to their clients' choices.
These various consensus institutions make use of a variety of supporting
institutions which generate "expert" consensus. Mass media use established
"pundits", and quote from "reports" by established policy think tanks.
Academic organizations often form committees to issue such reports, and
also generate consensus through review articles, textbooks, what they
teach, and what gets published. Professional organizations also often
issue consensus statements for use by their members.
These institutions generate "consensus" not in the sense that they make
everyone agree, but in the sense that they facilitate the formation of
temporary widely shared estimates of the relative weights placed on the
various alternatives. Most simply, but perhaps most important, they
influence which points of view are widely considered "reasonable" or
"defensible", and which are "wacko", "out of the mainstream", etc.
Most of the participants in these institutions are far from disinterested
in the consensus they generate, and most criticism of these institutions is
concerned with possible resulting bias. Mass media or academia are said to
be controlled by politically correct liberals, or are the voice of
corporate capitalism, depending on the critic.
There is not space here for as detailed a discussion of consensus
institutions as was given for funding institutions. But in general, a good
forum for generating consensus would be a "level playing field", mainly
reflecting the views of those who have most considered the issue, if they
largely agree, and rewarding those whose views are later validated. The
consensus generated would ideally be relatively clear and direct, be
revised quickly in response to new developments, and be relatively
difficult for advocates to push in a desired direction by merely spending
more money or by shouting louder and longer.
The market price of a relevant contingent asset seems a good candidate for
such a robust consensus institution. Market estimates on sea levels given
greenhouse emissions, for example, should be direct, responsive, and
difficult to change substantially by mere money or talk.
In fact, challenges to bet, to "put your money where your mouth is", have
long been respected as an antidote for excessive verbal wrangling, and
academic researchers (of near equal stature) often make token reputation
bets. For example, early in the scientific revolution (~1650), chemical
physicians, excluded by the standard physicians from teaching in the
British schools, repeatedly challenged their opponents to bet on their
relative rate of patient deaths [De].
Imagine that mass media, and society in general, deferred to such market estimates as much as they now defer to market estimates of the relative value of IBM and AT&T; media rarely and only gently question whether such assets are over- or under-priced. Government agencies charged with generating and acting on consensus, such as on greenhouse gases, could simply patronize markets in relevant questions, and usually just treat the market prices as probabilities when choosing actions. Even without such patronage, public interest in policy relevant questions might be enough to induce such markets, if they were legal and respected.
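The following sketch illustrates an agency treating a market price as a probability when choosing between actions; the question, price, and payoffs are merely illustrative assumptions.

    # Market consensus odds on an illustrative question:
    p = 0.30    # price of "X if sea levels rise over a meter by 2100"

    def expected_cost(payoffs, p):
        """Expected cost of an action, treating the price p as a probability."""
        return p * payoffs["if rise"] + (1 - p) * payoffs["if no rise"]

    actions = {
        "build dikes now": {"if rise": 10, "if no rise": 10},   # sure cost
        "wait and see":    {"if rise": 60, "if no rise": 0},    # a gamble
    }

    best = min(actions, key=lambda a: expected_cost(actions[a], p))
    print(best)    # 10 < 0.3 * 60 = 18, so "build dikes now" is chosen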
Do-gooders seeking social credit for improving public opinion might then focus their attention on such markets. And statistics on the profitability of mutual funds based on various pundits' predictions might serve as useful reputation scores. Ordinary folk might even collect and brag about such reputation scores, with recreational betting on science questions possibly sparking wider interest in scientific controversies.
Of course contingent asset markets are not perfect consensus institutions. For example, if different sides of an issue have systematically different risk preferences, the result may be biased toward the less risk averse. And questions whose answers are likely to correlate with how well the market as a whole does will have probabilities biased toward pessimism, due to the market risk premium. But overall this seems a promising consensus institution.
Conclusion

We may be relying too heavily on piggyback accountability when we fund basic research through proposal peer review, and so we may risk popularity equilibria wherein the interests of research patrons are largely ignored, and even the interests of insiders may be greatly diluted by efforts to signal insidership. But federal funding and the nature of basic research make it difficult to rely more on accountability by monitoring. Thus we should consider relying more on contract accountability, and on prizes in particular.
Ordinary prizes must focus on specific accomplishments of foreseeable value, or else allow judging flexibility, which can hide bias. Information prizes, however, allow patrons to focus on questions they would like answered, while still minimizing judging flexibility. In addition, the related contingent asset markets can serve as robust institutions for generating policy relevant consensus, relatively resistant to direct partisan influence.
Information prizes work by subsidizing brokers trading in assets contingent on the different possible answers to the funded question. Information prizes do not, however, require active trading in these assets; one person alone can win a prize, much like someone winning a first-to-file patent by paying to file a claim before anyone else. There remains a real issue regarding the optimal granularity for information prizes, and the overhead costs at this optimum relative to other funding mechanisms.
A variety of specific mechanisms have been suggested to allow wide
application of information prizes, and the reduction of overhead costs, and
reasoned speculations have been offered regarding the consequences of wide
use of such prizes. This paper has attempted to make the case for
information prizes plausible, but no more than plausible. More precise
models and experiments, however, might be useful in making a more careful
evaluation.
References

[Br] Brunner, J. (1975) The Shockwave Rider, Harper & Row, NY.
[By] Byrne, G. (1989) "A Modest Proposal", Science, 244, April 2, p.290.
[De] Debus, A. (1970) Science and Education in the Seventeenth Century,
MacDonald, London.
[Gi] Gilmore, J.B. (1991) "On Forecasting validity and finessing
reliability", Behavioral and Brain Sciences, 14:1, pp.148-149.
[Ha] Hanson, R. (1990) "Could Gambling Save Science? Encouraging an Honest
Consensus" Proc. Eighth Intl. Conf. on Risk and Gambling, July, London.
[He] Heilbron, J. (1982) Elements of Early Modern Physics, U. California
Press.
[Hi] Hirshleifer, J. (1971) "The Private and Social Value of Information and the Reward to Inventive Activity", American Economic Review, 61:4, Sept., pp.561-74.
[Ho] Hofstee, W. (1984) "Methodological Decision Rules As Research Policies: A Betting Reconstruction of Empirical Research", Acta Psychologica, 56, pp.93-109.
[Le] Leamer, E. (1986) "Bid-Ask Spreads For Subjective Probabilities", Bayesian Inference and Decision Techniques, ed. P. Goel, A. Zellner, Elsevier Sci. Publ., pp.217-232.
[Mi] Milgrom, P., Roberts, J. (1992) Economics, Organization and Management, Prentice Hall.
[Pa] Pavitt, K. (1991) "What makes basic research economically useful?"
Research Policy, 20, pp.109-119.
[Ro] Roy, R. (1985) "Funding Science: The Real Defects of Peer Review and
An Alternative To It", Science, Technology and Human Values, 10:3
Summer, pp.73-81.
[Tu] Turner, S. (1990) "Forms of Patronage", Theories of Science in Society, ed. S. Cozzens, T. Gieryn, Indiana U. Press, pp.185-211.
[Wr] Wright, B. (1983) "The Economics of Invention Incentives: Patents, Prizes, and Research Contracts", American Economic Review, Sept., pp.691-707.
[Ze] Zeckhauser, R., Viscusi, W. (1990) "Risk Within Reason", Science, 248,
May 4, pp.559-564.