In four puzzling areas of information in politics, simple intuition and simple theory seem to conflict, muddling policy choices. This thesis elaborates theory to help resolve these conflicts.
The puzzle of product bans is why regulators don't instead offer the equivalent information, for example through a "would have banned" label. Regulators can want to lie with labels, however, either due to regulatory capture or to correct for market imperfections. Knowing this, consumers discount regulator warnings, and so regulators can prefer bans over the choices of skeptical consumers. But all sides can prefer regulators who are unable to ban products, since then regulator warnings will be taken more seriously.
The puzzle of voter information is why voters are not even more poorly informed; press coverage of politics seems out of proportion to its entertainment value. Voters can, however, want to commit to becoming informed, either by learning about issues or by subscribing to sources, to convince candidates to take favorable positions. Voters can also prefer to be in large groups, and to be ignorant in certain ways. This complicates the evaluation of institutions, like voting pools, which reduce ignorance.
The puzzle of group insurance as a cure for adverse selection is why adverse selection should be less of a problem for groups than for individuals. The usual argument about reduced variance of types for groups doesn't work in separating equilibria; what matters is the range, not the variance, of types. Democratic group choice can, however, narrow the group type range by failing to represent part of the electorate. Furthermore, random juries can completely eliminate adverse selection losses.
The puzzle of persistent political disagreement is that for ideal Bayesians with common priors, the mere fact of a factual disagreement is enough of a clue to induce agreement. But what about agents like humans with severe computational limitations? If such agents agree that they are savvy in being aware of these limitations, then any factual disagreement implies disagreement about their average biases. Yet average bias can in principle be computed without any private information. Thus disagreements seem to be fundamentally about priors or computation, rather than information.
In the four chapters which follow in this thesis, I examine four important and puzzling areas of social information phenomena: paternalistic product and activity bans, voter incentives to become informed, adverse selection regarding collective choices, and the causes of persistent disagreement. In all of these puzzle areas simple intuition seems to be in conflict with simple theory, muddling important policy choices.
To help resolve each conflict between simple intuition and simple theory, I bring more sophisticated theory to bear, to identify plausible but overlooked social processes at work. When possible, I also use this more sophisticated theory to compare welfare across alternative institutions.
All of these puzzle areas are also either centered in or have strong relevance for important political phenomena. Yet they are also all familiar topics in economic and policy analysis. Thus this thesis can be thought of as centered either within formal political theory, within formal analysis of law and policy, or within the economics of information and public institutions.
Pure preference divergence, such as regulatory capture, is an unsatisfying explanation of product bans; why don't captured regulators instead seek direct cash transfers? And how could bans be an attempt to hide the fact of transfers when bans are such a visible and easily monitored action? If, on the other hand, regulators simply have better information than consumers, why don't they just label certain products as "would have banned"?
Chapter 2 shows that either a small degree of regulator capture or a small deviation from fully competitive markets gives regulators a small incentive to lie about product quality. But small lies are corrosive, inducing a great deal of consumer skepticism regarding regulator statements. Thus even ideal regulators want to ban products which are bad enough, rather than live with the choices of ignorant consumers. Similarly, parents can want to forbid their skeptical children from engaging in harmful activities.
When regulators and parents are forbidden from banning, however, consumers and children take regulator labels and parental warnings more seriously. And a welfare analysis reveals that for a wide class of cases, all parties on average prefer the outcomes when banning is forbidden. In these cases, bans can be seen as a commitment failure. This suggests that we consider the alternative institution of constitutional prohibitions on product bans, analogous to First Amendment prohibitions on media bans.
Many have wondered why independently acting voters in large electorates would have much instrumental reason to vote. After all, such a voter should discount the benefits of voting, but not the costs, by the probability of being pivotal (i.e., that the election is decided by one vote).
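The scale of this discount is worth making concrete. A minimal numeric sketch, assuming a toy electorate in which each of n other voters independently votes for either side with probability one half, so that only an exact split makes one more vote decisive:

```python
from math import exp, lgamma, log, pi, sqrt

def pivot_prob(n, p=0.5):
    """Chance that n other voters (n even) split exactly evenly,
    so that one additional vote decides the election."""
    k = n // 2
    # binomial pmf at k, computed in log space to avoid float underflow
    log_pmf = lgamma(n + 1) - 2 * lgamma(k + 1) + k * log(p) + k * log(1 - p)
    return exp(log_pmf)

for n in (100, 10_000, 1_000_000):
    # Stirling's approximation gives pivot_prob ~ sqrt(2 / (pi * n))
    print(f"{n:>9} voters: P(pivotal) = {pivot_prob(n):.2e} "
          f"(~ {sqrt(2 / (pi * n)):.2e})")
```

Even under this most favorable assumption of a dead-even race, the pivot probability falls like one over the square root of the electorate size; any expected lopsidedness shrinks it far faster.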
By analogy, why would voters acquire political information, such as via the ever-popular political news? Is politics that entertaining or valuable in day-to-day living? Yet actual political choices do not seem to reflect as uninformed an electorate as one might fear. (Though clearly the electorate is much less informed than many would wish.)
The analogy between voting and voter information has limits, however. While voters may find it hard to commit to vote, voters can commit to holding relevant political information, either by just acquiring it or by subscribing to an information source. And candidates who can observe such early efforts should adjust their positions to better favor informed voters. Since this influence is not diluted by the probability of being pivotal, it can give voters strong incentives to become informed.
Chapter 3 also considers voter preferences for being in large vs. small voter groups, where candidate positions cannot distinguish group members. Voters can prefer to be in large groups because scale economies in information production can override free-riding considerations.
Finally, it is shown that a certain type of voter ignorance, which prefers negative to positive news, can benefit voters both individually and collectively, by eliminating wasteful instabilities in candidate positions. This complicates consideration of the alternative institution of voluntarily formed voting pools, where all the votes of pool members are given to one pool member at random. Such pools can induce better informed voters, but the value of ignorance can make voters want to commit not to join such pools.
While there are obvious tax and overhead-reduction advantages of employer-based health insurance, it is often argued that a key function of group insurance is to avoid adverse selection problems in individual insurance. Many government-imposed restrictions, such as limits on hours of work, are explained similarly: by making a common choice we are said to avoid inefficient personal signaling.
Adverse selection happens in separating equilibria of signaling games. In such games, individuals vary in their innate "type," and "good" types attempt to distinguish themselves from "bad" types via their actions. A group making a collective choice also has a "type," however, which is the set of its member types. So why don't bad companies buy more health insurance than good companies?
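The separation mechanism can be sketched with a minimal Spence-style signaling model; the wages, types, and cost function below are illustrative assumptions, not the chapter's model. The point to notice is that the cheapest fully separating signal is pinned down by the boundary type's incentive to mimic:

```python
# Illustrative least-cost separating equilibrium, Spence-style.
# All parameters below are hypothetical choices for this sketch.
w_lo, w_hi = 1.0, 2.0            # wages paid to identified low/high types
theta_lo, theta_hi = 1.0, 2.0    # innate types; signaling is cheaper for high types
cost = lambda signal, theta: signal / theta

# High types signal just enough that low types prefer not to mimic:
# w_hi - e/theta_lo <= w_lo  =>  e >= theta_lo * (w_hi - w_lo)
e_star = theta_lo * (w_hi - w_lo)

assert w_hi - cost(e_star, theta_lo) <= w_lo   # low type won't mimic
assert w_hi - cost(e_star, theta_hi) >= w_lo   # high type still gains by signaling
print(f"least-cost separating signal e* = {e_star}")
```

Note that e_star depends only on which types exist at the boundary, not on how likely each type is, which foreshadows the support-versus-variance point below.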
The usual argument one hears is that a group, by averaging over its members, has a lower variance of possible risk types than each member. However, given the usual equilibrium refinements which select full separation, so that each type is distinguished from all the rest, equilibria and their inefficiencies depend only on the support, not the variance, of a distribution of types. Thus the usual argument is highly suspect.
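A hedged numeric sketch, with made-up uniform types, shows the gap in the usual argument: averaging shrinks the variance of a group's mean type quickly, but the support, every value the group mean could possibly take, never narrows at all.

```python
import random
from statistics import pvariance

# Hypothetical illustration: members' risk types drawn uniformly on [0, 1].
# The variance of a group's mean type falls like 1/n, but its support does
# not narrow: any mean in [0, 1] remains attainable (all members could
# share any common type), and in a fully separating equilibrium only
# that support matters.
random.seed(0)
lo, hi = 0.0, 1.0
for n in (1, 10, 100):
    means = [sum(random.uniform(lo, hi) for _ in range(n)) / n
             for _ in range(20_000)]
    print(f"group size {n:>3}: variance of mean ~ {pvariance(means):.4f}, "
          f"support still [{lo}, {hi}]")
```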
Chapter 4 suggests that the key is not averaging to reduce variance, but limiting participation to narrow the support. For example, because a majority vote can fail to represent up to half of the electorate, this narrows the range of group types which can be inferred from democratic choices. And decisions by a random jury, who fail to represent most of a large group, can in the limit avoid all adverse selection losses from independent individual risks. This suggests an advantage of judge-made laws aimed at excessive signaling, such as liquidated-damages rules.
While honest differences of opinion seem ubiquitous in the world, simple theory suggests the remarkable conclusion that rational agents simply cannot agree to disagree. Consider two ideal Bayesian jurors of the O.J. (Simpson) trial. They start (at birth say) with identical beliefs, receive different private information before and during the trial, learn something about each other's beliefs during deliberation, and finally estimate the chance O.J. did it. If during deliberation these jurors reach a highly common belief about which of them estimates a higher chance that O.J. did it, this turns out to be enough information to allow these jurors to come to nearly the same estimates regarding O.J.'s chances.
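This agreement dynamic can be made concrete in the style of Geanakoplos and Polemarchakis; the nine states, the two partitions, and the event below are invented for illustration. Each juror announces her posterior for the event, the other refines her information by what that announcement reveals, and the announcements must eventually coincide:

```python
from fractions import Fraction

# Sketch of the iterated-announcement agreement dynamic for two ideal
# Bayesian jurors with a common uniform prior over states 1..9.
# Partitions encode each juror's private information; the event is the
# set of states in which O.J. did it.

def cell_of(partition, state):
    return next(c for c in partition if state in c)

def posterior(partition, event, state):
    c = cell_of(partition, state)
    return Fraction(len(c & event), len(c))

def refine(partition, other, event):
    # Split each cell by the value the other juror would have announced.
    refined = []
    for c in partition:
        buckets = {}
        for w in c:
            buckets.setdefault(posterior(other, event, w), set()).add(w)
        refined.extend(frozenset(b) for b in buckets.values())
    return refined

omega = 1                                  # the true state
event = frozenset({3, 4})                  # states where O.J. did it
p1 = [frozenset({1, 2, 3}), frozenset({4, 5, 6}), frozenset({7, 8, 9})]
p2 = [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8}), frozenset({9})]

while True:
    a1 = posterior(p1, event, omega)
    p2 = refine(p2, p1, event)
    a2 = posterior(p2, event, omega)
    p1 = refine(p1, p2, event)
    print(f"juror 1 says {a1}, juror 2 says {a2}")
    if a1 == a2:
        break                              # they can no longer disagree
```

In this toy run the jurors initially announce different estimates, but learning what each announcement implies about the other's information forces their estimates together after a couple of rounds.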
Most researchers who are dissatisfied with explaining apparent disagreement as due to different initial (i.e., prior) beliefs, or as due to posturing by people who really agree, have looked to bounded rationality as the explanation. And it does seem that the calculations required of an ideal Bayesian are typically far beyond the ability of mere mortals.
Existing research on bounded rationality, however, has either assumed very specific computational strategies, or has stayed general at the cost of allowing nearly as much computation as an ideal Bayesian requires. For example, some models assume that agents know anything a Turing machine can compute in any finite time, and other models assume that agents can compute exact expected values over vast state-dependent sets of possible states, sets which satisfy various strong axioms.
In contrast, chapter 5 allows agents to make arbitrary state-dependent computational errors. These agents are constrained only to be savvy in the sense of being aware of certain easy-to-compute implications of the fact that agents make such errors. Even this minimal degree of rationality, however, implies that agents with common priors who agree to disagree about O.J.'s chances must agree to disagree about each of their average biases when making such estimates.
Since average bias could in principle be computed independently of private information, this situation is a computational disagreement, similar to a situation where one agent always estimates pi to be 3.14, another always estimates pi to be 22/7, and both are fully aware of the other's alternative method. It seems that disagreements are about computation or priors, not information.