Confidence Numbers in Science Journalism

by Robin Hanson

Many controversies in science journalism turn on divergences between the confidence that journalists have, or should have, in a reported result and the confidence readers infer from press coverage. Coverage of breast implants, the Mars rocks, cold fusion, global warming, Alar on apples, and cloning has highlighted such issues.

Some stories, such as the "orgasm pill", are considered "too good to check". Should one run a story before it has been peer reviewed? Or is any such selectivity censorship? Should journals refuse to publish articles that have already been discussed in the media? When most experts that journalists contact are on one side of an issue, yet most of the coverage goes to the dissenters, does that distort things?

Journalists often throw up their hands and paint these situations as forums where all political sides must be heard, where the new and unusual view is the real "news", or as tradeoffs between the simple, fast answers the public wants and the complexity and caution of experts, given the trouble it takes to learn and describe all that context and uncertainty.

However, it seems to me that in such situations the public could get a simple, fast answer that also acknowledges uncertainties, avoids the biases described above, and requires little more effort from journalists. All journalists need do is give a numerical estimate of the confidence they have that the result will hold up under further scrutiny. They could, for example, pass on to their readers an exciting but questionable story like the orgasm pill, Alar, or cold fusion, but say that they think there's less than a 10% chance that it will hold up.

If the scientific community found that these estimates were on the whole reasonable, it needn't try to prohibit press coverage before publication in an attempt to correct for perceived biases. And people who thought that a certain source was systematically unreasonable could look at the track record of these numbers. Media companies who feared such auditing could audit their own reporters internally, compare their performance to that of their competitors, and brag when they did better.
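
To illustrate, here is a minimal sketch of how such an audit might work, in Python. The forecasts are hypothetical and the function names are mine, not part of any existing system: each record pairs a stated probability with whether the result in fact held up, and the audit checks both overall accuracy (a Brier score) and calibration (whether stories given about 10% confidence hold up about 10% of the time).

```python
# A minimal sketch of auditing a reporter's track record of
# confidence numbers. The data is hypothetical: each pair is
# (stated probability the result holds up, whether it did).

forecasts = [
    (0.10, False),  # e.g. "less than a 10% chance" on a cold-fusion-style story
    (0.80, True),
    (0.60, True),
    (0.30, False),
    (0.90, True),
]

def brier_score(records):
    """Mean squared error of stated probabilities vs. outcomes (0 = perfect)."""
    return sum((p - float(hit)) ** 2 for p, hit in records) / len(records)

def calibration_table(records, bins=5):
    """Group forecasts into probability bins; compare the average stated
    probability in each bin to the actual frequency of success."""
    table = {}
    for p, hit in records:
        b = min(int(p * bins), bins - 1)
        table.setdefault(b, []).append((p, hit))
    rows = []
    for b in sorted(table):
        group = table[b]
        avg_p = sum(p for p, _ in group) / len(group)
        freq = sum(hit for _, hit in group) / len(group)
        rows.append((avg_p, freq, len(group)))
    return rows

print("Brier score:", round(brier_score(forecasts), 3))
for avg_p, freq, n in calibration_table(forecasts):
    print(f"stated ~{avg_p:.2f}  actual {freq:.2f}  (n={n})")
```

A reader, a rival outlet, or an internal auditor could run the same simple tally over any source's published numbers.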

This approach clearly allows uncertainty to be acknowledged, and other biases to be corrected for, without burdening readers with more context. And it needn't take much more work for reporters because, by the time a reporter writes such a story, she has already long since formed and repeatedly updated an opinion of what she thinks the chances are. She used this opinion, for example, in selecting people to interview and quote, and in deciding how hard to push for the story. Reporters just have to learn how to write down a number.

I have in the past heard people propose that pundits and experts who make claims to the media, or who are quoted in the media, ought to give such numerical estimates of confidence. But while this isn't an unreasonable thing for them to do, I think this proposal focuses on the wrong players.

The weakest tie is not between reporters and experts but between readers and reporters. The reader-reporter relation also has the most repeated interaction and the strongest brand names, both of which improve the prospects that reputations built on numerical estimates will help. Reporters tend to form strong judgments about which pundits and experts they trust, and yet don't have that much repeated interaction with any one expert.

Experts don't typically make enough public estimates in time for a track record to substantially affect what specific reporters think of them, especially since reporters have so many other informal social mechanisms to help them figure out whom they trust. Media outfits with a brand and reputation to protect, on the other hand, can be expected to make many such estimates over the course of a few years, and readers and viewers don't really have many other good ways to estimate the quality of a branded media source.

As with any new proposal, a big question is: if this is so great, why hasn't anyone done it yet? Part of it may be the image that journalists are merely objective and do not reflect their own opinions, which most people in the know realize is bunk, but which the media seems to want to project anyway. And there is the mental block that people trained in the humanities have against expressing their opinions as numbers.

Perhaps also any one media outfit is reluctant to be the first to do this if it already thinks its readers trust it. Or maybe outfits fear that doing this would acknowledge that their readers might not trust them, publicly flagging them as admitting they are not top media outfits.

As a final note, I think that if such media numerical confidence estimates became widespread, that would greatly strengthen the demand for something like idea futures markets on such questions. Journalists would probably rather report a confidence number from a market than put their necks out and state their own opinions.
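
For a sense of how simple that reporting step could be: in an idea futures market, a claim might trade as a contract paying $1 if the result holds up under scrutiny and $0 otherwise, so its price can be read directly as a confidence number. A minimal sketch, with purely hypothetical quotes:

```python
# Hypothetical sketch: a contract pays $1 if the claim holds up and
# $0 otherwise, so its market price is itself a probability estimate.
# A journalist could quote the bid-ask midpoint as the confidence number.

def market_confidence(bid, ask, payoff=1.0):
    """Read a probability off a pays-$1-if-true contract's quotes."""
    return ((bid + ask) / 2) / payoff

# e.g. an imagined "cold fusion replicates" contract quoted at 5-9 cents:
print(f"Market-implied confidence: {market_confidence(0.05, 0.09):.0%}")
```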


Robin Hanson May 1, 1997