How Not To Predict The Future

Good forecasting thrives on a delicate balance of math, expertise, and…vibes.

Molly Hickman in Asterisk: Predicting the future is difficult, but not impossible — and some people are much better at it than others. This insight has spawned a community dedicated to developing better and better methods of forecasting. But while our techniques have become increasingly sophisticated, even the best forecasters still make mistakes.

In my work as an analyst for the Forecasting Research Institute, and as a member of the forecasting collective Samotsvety, I’ve had plenty of opportunities to see how forecasters err. By and large, these mistakes fall into two categories. The first is trusting our preconceptions too much. The more we know — and the more confident we are in our knowledge — the easier it is to dismiss information that doesn’t conform to the opinions we already hold. The second, more insidious error is putting too much stock in clever models that minimize the role of judgment. Just because there’s math doesn’t make it right.

Forecasters versus experts

The first scientific study of judgmental forecasting was conducted in the 1960s by a gentleman at the CIA named Sherman Kent. Kent noticed that in their reports, intelligence analysts used imprecise phrases like “we believe,” “highly likely,” or “little chance.” He wanted to know how the people reading the reports actually interpreted these phrases. He asked 23 NATO analysts to convert the phrases into numerical probabilities, and their answers were all over the place — “probable” might mean a 30% chance to one person and an 80% chance to another. Kent advocated the use of a few consistent odds expressions in intelligence reports, but his advice was largely ignored. It would take another two decades for the intelligence community to seriously invest in the study of prediction.

The modern forecasting community largely emerged from the work of one man: Philip Tetlock. In 1984, Tetlock, then a professor of political science at UC Berkeley, held his first forecasting tournament. His goal was to investigate whether experts — government officials, journalists, and academics — were better at making predictions in their areas of interest than intelligent laypeople. Over the next two decades, Tetlock asked both experts and informed laypeople to make numerical predictions of the likelihoods of specific events. The results were published in his 2005 book, Expert Political Judgment: How Good Is It? How Can We Know? The upshot: Experts make for terrible forecasters.

Tetlock’s work helped inspire the ACE Program forecasting tournament run by the Intelligence Advanced Research Projects Activity (IARPA), a research arm of the American intelligence community. In the first two years of the tournament, Tetlock’s team won so handily that IARPA canceled the remaining competitions.

If expertise doesn’t make for accurate predictions, then what does? Among other things, the best forecasters have a quality that Tetlock borrowed from the research of psychologist Jonathan Baron: “active open-mindedness.” Instead of operating on autopilot, those who score high in active open-mindedness take into consideration evidence that goes against their beliefs, pay attention to those who disagree with them, and are willing — even eager — to change their minds. This can be particularly difficult for subject matter experts, who may be heavily invested in particular narratives and struggle to reconcile new evidence with their existing worldview. It’s active open-mindedness that separates the “superforecasters” from the chaff.

More here.
