Predicting the Future

Prof. Don Moore helps turn ordinary citizens into accurate forecasters of geopolitical events

Forecasters often overestimate how good they are at predicting geopolitical events—everything from who will become the next pope to who will win the next national election in Taiwan.

But Berkeley Haas Professor Don Moore and a team of researchers found a way to dramatically improve forecast accuracy: training ordinary people to become superforecasters whose predictions grow more confident and more accurate over time.

The team, working on The Good Judgment Project, tested its forecasting methods during a four-year, government-funded geopolitical forecasting tournament sponsored by the U.S. Intelligence Advanced Research Projects Activity (IARPA). The tournament, which began in 2011, aimed to improve geopolitical forecasting and intelligence analysis by tapping the wisdom of the crowd. Moore’s team, which combined best practices from psychology, economics, and behavioral science developed by researchers nationwide, proved so successful in the first years of the competition that it edged out the other four teams, becoming the sole project to retain funding.

The study was unique in that it tracked forecasting accuracy over time, using a huge data set: 494,552 forecasts by 2,860 forecasters predicting the outcomes of hundreds of events.

Study participants, a mix of scientists, researchers, academics, and other professionals, weren’t experts on what they were forecasting; rather, they were educated citizens who stayed current on the news.

Their training included four components: considering how often, and under what circumstances, events similar to the one in question had occurred; averaging across opinions to exploit the wisdom of the crowd; using mathematical and statistical models when applicable; and reviewing common forecasting biases, in particular the risk of both overconfidence and excess caution in estimating probabilities.
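The crowd-averaging step is simple enough to sketch in a few lines. The snippet below is a minimal illustration of that idea only; the estimates are invented, and the Good Judgment Project’s actual aggregation methods were considerably more sophisticated.

```python
# Minimal sketch of crowd averaging: pool individual probability estimates
# for the same yes/no question into a single aggregate forecast.
# The estimates below are invented for illustration.

def crowd_forecast(probabilities):
    """Average a list of probability estimates, each between 0 and 1."""
    if not probabilities:
        raise ValueError("need at least one forecast")
    return sum(probabilities) / len(probabilities)

# Five hypothetical forecasters judge the same geopolitical question.
individual_estimates = [0.55, 0.70, 0.60, 0.45, 0.65]
print(f"Crowd forecast: {crowd_forecast(individual_estimates):.2f}")  # 0.59
```

Even this naive average tends to beat most individual forecasters, because independent errors partially cancel out.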

Over time, this group answered a total of 344 specific questions about geopolitical events. All of the questions had clear resolution criteria, had to resolve within a reasonable time frame, and had to be relatively difficult to forecast.

The majority of the questions targeted a specific outcome, such as “Will the United Nations General Assembly recognize a Palestinian state by September 30, 2011?” or “Will Cardinal Peter Turkson be the next pope?”

The researchers asked participants to assess their own expertise on each question. By the end of the tournament, the researchers found that, on average, group members reported being 65.4 percent sure they had correctly predicted what would happen. In fact, they were correct 63.3 percent of the time.

In addition, as participants gathered more information, both their confidence and their accuracy improved. In the first month of forecasting during the first year, confidence was 59 percent and accuracy was 57 percent. By the final month of the third year, confidence had increased to 76.4 percent and accuracy reached 76.1 percent.
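Calibration, in this sense, is the gap between average stated confidence and the actual hit rate. The sketch below shows one simple way to compute that gap for a set of resolved yes/no forecasts; the data are invented for illustration, and this is not the study’s actual scoring code.

```python
# Minimal calibration check: compare mean stated confidence against the
# fraction of forecasts that turned out to be correct. Hypothetical data.

forecasts = [
    # (stated confidence in the predicted outcome, prediction was correct?)
    (0.80, True),
    (0.60, False),
    (0.70, True),
    (0.55, True),
    (0.65, False),
]

mean_confidence = sum(conf for conf, _ in forecasts) / len(forecasts)
hit_rate = sum(correct for _, correct in forecasts) / len(forecasts)

print(f"Mean confidence: {mean_confidence:.1%}")              # 66.0%
print(f"Hit rate:        {hit_rate:.1%}")                     # 60.0%
print(f"Overconfidence:  {mean_confidence - hit_rate:+.1%}")  # +6.0%
```

By this measure, the study’s forecasters ended up remarkably well calibrated: a gap of only a few tenths of a percentage point by the final month.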

“What made our forecasters good was not so much that they always knew what would happen, but that they had an accurate sense of how much they knew,” the study concluded. “We see potential value not only in forecasting world events for intelligence agencies and governmental policy-makers but for innumerable private organizations that must make important strategic decisions based on forecasts of future states of the world,” the researchers wrote.
