Forecasters often overestimate how good they are at predicting geopolitical events—everything from who will become the next pope to who will win the next national election in Taiwan.
But Berkeley-Haas management professor Don Moore and a team of researchers found a way to dramatically improve forecast accuracy: training ordinary people to become "superforecasters" whose predictions grow both more confident and more accurate over time.
The team, working on The Good Judgment Project, had the perfect opportunity to test its forecasting methods during a four-year, government-funded geopolitical forecasting tournament sponsored by the United States Intelligence Advanced Research Projects Activity (IARPA). The tournament, which began in 2011, aimed to improve geopolitical forecasting and intelligence analysis by tapping the wisdom of the crowd. Moore’s team proved so successful in the first years that it bumped the other four teams from the competition, becoming the only project still funded.
Some of the results are published in the Management Science article “Confidence Calibration in a Multiyear Geopolitical Forecasting Competition.” The research combines best practices from psychology, economics, and behavioral science. Moore’s co-authors include husband-and-wife team Barbara Mellers and Philip Tetlock of the University of Pennsylvania, who co-lead the Good Judgment Project with Moore; Lyle Ungar and Angela Minster, also of the University of Pennsylvania; Samuel A. Swift, a data scientist at investment strategy firm Betterment; Heather Yang of MIT; and Elizabeth Tenney of the University of Utah.
The study differs from previous research on overconfidence because it examines forecast accuracy over time, using a huge and unique data set gathered during the tournament: 494,552 forecasts by 2,860 forecasters predicting the outcomes of hundreds of events.
Wisdom of the crowd improves forecast accuracy
Study participants, a mix of scientists, researchers, academics, and other professionals, weren’t experts in what they were forecasting but rather educated citizens who stayed current on the news.
Their training included four components:
- Considering how often, and under what circumstances, events similar to the one in question had taken place.
- Averaging across opinions to exploit the wisdom of the crowd (a simple version is sketched after this list).
- Using mathematical and statistical models when applicable.
- Reviewing biases in forecasting—in particular the risk of both overconfidence and excess caution in estimating probabilities.
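The “averaging across opinions” step can be illustrated with a minimal sketch in Python, using made-up numbers rather than the study’s data; the Good Judgment Project’s actual aggregation methods were more sophisticated than a simple unweighted mean.

```python
# Minimal sketch of wisdom-of-the-crowd averaging (hypothetical numbers).
# Each value is one forecaster's probability that a given event will occur.
individual_forecasts = [0.55, 0.70, 0.40, 0.65, 0.60]

# The crowd forecast is the unweighted mean of the individual probabilities.
crowd_forecast = sum(individual_forecasts) / len(individual_forecasts)

print(f"Crowd forecast: {crowd_forecast:.2f}")  # prints 0.58
```

Even a simple average like this tends to cancel out individual forecasters’ idiosyncratic errors, which is why the training emphasized it.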
Over time, this group answered a total of 344 specific questions about geopolitical events. All of the questions had clear resolutions, needed to be resolved within a reasonable time frame, and had to be relatively difficult to forecast—“tough calls,” as the researchers put it. Questions where the event’s probability was judged to fall below 10 percent or above 90 percent were deemed too easy for the forecasters.
The majority of the questions targeted a specific outcome, such as “Will the United Nations General Assembly recognize a Palestinian state by September 30, 2011?” or “Will Cardinal Peter Turkson be the next pope?”
The researchers wanted to measure whether participants considered themselves experts on the questions, so they asked them to assess their own expertise. In the first year, participants rated their expertise on each question on a 1-to-5 scale. In the second year, they placed themselves in “expertise quintiles” relative to others answering the same questions. And in the final year, they indicated their confidence level, from “not at all” to “extremely,” for each forecast.
Training: Astoundingly effective
By the end of the tournament, the researchers found something surprising. On average, group members reported that they were 65.4 percent sure they had correctly predicted what would happen. In fact, they were correct 63.3 percent of the time, for an overall overconfidence of just 2.1 percentage points. “Our results find a remarkable balance between people’s confidence and accuracy,” Moore said.
In addition, as participants gathered more information, both their confidence and their accuracy improved.
In the first month of forecasting during the first year, confidence was 59 percent and accuracy was 57 percent. By the final month of the third year, confidence had increased to 76.4 percent and accuracy reached 76.1 percent.
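The calibration comparison behind these numbers is simple arithmetic: average the stated confidence across resolved forecasts, compute the fraction that turned out to be correct, and take the difference. A minimal sketch, assuming invented data rather than the tournament’s:

```python
# Sketch of a calibration check (hypothetical data, not the study's).
# Each record pairs a stated confidence with whether the prediction came true.
forecasts = [(0.65, True), (0.70, False), (0.60, True), (0.80, True), (0.55, False)]

mean_confidence = sum(conf for conf, _ in forecasts) / len(forecasts)   # 0.66
accuracy = sum(hit for _, hit in forecasts) / len(forecasts)            # 0.60

# Overconfidence is the gap between stated confidence and realized accuracy.
print(f"confidence {mean_confidence:.1%}, accuracy {accuracy:.1%}, "
      f"overconfidence {mean_confidence - accuracy:+.1%}")
```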
The researchers called the training the group received “astoundingly effective.”
“What made our forecasters good was not so much that they always knew what would happen, but that they had an accurate sense of how much they knew,” the study concluded.
The research also broke new ground, as it is quantitative in a field that generally produces qualitative studies.
“We see potential value not only in forecasting world events for intelligence agencies and governmental policy-makers, but for innumerable private organizations that must make important strategic decisions based on forecasts of future states of the world,” the researchers concluded.