August 13, 2024

Polling 101: How accurate are election polls?

Featured Researcher: Don A. Moore, Professor, Management of Organizations


Video by Jordan Joseffer

Image: A pile of 2024 election buttons with American flag backdrops. (AdobeStock)

Between the assassination attempt on former President Donald Trump, the announcement by President Joe Biden that he’ll step aside, the candidacy of Vice President Kamala Harris, and her announcement of running-mate Tim Walz, the 2024 presidential campaign so far has been full of unexpected plot twists. 

And there are still 12 weeks to go before the general election.

So what to make of the myriad election polls offering near-daily updates on which candidate seems to be gaining—or losing—an edge? 

We sought insights from Professor Don Moore, the Lorraine Tyson Mitchell Chair in Leadership and Communication and an expert on overconfidence, and Aditya Kotak, BA 20 (Statistics, CS), a former research apprentice in Moore’s Accuracy Lab. Kotak teamed up with Moore after growing curious about the confidence intervals that are often listed in fine print below polls. They published their analysis in 2020.  

First off, brief us on your headline finding.

Don Moore: Most polls report a 95% confidence interval. But we found that the actual election outcome lands inside that interval only 60% of the time, and that's just a week before the election. Further out, the hit rate falls even lower: polls taken a year before the election stand only a 40% chance of capturing the actual vote share inside their 95% confidence intervals.
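To make that "hit rate" concrete: checking a poll's coverage amounts to asking whether the actual vote share fell inside the poll's reported interval, then averaging over many polls. Here is a minimal sketch in Python using made-up numbers (the study itself analyzed thousands of real polls):

```python
# Minimal coverage check with hypothetical poll data (all values in percent).
# A poll "hits" when the actual vote share falls inside the reported
# 95% confidence interval: estimate ± margin of error.
polls = [
    # (poll estimate, margin of error, actual vote share)
    (52.0, 3.0, 54.5),
    (48.0, 3.5, 47.0),
    (51.0, 3.0, 46.5),
    (55.0, 2.5, 53.8),
    (44.0, 3.0, 49.2),
]

hits = sum(est - moe <= actual <= est + moe for est, moe, actual in polls)
print(f"Nominal coverage: 95%; empirical coverage: {hits / len(polls):.0%}")
```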

Why do polls include a margin of error and a confidence interval, and what do they tell us? 

Aditya Kotak: Both the margin of error and the confidence interval capture "sampling error," which reflects how much the poll's sample might differ from the true population of voters. The confidence level tells you how often the true result is expected to fall within the range defined by the margin of error. It's important to note, though, that the margin of error only reflects expected imperfections in random sampling and ignores other sources of error.
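For readers who want the arithmetic: under simple random sampling, the margin of error comes from the normal approximation to the sampling distribution of a proportion. A quick sketch of that standard formula (not code from the study):

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error from sampling error alone (z = 1.96).

    p_hat: candidate's estimated vote share (0 to 1); n: sample size.
    Ignores non-sampling sources of error, as noted above.
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A poll of 1,000 respondents with a candidate at 50%:
moe = margin_of_error(0.50, 1000)
print(f"±{moe:.1%}")  # roughly ±3.1 percentage points
```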

You describe sampling error as a “statistical error.” What are some of the non-statistical errors that can make a poll less accurate?

Moore: Sometimes there is bias generated by the method by which pollsters reach respondents. If it’s random-digit dialing, for instance, it will only reach people who have phones and who answer them when pollsters call. If those people are different from those who vote, then the poll’s predictions might be biased.  

In your paper, you conclude that in order for polls to be 95% accurate just a week before an election, they should double the margin of error they report. Give an example.

Kotak: Let's say a candidate is polling at 54% a week before the election, with a margin of error of plus or minus 3%. The 95% confidence interval implies a 95% chance that the candidate will win 51% to 57% of the vote. Our analysis shows that in reality, you'd have to double that margin of error to plus or minus 6% to get 95% accuracy. That means the outcome is less certain: the candidate could plausibly get anywhere from 48% to 60% of the vote.
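In code form, the suggested correction is a one-line adjustment: double the reported margin before building the interval. A sketch using the numbers above:

```python
estimate, reported_moe = 54.0, 3.0  # percent; poll taken a week out

# Interval implied by the poll's reported margin of error
reported = (estimate - reported_moe, estimate + reported_moe)
# Interval widened per Moore and Kotak's finding: double the margin
adjusted = (estimate - 2 * reported_moe, estimate + 2 * reported_moe)

print(f"Reported 95% interval: {reported[0]:.0f}% to {reported[1]:.0f}%")  # 51% to 57%
print(f"Adjusted 95% interval: {adjusted[0]:.0f}% to {adjusted[1]:.0f}%")  # 48% to 60%
```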

Don, as an expert on overconfidence, why do you think the confidence intervals reported by pollsters are so consistently overconfident? Why don’t they increase the margins of error?

Moore: I think poll results are overconfident for many of the same reasons that everyday human judgments are overconfident: We are wrong for reasons that we fail to anticipate. The electorate is changing, and prior elections are imperfect predictors of future elections. When we are wrong about the future and don’t know it, we will make overconfident predictions.  

How can the voting public gauge which polls are more accurate?

Moore: The voting public will have difficulty gauging poll accuracy; even pollsters have difficulty gauging poll accuracy! But we can tell you what the voting public should NOT do: selectively believe the poll results that favor their political preferences. If a poll is inaccurate, it is less likely to be due to intentional bias by pollsters and more likely due to the inherent uncertainty in polling.

You looked at polls from the 2008, 2012, and 2016 general elections, as well as the primaries in those years. Did you find any evidence that poll accuracy has declined over time?

Kotak: No, we did not see any statistically significant difference in individual poll accuracy across election years. When breaking out our data by year, each election cycle showed the same pattern of roughly 60% accuracy a week before an election, with no significant variation from one year to another.

Do your findings apply to aggregated polls, such as those collected by FiveThirtyEight? Should we be more confident in those averages?

Moore: No, and this is an important point. Poll aggregators like FiveThirtyEight try to adjust for the unreliability in individual polls. 

Kotak: Often, these aggregators use custom methodology to account for individual polling inaccuracies. Our research suggests that such adjustments are indeed needed, but we did not review the aggregators' methodologies, so we can't say whether their averages are more or less reliable.

Given the uncertainty in this year’s election, do you expect the polls to be even less accurate than usual? 

Moore: No, we have little reason to expect the problem to be getting worse. Polls have always been flawed. Don’t listen to the vapid “horse race” coverage of candidates’ standing in the polls. Instead, pay attention to the candidates’ stances on the issues and plans for what they will do in office.

Video Transcript  

Don Moore: I’m Don Moore. I’m a professor here at the Haas School of Business.

Aditya Kotak: I’m Aditya Kotak. I’m a Cal Class of 2020 alum in statistics.

Don: You are gonna see a lot of results from polls—political polls—collected between now and the election in November. We were interested in the accuracy of polling results. And so we did this study back in 2020.

Aditya: The question we sought to ask was: How often does the true election result fall within the 90% or 95% confidence interval that these polls usually publish when they include a margin of error? So what we did is we looked through over 6,000 different polls published in the past several election cycles and did an analysis to see how many times the polls' margin of error actually included the final election outcome.

And what we found was that just in the weeks prior to an election, a 95% confidence interval really only captured the true election outcome 60% of the time.

Don: For a poll to be accurate, its 95% confidence interval should include the truth 95% of the time.  That confidence interval reflects the uncertainty around the poll’s results that come from sampling just a subset of the larger voting population.

Now, our results suggest that in order to capture the actual election result 95% of the time, a poll taken a week before the election would have to double its margin of error.

So when you see a poll result that indicates it’s accurate to plus or minus 3 percentage points, which is standard for a poll taken with approximately 800 likely voters, you would need to grow that for a poll taken the week before the election to at least 6 percentage points above or below the forecast result.

And the further you go ahead of the election, the wider you’d have to make that confidence interval such that it includes the actual election result 95% of the time.

Aditya: So if you're looking at a poll and asking yourself how confident you should be: well, as we know, a week is an eternity in politics, and more often than not you should stay skeptical when looking at those poll results.

 
