Election polls are 95% confident but only 60% accurate, Berkeley Haas study finds

Newswise — How confident should you be in election polls? Not nearly as confident as the pollsters claim, according to a new Berkeley Haas study.

Most election polls report a 95% confidence level. Yet an analysis of 1,400 polls from 11 election cycles found that the actual outcome falls within the poll's reported margin of error just 60% of the time. And that's for polls conducted just one week before an election; accuracy drops even further for polls taken earlier.

“If you’re confident, based on polling, about how the 2020 election will come out, think again,” said Berkeley Haas Prof. Don Moore, who conducted the analysis with former student Aditya Kotak, BA 20. “There are a lot of reasons why the actual outcome could be different from the poll, and the way pollsters compute confidence intervals does not take those issues into account.”

Many people were surprised when President Donald Trump beat Hillary Clinton in 2016 after trailing her in the polls, and speculated that polls were getting less accurate or that the election was so unusual it threw them off. But Moore and Kotak found no evidence of declining accuracy in their sample of polls going back to 2008; rather, they found consistently overconfident claims on the part of pollsters.

“Perhaps the way we interpret polls as a whole needs to be adjusted, to account for the uncertainty that comes with them,” Kotak said. In fact, to be 95% confident, polls would need to double their reported margins of error, even just a week before election day, the analysis concluded.

As a statistics and computer science student on an undergraduate research apprenticeship in Moore's Accuracy Lab during the 2019 presidential primary, Kotak grew curious about the confidence intervals included with polls. He noticed that polls' margins of error were frequently mentioned only as footnotes in news articles and election forecast methodologies, and he wondered whether polls were as accurate as those margins implied they should be.

Kotak brought the idea to Moore, who studies overconfidence from both a psychological and statistical perspective. Much of the research on polling accuracy considers only whether the poll correctly called the winner. To gauge poll confidence, they decided to take a retroactive look at polls based on how long before an election they were conducted, and consider not whether a candidate won or lost, but whether the actual share of the vote fell within the margin of error the poll had reported. For example, if a poll showed that 54% of voters favored a candidate, and it had a 5% margin of error, it would be accurate if the candidate garnered 49% to 59% of the vote, but would be a miss if the candidate ended up with more than 59% or less than 49% of the vote.
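The accuracy criterion described above is a simple interval check. A minimal sketch in Python (the function and variable names are illustrative, not taken from the study):

```python
def poll_hit(predicted_share: float, margin_of_error: float, actual_share: float) -> bool:
    """Return True if the actual vote share falls within the poll's
    reported margin of error -- the study's accuracy criterion."""
    return abs(actual_share - predicted_share) <= margin_of_error

# The article's example: a poll at 54% with a 5-point margin of error.
poll_hit(54.0, 5.0, 57.0)  # True: 57% lies within 49-59%
poll_hit(54.0, 5.0, 61.0)  # False: the candidate exceeded the upper bound
```

Note the criterion is symmetric: a candidate who underperforms the lower bound counts as a miss just as much as one who overshoots the upper bound.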

Moore and Kotak obtained 1,400 polls conducted ahead of the general elections of 2008, 2012, and 2016, as well as the Democratic presidential primaries in Iowa and New Hampshire from 2008 and 2016 and the Republican primaries in the same states from 2012 and 2016. Because some polls asked about multiple candidates, the sample included more than 5,000 individual results on how people said they'd vote for particular candidates, along with the accompanying margins of error.

Analyzing the polls in seven-day batches, they found a steady decline in accuracy the farther from an election the poll was conducted, with only about half proving to be accurate 10 weeks before an election. This makes sense, since unforeseen events occur—such as former FBI director James Comey announcing an investigation into Clinton’s emails just a week before the 2016 presidential election. Yet most polls, even weeks out, reported the industry standard 95% confidence interval.
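The batching step can be sketched as follows: group polls into seven-day buckets by days before the election, then compute the share of polls in each bucket whose margin of error contained the outcome. This is an illustrative reconstruction, not the authors' actual code, and the tuple layout is an assumption:

```python
from collections import defaultdict

def hit_rate_by_week(polls):
    """Compute per-week poll accuracy.

    Each poll is a (days_before_election, predicted_share, margin, actual_share)
    tuple. Returns {week_bucket: fraction of polls whose margin of error
    contained the actual result}, where bucket 0 is the final week."""
    buckets = defaultdict(list)
    for days_out, predicted, margin, actual in polls:
        week = days_out // 7  # 0 = final week, 1 = two weeks out, ...
        buckets[week].append(abs(actual - predicted) <= margin)
    return {week: sum(hits) / len(hits) for week, hits in sorted(buckets.items())}

# Hypothetical data: two final-week polls (one hit, one miss) and one
# three-weeks-out miss.
hit_rate_by_week([(3, 54, 5, 57), (5, 50, 3, 56), (20, 48, 4, 53)])
# -> {0: 0.5, 2: 0.0}
```

A pattern like this would reproduce the paper's headline numbers: a roughly 60% hit rate in the final week, declining toward 50% ten weeks out.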

Sampling error and confidence intervals

The confidence interval quantifies how sure one can be that the sample of people surveyed reflects the whole voter population. A 95% confidence interval, for example, means that if the same sampling procedure were repeated 100 times, about 95 of the resulting intervals would contain the true value for the full voter population. Therein lies the problem, however.

The confidence level takes into account "sampling error," a statistical term that quantifies how much, by pure chance, the sample may vary from the larger population of voters from which it was drawn. For example, surveying too few voters increases the sampling error. But sampling error does not cover any other kind of error, such as surveying the wrong set of people to begin with.
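The margin of error pollsters report is typically the textbook sampling-error formula for a proportion. A short sketch (standard statistics, not code from the study) shows why a 1,000-person poll reports a roughly 3-point margin:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Sampling-error margin for a proportion p estimated from a simple
    random sample of n respondents, at ~95% confidence (z = 1.96).
    This captures ONLY sampling error -- not nonresponse, a bad sampling
    frame, or voters changing their minds."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll at 50% support: about a 3.1-point margin.
round(100 * margin_of_error(0.5, 1000), 1)  # -> 3.1
```

Quadrupling the sample only halves this number, and no sample size shrinks the non-sampling errors the study points to, which is why the authors conclude the reported margins would need to roughly double to deliver genuine 95% coverage.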

“People often forget that margins of error for polls only capture the statistical sources of error,” said David Broockman, an associate professor in Berkeley’s Department of Political Science. “This analysis shows just how large the remaining non-statistical sources of error are in practice.”

Added Prof. Gabriel Lenz, also of Berkeley Political Science, “This is a fascinating analysis, and future work could sort out the sources of the inaccuracy, such as low-quality pollsters, difficulty screening likely voters, last-minute changes in voter intentions, and more.”

It’s easy to take sampling error into account in polling statistics, but much harder to account for all the other unknowns, Moore said. It’s a lesson that goes far beyond polling.

“Because we base our beliefs on imperfect and biased samples of information, sometimes we will be wrong for reasons that we did not anticipate,” he said.

