Can these questions continue to make political polls more accurate?
Polls aimed at predicting election results are an American tradition, and innovative methods like those employed by the USC Dornsife Daybreak Poll could make them more accurate. (Image Source: iStock.)

An independent task force recently evaluated nearly 2,900 presidential polls from 2020 and concluded they were the least accurate in 40 years — but they didn’t consider innovative polling methods like the one used in USC Dornsife Daybreak Polls. [5 min read]
By Jim Key

The day before last November’s presidential election, two types of findings from the USC Dornsife Daybreak Poll were released. Both predicted a popular vote victory for Joe Biden, but the finding from a relatively new poll question proved to be a much more accurate predictor of the election outcome than the traditional question of “who do you plan to vote for?”

Could this innovative methodology, which asks people who their friends and family plan to vote for, be the key to more accurate polling results?

The primary finding from the final USC Dornsife Daybreak Poll of 2020, based on data from the traditional polling question, was that Joe Biden would defeat Donald Trump by a margin of 10 percentage points, 53% to 43%. Biden’s actual margin of victory was 4.5 points.

Most other polls underestimated Trump's support as well. A study commissioned by the American Association of Public Opinion Research (AAPOR), which evaluated nearly 2,900 presidential polls conducted during 2020, concluded they suffered from errors of "unusual magnitude," the highest in 40 years. On average, the polls overstated Biden's margin of victory by 3.9 points. Polls tracking state-level races were even worse.

Why were they so far off? The independent task force that conducted the analysis was stymied. “Identifying conclusively why polls overstated the Democratic-Republican margin, relative to the certified vote, appears to be impossible with the available data,” the report says.

“There was nothing usual about the 2020 pandemic election,” said Jill Darling, survey director for the USC Dornsife Center for Economic and Social Research (CESR) and its Daybreak Poll. “One of the many differences was that voting by mail increased exponentially, but only among certain groups, while the U.S. Postal Service faced unprecedented delivery challenges. That could be one of many factors that contributed to it being such a terrible year for traditional polling methodologies.”

New questions, better accuracy

The AAPOR task force didn’t review the finding from a relatively new polling method that proved to be much more accurate than most 2020 presidential polls.

More than 10 weeks before the 2020 election, CESR announced that it would share data from the traditional “who will you vote for?” polling question in its Daybreak Poll as well as from questions that asked survey respondents the probability they will vote for each presidential candidate and the percentage of their family and friends (that is, their “social circle”) who will vote for each candidate.

This social circle methodology for predicting election outcomes was developed and previously tested by a team of researchers from USC, the Santa Fe Institute, and the Massachusetts Institute of Technology.
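At its core, the social circle question yields one vote-share estimate per respondent, which can then be averaged across the sample. The sketch below illustrates that aggregation step; the function name, data layout, and uniform default weighting are illustrative assumptions, not the Daybreak Poll's actual pipeline.

```python
# Hypothetical sketch: aggregate "social circle" poll responses.
# Each respondent reports the percentage of their friends and family
# expected to vote for each candidate; the poll-level estimate is a
# (possibly survey-weighted) average of those percentages.

def social_circle_estimate(responses, weights=None):
    """Average respondents' reported social-circle vote shares.

    responses: list of dicts mapping candidate -> percent (0-100)
    weights:   optional list of survey weights, one per respondent
    """
    if weights is None:
        weights = [1.0] * len(responses)  # unweighted by default
    total_w = sum(weights)
    candidates = {c for r in responses for c in r}
    return {
        c: sum(w * r.get(c, 0.0) for r, w in zip(responses, weights)) / total_w
        for c in candidates
    }

# Example with made-up numbers:
sample = [
    {"Biden": 60, "Trump": 40},
    {"Biden": 45, "Trump": 55},
    {"Biden": 50, "Trump": 45},
]
estimate = social_circle_estimate(sample)
```

Because each respondent is effectively reporting on many people, a modest sample can reflect a much larger slice of the electorate, which is part of the intuition the researchers describe below.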

The evening before the 2020 election, the research team that developed the social circle question posted a prediction of the election outcome online. Their prediction for Biden's victory over Trump, based on an analysis of data through that day, was just 1 point off the actual margin, while 2020 polls on average were off by 3.9 points.

It wasn’t the first time this methodology had proven to be effective. In the two previous national elections (2018 and 2016), the social circle methodology was a better predictor of election results than asking people about their voting intentions. It also more accurately predicted national elections in France (2017), the Netherlands (2017), and Sweden (2018).

The results were even more accurate when combined with data from other nontraditional questions, such as who the survey respondents think will win the election.
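One simple way to combine such signals is a linear blend of the social-circle estimate with the share of respondents expecting each candidate to win. The blend weight and the made-up numbers below are purely illustrative assumptions; the researchers' actual combination method is not described in detail here.

```python
# Hypothetical sketch: blend two nontraditional polling signals.
# social_circle:   per-candidate vote shares from the social-circle question
# win_expectation: share of respondents who expect each candidate to win
# alpha:           blend weight (0.5 = equal weight), an assumed value

def blend_signals(social_circle, win_expectation, alpha=0.5):
    """Linear blend of two per-candidate estimates (both in percent)."""
    return {
        c: alpha * social_circle[c] + (1 - alpha) * win_expectation[c]
        for c in social_circle
    }

social = {"Biden": 52.0, "Trump": 45.0}  # made-up social-circle shares
expect = {"Biden": 55.0, "Trump": 45.0}  # made-up "who will win" shares
combined = blend_signals(social, expect)
```

In practice the weight would be chosen by validating against past election results rather than fixed at 0.5.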

“We weren’t particularly surprised to see how accurate our prediction was, but we are gratified to have more evidence that our methodology is a consistent predictor of election outcomes,” said Wändi Bruine de Bruin, Provost Professor of Public Policy, Psychology, and Behavioral Sciences at the USC Dornsife College of Letters, Arts and Sciences and USC Price School of Public Policy. She’s part of the team that developed and has been testing the social circle methodology.

Why is it proving to be so accurate?

“We believe there are three reasons,” said Santa Fe Institute External Professor Henrik Olsson, a cognitive scientist and another member of the team that developed the methodology. “First, when we ask people about the voting intentions of their social network, we’re implicitly increasing the poll’s sample size, collecting data associated with potential voters who might have declined to participate in the poll or are difficult to reach.”

Previous research conducted by Olsson and Santa Fe Institute Professor of Human Social Dynamics Mirta Galesic, who also helped develop the social circle question, suggests that an individual’s predictions regarding how their friends and family will vote are generally accurate.

“Second, there’s evidence some people misreport their true voting intentions to pollsters or decline to participate in election polls because of embarrassment, fear of harassment, or willful obstruction of pollsters,” said Galesic. “Finally, we learned that people’s estimates of how their social circle plans to vote today can provide hints about how social influences will change their voting intentions before the election.”

Future of the social circle question

The group that produced the AAPOR report on last year’s political polling wasn’t charged with evaluating the overall performance of specific polls, but its silence on potential remedies for the inaccuracies surprised Olsson, Galesic and their colleagues.

Encouraged by last year’s results, the researchers are looking for ways to integrate data from other questions into algorithms that could result in even better predictions of election outcomes, including the Electoral College.

Last year they combined data from the social circle question with responses from a question that asked survey respondents which presidential candidate they thought would win their state. Since the Daybreak Poll is a national poll that’s not designed to predict election outcomes in individual states, they had very limited state-level data and mistakenly predicted a narrow Electoral College victory for Trump. In the future, larger state-level polls that incorporate the social circle question could yield more accurate results.

“Continued research on these new methods and questions could result in even greater confidence regarding their predictive capability,” said Galesic, “helping the polling community be more accurate and regain the public’s trust in predicting elections.”