Why the USC Dornsife/L.A. Times presidential poll is unlike other polls
The scientists behind the USC Dornsife/Los Angeles Times Presidential Election Daybreak Poll have answered a list of frequently asked questions about the national probability tracking poll. Many media observers have labeled the poll an “outlier” because its results have differed from other polls’ since its July debut.
The results of the Daybreak Poll are updated nightly and are publicly available for download from the election.usc.edu website. The poll is run in partnership with the L.A. Times and two USC Dornsife research centers: the Center for Economic and Social Research (CESR) and the Jesse M. Unruh Institute for Politics.
The scientists behind the poll who responded jointly to these questions are Dan Schnur, director of the Unruh Institute and assistant professor of the practice of political science; Arie Kapteyn, director of CESR and professor (research) of economics; and Jill Darling, CESR survey director.
What makes this a “probability” poll and is it unique?
The Daybreak Poll is a probability survey. Its methodology aims to provide a best estimate of how America plans to vote in the November election, based on the expressed intent of potential voters. Rather than simply recording a preference for one candidate, the probability approach measures participants’ level of certainty in their plans to vote and the intensity of their commitment to a candidate. The Daybreak Poll is one of only a few daily probability polls in the country.
What questions are asked of participants?
Most surveys ask respondents whom they will vote for. The Daybreak Poll instead asks eligible voters in the Understanding America Study election panel, online every day, what is the percent chance that:
- You will vote in the presidential election?
- You will vote for Hillary Clinton, Donald Trump or someone else?
- Clinton, Trump or someone else will win?
Participants answer each question on a 0 to 100 percent scale, rating their own chances of voting, their chances of voting for each candidate, and each candidate’s chances of winning. To learn more about what lies behind their likely vote, respondents are also asked one or two extra questions each week about their preferences or values.
Is the poll weighted? Why?
Yes. To ensure representativeness, results are weighted on demographic and socioeconomic variables, such as gender and race, drawn from the U.S. Census Current Population Survey, as well as on how participants said they voted in the 2012 presidential election. For example, the poll adjusts the proportions of men and women to 48 percent and 52 percent, respectively.
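As a simplified illustration of how such weights work, here is a minimal sketch of post-stratification on a single variable, using the 48/52 gender split above. The respondent records are hypothetical, and the real poll weights on several variables at once, as described in our methods document:

```python
from collections import Counter

# Hypothetical respondent records; the real panel is sampled nationwide.
respondents = [
    {"id": 1, "gender": "male"},
    {"id": 2, "gender": "male"},
    {"id": 3, "gender": "male"},
    {"id": 4, "gender": "female"},
    {"id": 5, "gender": "female"},
]

# Target shares, e.g. from the Current Population Survey: 48% men, 52% women.
target = {"male": 0.48, "female": 0.52}

# Observed shares in the sample.
counts = Counter(r["gender"] for r in respondents)
n = len(respondents)

# A respondent's weight is the target share divided by the sample share, so
# members of underrepresented groups count for more in the results.
for r in respondents:
    r["weight"] = target[r["gender"]] / (counts[r["gender"]] / n)

print([(r["gender"], round(r["weight"], 2)) for r in respondents])
# Men are 60% of this toy sample but 48% of the target, so each man gets
# weight 0.8; each woman gets 1.3.
```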
Who are the poll’s participants?
More than 3,200 participants in the larger Understanding America Study are on the election panel for the Daybreak Poll. Of those, one-seventh (about 450 people) are invited each day to participate, so that every panelist is asked once a week and the daily samples stay balanced. The participants are 18 and older, sampled to be representative of all eligible voters across the United States.
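A minimal sketch of such a one-in-seven rotation is below; the modulo-based cohort assignment is a hypothetical stand-in for the actual panel-management system:

```python
# Hypothetical one-in-seven rotation: each panelist lands in one of seven
# daily cohorts and is invited on that cohort's day each week.
PANEL_SIZE = 3200
panel_ids = range(PANEL_SIZE)

def invited_on(day_index: int):
    """Panelist IDs invited on a given day (cohorts cycle weekly)."""
    return [pid for pid in panel_ids if pid % 7 == day_index % 7]

# Each day's list is about one-seventh of the 3,200-person panel, and over
# any seven-day window every panelist is invited exactly once.
print(len(invited_on(0)))
```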
What makes this a “tracking” poll?
The poll checks in once a week with the same group of people to measure changes in their opinions up to the point when they vote in November. Results are updated online nightly at midnight and are based on a continuous, rolling seven-day average.
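A minimal sketch of the rolling seven-day average, assuming one aggregate estimate per day (the numbers are hypothetical):

```python
# Hypothetical daily estimates of one candidate's support, in percent.
daily_estimates = [44.8, 45.1, 44.6, 45.4, 45.9, 45.2, 44.9, 45.6]

def rolling_average(values, window=7):
    """Average of the most recent `window` values for each day we can fill."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

print(rolling_average(daily_estimates))
# Each nightly number blends the last seven days of responses, which
# smooths day-to-day noise but makes the published line respond to real
# shifts with a short lag.
```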
Does the poll use a likely voter model?
No; however, more weight is given to voters who express a greater degree of certainty that they will vote in November. Rather than assume that every voter is equally sure of his or her allegiance to one candidate or the other and is just as certain to vote, the Daybreak Poll asks participants to rate their likelihood of voting and of voting for each candidate. To estimate the vote, we weight each participant’s stated chance of supporting a candidate by his or her stated chance of voting.
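As a simplified sketch of this calculation, the example below weights each hypothetical respondent’s stated chance of supporting a candidate by his or her stated chance of voting, along with the survey weight; the actual estimator is specified in our methods document:

```python
# Hypothetical respondents: (survey_weight, p_vote, p_clinton, p_trump),
# with all probabilities expressed on a 0-1 scale.
respondents = [
    (1.0, 0.90, 0.60, 0.40),
    (1.3, 0.50, 0.20, 0.70),
    (0.8, 1.00, 0.80, 0.10),
]

def estimate(candidate_index):
    """Share of the expected vote going to one candidate."""
    num = sum(w * pv * probs[candidate_index]
              for (w, pv, *probs) in respondents)
    den = sum(w * pv for (w, pv, *probs) in respondents)
    return num / den

print(f"Clinton {estimate(0):.1%}, Trump {estimate(1):.1%}")
# A respondent who is sure to vote moves the estimate far more than one
# who gives herself a 50 percent chance of turning out.
```

Note that the candidate shares need not sum to 100 percent, because respondents can also assign some chance to voting for someone else.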
Sometimes the Daybreak Poll results do not resemble other poll results. Why?
Like any other election poll, the Daybreak Poll is designed to estimate the outcome of the presidential election. But its innovative approach makes direct comparison with other polls’ results difficult.
Some of these differences include:
- Our questions allow voters to express a greater level of uncertainty than traditional poll questions, which force respondents to choose a single candidate or say that they “don’t know.” For example, in the Daybreak Poll, a respondent might say that they are 60 percent for Clinton and 40 percent for Trump.
- We include voters who did not vote in the prior presidential election, as well as others who may be considered “less likely to vote.” Many traditional polls exclude some or all of these voters. We weight our results based on turnout in the 2012 election to account for differential response rates. This approach served us well in predicting the 2012 outcome: our final prediction was a 3.32-point advantage for Obama, and the final tally of the popular vote showed a 3.85-point advantage. Our prediction gave Obama at least a couple of points more than most of the other tracking polls did.
- The same panel of respondents answers the survey questions once a week, versus traditional polls, which collect data periodically from newly drawn samples.
- Our poll is internet-based. Studies have indicated that people are sometimes more honest about their voting preferences when completing internet surveys than they are in phone surveys.
- We randomly recruit participants from households nationwide, providing internet service and an internet-connected tablet computer to those who do not already have them. Our sample therefore includes individuals who cannot be represented in web-based polls that do not provide such access.
- Our participants can answer poll questions at any time of the day or night, and we give them seven days to do so; thus, our sample is likely to include more voters who are difficult to reach by phone during traditional telephone polling hours.
Why are there temporary “bumps” in candidate support in the subgroup charts? Why do the gray bars increase?
The Daybreak Poll uses a complex weighting scheme to keep results as representative of the population as possible, so underrepresented groups are assigned greater weights than overrepresented groups. If members of an underrepresented group change their voting intention or vote intensity, the results can move more than when other respondents change their preferences. The charts take this added variability into account. An example can be seen in our African-American voter chart: if respondents who carry higher weights change their answers, the chart may show a bump or a dip while the width of the confidence interval (the gray zone between the red and blue lines) increases.
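A toy example with hypothetical weights and answers shows why a single heavily weighted respondent can visibly move a small subgroup’s line:

```python
# Hypothetical (survey_weight, p_clinton) pairs for a small subgroup.
subgroup = [
    (1.0, 0.9), (1.0, 0.85), (1.0, 0.9), (4.0, 0.9),
]

def weighted_mean(rows):
    """Weighted average of the subgroup's stated candidate probabilities."""
    return sum(w * p for w, p in rows) / sum(w for w, p in rows)

print(f"before: {weighted_mean(subgroup):.1%}")

# The heavily weighted member revises her answer from 0.9 to 0.3 ...
subgroup[-1] = (4.0, 0.3)
print(f"after:  {weighted_mean(subgroup):.1%}")
# One person's change moves this subgroup's estimate by tens of points,
# which is why the gray confidence band widens when heavily weighted
# respondents dominate a small group.
```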
It is important to note that some of the subgroups we chart have a modest number of observations, so we advise against over-interpreting day-to-day changes. A small number of individuals within a group can appear to have sizable effects on that group’s line, but the overall prediction of the popular vote is much less affected. Indeed, the confidence interval for the overall prediction is much narrower, though still wide enough that we cannot confidently say who will win the popular vote. For more information on weighting in the Daybreak Poll, consult our detailed methods document.
Is the Daybreak Poll more accurate than other polls?
Four years ago, the team responsible for the Daybreak Poll developed the RAND Continuous Presidential Election Poll using the same methodology, with very successful results. We believe that our sample this year reliably represents the U.S. eligible-voter population and that our methods rest on sound science. Is our approach more or less reliable and accurate than other polls? Only time will tell.
Why are you doing this?
The Daybreak Poll is part of an ongoing experiment to study whether the methods we have helped pioneer can increase the accuracy of election polling. To this end, we practice full transparency: we publish our findings daily, provide full public access to our data even while the election is underway, and publish detailed documents that allow other scientists to reproduce our model or create their own, either in real time or later, once the election results are known. Other investigators have added questions to the poll that allow them to study election behavior; their findings may also contribute to the accuracy of election polling.
What if the poll turns out to be wrong?
Public opinion polling is not an exact science, but research can make it more so. Whether or not our poll results closely reflect the election outcome, we will review our methodology, improve it for the next election, and share our findings and our data with other researchers. One area of intense interest for us is the comparison of self-reported votes to actual votes, and we will follow up this poll with analysis in that area, among others.
Additional FAQs have also been published by the poll’s media partner, the Los Angeles Times.