USC Dornsife election poll that made headlines in 2016 relaunches with changes
By tracking the same group of poll takers, USC Dornsife’s Daybreak Poll aims to forecast who will be the next president of the United States. (Image Source: Pixabay.)

The Daybreak Poll got the 2016 presidential election outcome right, but for the wrong reasons. The poll’s survey director explains what’s new with this year’s effort.
By Jim Key

During the 2016 presidential election, USC Dornsife College of Letters, Arts and Sciences’ Daybreak Poll routinely made headlines with its predictions. Jill Darling, survey director for the USC Dornsife Center for Economic and Social Research, which conducted the Daybreak Poll, answers questions about polling for the 2020 election cycle.

What are USC Dornsife’s plans for election polling in 2020?

The Center for Economic and Social Research has relaunched its Daybreak Poll from 2016, with an updated model, to track voter support for presidential candidates. We’re also asking voters which party’s candidate they’ll vote for in congressional races — that’s known as the “generic congressional candidate” question.

Just like during the 2016 election, we’ll share updated results each morning through interactive tracking graphs on our website. In addition to the overall results, website visitors can view results by party registration or location.

In conjunction with the USC Dornsife Center for the Political Future, we’re also going to conduct three longer polls that will measure voter opinions and attitudes about candidates and key issues.

What’s the value in asking the “generic congressional candidate” question?

We ask people to tell us the political party of the congressional candidate they plan to vote for because that can help detect potential shifts in party representation in the House and Senate. We asked this question during the 2018 election and had tremendous success predicting the size of the Democratic Party’s new majority representation in the House of Representatives.

What are some of the things that distinguish the Daybreak Poll from other political polls?

We survey a panel of thousands of people who comprise a representative sample of the U.S. population and we continue to survey those same people over the course of the weeks before the election. That allows us to more accurately track changes in support for candidates and positions on issues. Many of the participants in our 2020 panel were with us in 2016, so we’re also able to track changes over four years.

The depth of what we know about the lives of the people we survey is also unique. Because we already know their age, education level, income, location, political leaning and more, we will be able to look, for example, at how those who have lost jobs or businesses to the pandemic are voting.

Finally, our polls are conducted online in English and Spanish. We even provide internet-connected devices to participants who don’t have one, which helps ensure our sample includes people from all walks of life. Answering questions online may also help some respondents answer more honestly than they would if they had to share their opinions with someone on the phone.

You recently announced that you’re adding two new polling methodologies this election cycle. Would you describe those?

Sure. This year we’re tracking voter support for presidential candidates using two additional methodologies to study how the results from these methods may differ from the probability-based method we used in 2016 and 2018. The two new methods involve:

  • Asking voters how they expect people in their social circles and state will vote. We’ll be doing this in partnership with a team of researchers from USC, the Santa Fe Institute, and MIT who obtained an NSF grant to study social circle voting. They’ve obtained very accurate results using this method in the past.
  • Asking voters whom they would vote for if the election were held today. This is similar to the question that most other pollsters ask. We also ask a follow-up question about which candidate they are leaning toward if they say they are undecided.

One thing that remains unchanged is our commitment to transparency. We’ll continue to provide detailed information about how our sample was created, how our poll is conducted and weighted, and we will make available our microdata in real time, which as far as we know makes us unique in the world of election polling.

During the last presidential election, the USC Dornsife/Los Angeles Times poll received lots of attention because it was one of the few that suggested a victory for Donald Trump. What did that poll get right and what did it get wrong?

In 2016, we predicted a win for Donald Trump, which would have been right if we had been estimating the electoral college, where Trump won the presidency despite losing the popular vote to Democratic nominee Hillary Clinton. However, our poll, like most other national polls that year, was modeling the outcome of the popular vote.

Have you changed your survey methodology since the 2016 election?

Yes, we have made adjustments. Our post-election investigation found that the incorrect outcome was due to an excess of rural voters in our sample, most of whom voted for Trump. After correcting our weighting procedures to bring urban and rural residents into correct alignment, our data modeled a 1 percentage point Clinton win. We used our revised model to predict the 2018 generic congressional election outcomes, with great success. This year our weighting procedures have also been revised to “trim” the weights, which will prevent rare individuals from carrying a very heavy weight in the sample.
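To illustrate the general idea behind weight trimming, here is a minimal sketch. The threshold and weights shown are hypothetical examples, not the Daybreak Poll’s actual procedure or values; the point is simply that capping extreme weights keeps a single rare respondent from dominating the sample.

```python
def trim_weights(weights, cap=5.0):
    """Cap each survey weight at `cap`, then rescale so total weight is unchanged.

    This is one common, simple form of weight trimming; `cap` is a
    hypothetical threshold chosen for illustration only.
    """
    total = sum(weights)
    trimmed = [min(w, cap) for w in weights]
    # Redistribute the trimmed-off weight proportionally across all respondents.
    factor = total / sum(trimmed)
    return [w * factor for w in trimmed]

# One rare respondent carries a very heavy weight of 12.0.
weights = [0.8, 1.1, 0.9, 12.0, 1.2]
trimmed = trim_weights(weights)
print(max(weights), round(max(trimmed), 2))
```

After trimming, no single respondent’s influence exceeds the capped level (up to the proportional rescaling), while the total weight of the sample is preserved.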

Do you include third-party candidates in your presidential polling?

Yes. In our traditional vote question — if the election were held today, for whom would you vote — we ask about the Democratic, Republican, Green and Libertarian candidates, identified by both name and party. In our tracking graphs, we combine the votes for the two third-party candidates with votes for any other candidates, because the number of people voting for each of those individual candidates is too small to show in our graphs.