‘Clear losers’ in 2020 election: Pollsters



[Editor’s note: This story originally was published by Real Clear Investigations.]

By Mark Hemingway
Real Clear Investigations

Weeks after the 2020 elections, several races have yet to be called, but a clear loser has emerged – pollsters. The polls understated the Republican share of the vote in 48 of 50 states and in 15 of the 16 Senate races, according to an analysis by Ohio State University political scientist Thomas Wood.

At the national level, President Trump outperformed the polls significantly. Major surveys in the last week before the election, including Economist/YouGov, CNBC/Change Research, Quinnipiac, and NBC News/Wall Street Journal, all had Joe Biden winning in a landslide – by 10 percentage points or more. Biden’s actual margin of victory in the popular vote is less than four points.

The polls also underestimated Trump’s support in most swing states, where around 100,000 votes in a few battlegrounds account for Biden’s victory. In particular, Biden won Wisconsin by some 20,000 votes — less than a single percentage point — but polls from Reuters/Ipsos and the New York Times/Siena had him up double digits. An ABC News/Washington Post poll conducted in the Badger State the week before the election had Biden up 17 points.

Polling failures were also manifest in key Senate races. Republican incumbent Susan Collins was trailing in all 14 major polls conducted in the closely watched Maine contest. A Quinnipiac poll in September had Collins down a whopping 12 points. She won by nine. North Carolina’s GOP incumbent, Thom Tillis, ended up with a clear win despite trailing in the last five polls taken.

In the House, the final projection of polling experts at FiveThirtyEight showed Democrats expanding their majority. Instead, Republicans have picked up nine seats and are leading in several of the still-undecided races.

There’s simply no way to spin the terrible performance of pollsters this year. “Let’s just call a spade a spade here: this was a bad polling error,” said New York Times polling reporter Nate Cohn. “It’s comparable to 2016 in size, but pollsters don’t have the excuses they did last time.”

The failures are particularly striking in how they echo those of 2016, which included leading election analysts as well as pollsters. On the eve of that election, the New York Times reported that state and national polls indicated Hillary Clinton had an 85% chance of beating Donald Trump; Reuters/Ipsos predicted she had a 90% chance of winning while election guru Larry Sabato estimated that Clinton would win 322 electoral votes.

These failures have put pollsters, polling experts, and election analysts on the defensive. Appearing on a podcast the day after the 2020 election, FiveThirtyEight founder Nate Silver was asked to address the online “rage.” Silver rose to prominence on the left-wing blog Daily Kos and worked with the Obama campaign, so when the polls turned out to be inaccurate, liberals felt betrayed. “If they’re coming after FiveThirtyEight, then the answer is f–k you, we did a good job!” Silver retorted.

Although polls are often invoked because they give political commentary an unearned sense of mathematical certainty, polling is both an art and a science. For instance, a typical poll involves getting responses from 1,000 registered voters, but randomly selecting 1,000 people is unlikely to produce a sample representative of millions of American voters. Once the responses are in, pollsters then balance the pool – “weighting” is the term of art for emphasizing certain demographics and characteristics among poll respondents, such as education, partisan affiliation, or race – to produce a sample more representative of the electorate. However, weighting polls is an inexact science and involves pollsters making subjective judgments – particularly if they don’t have a clear idea of the size of a particular demographic group, if the demographic data being used as a baseline is inaccurate, or if they make inaccurate assumptions about voter turnout.
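To make the weighting step concrete, here is a minimal sketch of post-stratification weighting in Python. Every figure in it (the sample split, the benchmark electorate shares, and the candidate-support rates) is hypothetical and chosen purely for illustration; none of it comes from any poll cited in this story.

    # Minimal sketch of poll weighting ("post-stratification").
    # All numbers below are hypothetical, for illustration only.
    from collections import Counter

    # Hypothetical raw sample: education level of 1,000 respondents
    respondents = ["college"] * 550 + ["non_college"] * 450

    # Assumed electorate benchmarks (e.g., from census data or a turnout model)
    population_share = {"college": 0.40, "non_college": 0.60}

    sample_share = {
        group: count / len(respondents)
        for group, count in Counter(respondents).items()
    }

    # Each respondent's weight is population share / sample share for their
    # group, so under-represented groups (here, non-college voters) count more.
    weights = {g: population_share[g] / sample_share[g] for g in sample_share}

    # Hypothetical unweighted support for a candidate within each group
    support = {"college": 0.42, "non_college": 0.55}

    unweighted = sum(support[g] * sample_share[g] for g in support)
    weighted = sum(support[g] * population_share[g] for g in support)

    print("weights:", weights)                       # non-college weighted up (~1.33)
    print(f"unweighted estimate: {unweighted:.1%}")  # 47.9%
    print(f"weighted estimate:   {weighted:.1%}")    # 49.8%

The sketch also shows where the subjectivity enters: if the assumed benchmark shares, or the turnout assumptions behind them, are off, the “corrected” topline can miss in exactly the way described above.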

Silver has been considered a leading polling authority since 2008, when he became a vocal proponent of using state polling averages to analyze presidential races. That year he correctly predicted the presidential winner in 49 states. In 2012, Silver got all 50 right. These feats created a surge of new interest in election data and brought vastly more media attention to polling.

Since 2012, however, polling hasn’t fared well at all. The midterm polls were badly off in 2014, with multiple Senate surveys missing by 10 points or more.

The polling errors of the last four election cycles have one thing in common: overstating support for Democratic candidates. So far, it doesn’t look like most pollsters have any idea how to fix the problem. Reached for comment on the firm’s 2020 performance, Quinnipiac declined but sent a sober statement. “I’m not able to make even preliminary hypotheses about what exactly the issues are,” Quinnipiac’s Doug Schwartz told RealClearInvestigations. “After the 2016 election, it took 6 months for the American Association for Public Opinion Research to release their findings about polling errors; I would expect a full evaluation of 2020 to take at least as long, though we might have some idea of the situation before then.”

Autopsies of the 2016 election by individual pollsters and the AAPOR zeroed in on the failure to accurately account for Donald Trump’s appeal among blue-collar voters. To address this shortcoming, most state polls in 2020 were weighted by education levels to make sure voters without college degrees were not again under-represented.

This reweighting doesn’t appear to have fixed the problem. Some have suggested 2020 has presented unique issues for pollsters because the coronavirus pandemic led to a huge increase in mail-in ballots.

“In the end, the polling error in states was virtually identical to the miss from 2016,” Cohn observed in a recent post-election report.

Some Republican pollsters made robust efforts to identify Trump voters, including Robert Cahaly of the Trafalgar Group, who proved to be the most accurate swing-state pollster in 2016. Cahaly keeps his methods proprietary, but he said a central problem was the reticence of Trump voters to express their support for the President – the so-called “shy Trump voters” – along with general suspicion of the media and, by implication, pollsters working for media companies.

“Trump people distrust the ‘deep state’ or whoever they think might be asking,” Cahaly says. “They might not even give their real opinion on a text. They might not give their real opinion on an automated call. They might not give their opinion on an email survey that they don’t know – ‘Who’s asking?’ We got that [question] all the time,” he says.

Cahaly says he altered his polling techniques for 2020 to try to account for that growing distrust. He remains guarded about his polling methods, but it’s known that Trafalgar does a lot of polling by text message and that its polls involve only a few direct, easy-to-answer questions. The goal is to be as unintrusive as possible and get quick, unguarded responses.

Cahaly finds the continued use of phone polling misguided. The Pew Research Center reports that response rates to telephone surveys declined from 36% in 1997 to just 6% in 2018. “These guys are VCR pollsters in a Netflix age,” he says.

For all that, Trafalgar’s 2020 results weren’t as accurate as in 2016, and the firm had some notable misses. While it distinguished itself from most other pollsters by correctly predicting Trump victories in Florida and North Carolina, Trafalgar’s final poll in Georgia had Trump winning the state by five points; he’s lost there by less than one point. Trafalgar said Trump was up two points in Michigan and two points in Pennsylvania; he’s behind in both.

Even though Trafalgar ended up with a better track record than most polling organizations, outside observers remain skeptical. Henry Olsen, a Washington Post columnist and polling expert, says that instead of underweighting Trump voters, “Cahaly overweights those [voters] so that he was producing these estimates that clearly also were not correct.” Nonetheless, Olsen concedes: “Cahaly’s basic insight is right. There are people who just are disinclined to answer polls. They are disproportionately Trump backers. … I think Cahaly had the wrong assumption about weighting or turnout, but the vast majority of more traditional pollsters just missed the boat entirely.”

Aside from Trafalgar, a handful of politically conservative pollsters in 2020, such as Rasmussen, Susquehanna Polling and Research, and Insider Advantage, regularly produced polls that were noticeably more favorable to Trump and, as a result, often more accurate this year.

So far, the polling establishment has been reluctant to acknowledge the success of certain partisan pollsters – Trafalgar gets a C- in FiveThirtyEight’s grading system despite outperforming many more established and better-graded polling firms in the last few elections. In fairness to FiveThirtyEight, it tries to evaluate pollster methodology and Trafalgar doesn’t reveal much about how its polls are taken.

Republican pollsters, meanwhile, are using the recent failures to press their criticisms of the polling establishment. Jim Lee of Susquehanna echoes Cahaly’s observations about growing distrust, noting that pollsters are most often affiliated with either media outlets or universities – two institutions that are viewed as hostile to conservatives and intensely disliked by Trump voters.

Lee has called on AAPOR, the industry trade group, to hold pollsters accountable for poor performance. To that end, he has produced a detailed memo making recommendations. Many of these suggestions call for greater transparency, such as requiring pollsters to disclose donors, business relationships, and partisan affiliations. But Lee also wants AAPOR to institute meaningful professional consequences for pollsters who are wrong, and suggests adding the following paragraph to the trade organization’s “Code of Professional Ethics and Practices”:

Public, private and/or academic university polling firms that publicly release results to election-related polls within 14 days of an impending election, and where hypothetical “ballot test” results are released, and where these results are said to differ wildly from actual election outcomes post-Election Day, and thus are vulnerable to criticisms of contributing to voter suppression, shall face sanctions, rebuke and/or reprimands by AAPOR and as a result, should be called on to issue public apologies with detailed explanations of their survey methodologies and reasons for [their] erroneous election outcomes. Furthermore, AAPOR should strongly encourage these academic institutions of higher education who are recipients of federal and/or state grants or other taxpayer financed earmarks for the purposes of conducting “research” to refund in full to governmental jurisdictions and/or state/federal agencies these expenditures.

While Lee doesn’t specify what sanctions or rebukes the organization should institute, he is concerned that the media’s obsession with reporting on polls, particularly when those polls have consistently and inaccurately favored the Democratic Party, is affecting election results. “I think the study that needs to be done is with registered voters that chose not to vote on Nov. 3 and to ask them why,” Lee says. “And if you have even 10% say, as you know, ‘I saw the polling and my guy was winning, so I stayed home,’ boom there it is.”

Not everyone agrees the consequences of bad polling are so dire. “The idea that inaccurate polling leads to voter suppression is laughable,” Olsen says. “Marginally attached voters are well known to consume low amounts of political news. The idea they might be interested in voting but discouraged because of an obscure poll barely publicized among the general public is absurd.”

However, Olsen isn’t dismissive of the idea that there’s a need to hold pollsters accountable. “Nothing wrong with the AAPOR adopting polling guidelines that members have to adhere to,” he says. “The UK polling board does that and revised their guidelines after well-publicized polling misses in the last decade.” AAPOR won’t comment on Lee’s memo, but a spokesperson for the organization noted that “AAPOR is in the midst of doing its every-five-year review of its Code of Professional Ethics and Practices and will review and evaluate all suggestions when the open comment period ends in January 2021.”

While the contention that bad polling affects election outcomes is contestable, there’s really no doubt polls are distorting the electoral process in ways that are bad for both Democrats and Republicans. Democrats raised nearly $200 million for Senate races in Maine and South Carolina, in no small part because polls indicated those races were close. The Democratic candidates in those races lost by nine and 10 points, respectively. Democratic leaders told Politico after the election that the party’s poor performance in November came about because it “underestimated Donald Trump’s popularity, relied too much on polls, and failed to heed the warnings of its most vulnerable members.”

More broadly, the failures of 2020 have rekindled periodic questions about whether journalists and political professionals alike rely far too much on polling instead of fuller ways of measuring Americans’ moods and political aspirations. Just a few weeks after the polling industry’s widespread failure, reporting on polls continues to shape media coverage and set the national political agenda as if nothing happened. And that might be the biggest problem of all, says Cahaly. “If you can’t predict something as simple as an election race, do you think these same polls are giving people the right answers on how people feel about Covid?”

