
Journalists routinely cite public opinion polls in reporting political
campaigns and other news stories. They also routinely cite the poll’s “margin
of error” which lends an air of precision to the data. However, few consumers
of polling data really understand what “margin of error” means and how it
should affect their interpretation of the data.

Polling theory tells us that if you ran the exact same survey under the exact
same circumstances, you would get results within the margin of sampling error
19 times out of 20. While that’s a pretty high level of comfort, it also means
one poll out of 20 will be outside the margin of sampling error. Some people
in the business call these rogue polls. At Rasmussen Research, we call them
klunkers.
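
For readers who want to see what “19 times out of 20” looks like in practice,
here is a minimal simulation sketch in Python. The sample size, margin and
true level of support are illustrative assumptions, not a description of how
any actual poll is conducted:

    # Simulate repeated polls of a race where true support is exactly 50%,
    # and count how often each poll lands within the stated margin of error.
    import random

    TRUE_SUPPORT = 0.50   # assumed level of support in the population
    SAMPLE_SIZE = 1000    # hypothetical poll size
    MARGIN = 0.031        # approximate 95% margin of error for n = 1,000
    TRIALS = 10_000       # number of simulated polls

    within = 0
    for _ in range(TRIALS):
        hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
        if abs(hits / SAMPLE_SIZE - TRUE_SUPPORT) <= MARGIN:
            within += 1

    print(f"{within / TRIALS:.1%} of simulated polls fell inside the margin")
    # Expect roughly 95% -- about 19 polls out of 20. The rest are the klunkers.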

What’s really scary about this for a pollster is that it’s generally hard to
know when you’ve had a klunker.

For example, suppose you run an election poll one week and find the two
candidates are pretty close. Then, a week later, one of the candidates has a
6-point lead. There are several possibilities. One is that something really
changed in the race during that week. Another is that one of the two polls was
a klunker. A third, less likely possibility is that the race is actually right
in the middle and your two polls captured the extreme ends of the margin of
error. The only way
to know for sure is to conduct a third survey (and maybe even a fourth for
comfort).

For the Portrait of America Tracking Polls, we’ve conducted over 100
nightly surveys in the race for president. As a result, we’ve probably had
five or more klunkers. That’s one of the reasons we report results as part
of a three-day rolling average. If we have a klunker, the impact is muted by
the fact that we’re averaging in two other surveys.
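
A quick sketch shows why averaging helps; the nightly numbers below are
invented for illustration, with the third night playing the role of the
klunker:

    # A three-day rolling average mutes one bad night: the 53 barely moves
    # the reported figure because it is averaged with two normal nights.
    nightly = [46, 47, 53, 47, 46]   # hypothetical nightly results

    rolling = [round(sum(nightly[i - 2:i + 1]) / 3, 1)
               for i in range(2, len(nightly))]
    print(rolling)   # [48.7, 49.0, 48.7]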

Also, by using a large sample of 2,250 likely voters, we have a smaller
margin of sampling error than other surveys (+/- 2 percentage points). This
means that if one of our results lands slightly outside the margin of sampling
error, the apparent impact is less dramatic than it would be with a 4-point
margin of sampling
error.
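
The arithmetic behind that smaller margin is the standard 95 percent formula
for a proportion near 50 percent, which works out to roughly two points for a
sample of 2,250. A short sketch (the smaller sample sizes are simply typical
of other national polls, not references to any specific survey):

    # Approximate 95% margin of sampling error, in percentage points,
    # for a proportion near 50%: 1.96 * sqrt(p * (1 - p) / n).
    from math import sqrt

    def margin_of_error(n, p=0.5):
        return 100 * 1.96 * sqrt(p * (1 - p) / n)

    for n in (600, 1000, 2250):
        print(f"n = {n:>4}: +/- {margin_of_error(n):.1f} points")
    # n =  600: +/- 4.0 points
    # n = 1000: +/- 3.1 points
    # n = 2250: +/- 2.1 points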

So, when you look at a series of polls that show one result and find that one
poll is out of sync, you should generally assume the majority of polls are
right.

Last week, the Portrait of America Presidential Tracking Poll showed George
W. Bush clinging to a 2-point lead in the popular vote for president. A
Newsweek poll showed Vice President Al Gore up by 8. Other polls were reported
showing the race somewhere in between.

Republicans came to Portrait of America praising the fairness of our poll
while trashing Newsweek. Democrats came to the Portrait of America site
offering nasty comments and telling us that Newsweek proved just how far off
base we were.

In reality, virtually all polls of the presidential race conducted last
week showed essentially the same result. Our tracking polls showed, each and
every day, that the race was within the margin of sampling error. One day, it
was a
pure tie. Eight other polls released last week showed the race within the
margin of sampling error and the most common result reported was a tie.

So, looking at all available data, it’s safe to conclude that the race for
president was a toss-up and that the Newsweek poll was just an aberration.

Since then, both the Portrait of America and Gallup Tracking polls have
shown some movement in Gore’s direction. This might be a trend, and it might
just be statistical noise. We won’t know for sure until we conduct a few more
surveys and see where things settle out.

I’m not suggesting the margin of error is all you have to worry about when
evaluating polls. Next week, we’ll talk about the difference between
registered voters and likely voters and how the differences can lead polling
firms to report different results.
