Public opinion polls can be newsworthy and useful, but they are not predictions. (Photo by Tero Vesalainen/Getty Images)
Polls are not predictions.
Say it with me: Polls are not predictions.
I’m sure you’ve heard that before, but whenever there’s a significant poll with a surprising result, I feel like it needs to be said again. If we say it enough, maybe RAYGUN will print it on a T-shirt.
The Des Moines Register/Mediacom Iowa Poll published over the weekend, which showed U.S. Sen. Chuck Grassley with a mere 3-point lead over Democrat Mike Franken, was both significant and surprising. Grassley led by 8 percentage points in the Register’s previous poll in July, and other polls have shown even larger leads for the Republican incumbent.
The results will bring out all those pundits and prognosticators who like to make predictions based on polling, because that’s what they do. It will also bring out all those skeptics and conspiracy theorists who will claim, without any evidence, that the poll is somehow wrong or flawed.
And if the Election Day results differ from what this poll shows, the doubters will see that as proof the poll was garbage. Or even worse, some may go the other way and claim the polls were right and the election results were wrong. And that’s just dangerous.
I don’t have a vested interest in this poll. I did, however, work on the Iowa Poll with pollster J. Ann Selzer for most of the 16 years I was at The Des Moines Register. I believe there is ample reason Selzer is considered one of the best pollsters in the country. I also acknowledge there are challenges with polling that didn’t exist or were much less significant when I started my career in political reporting.
There are plenty of conclusions pundits and voters can draw about how and why Iowa’s Senate race, which most political experts have rated as safe for Republicans, suddenly appears highly competitive. It’s especially intriguing that Grassley is in this position, given that Republican Gov. Kim Reynolds has a 17-point lead over her Democratic challenger, Deidre Dejear, in the same poll, the Register reported Sunday.
But this is also an opportunity to review what polls are, and what they are not.
Election polls are not predictions. It’s more accurate to say they are history. That’s because they can only measure how people say they will vote on the day they are surveyed. Voters are asked: “If the election were held today …” But this poll was not taken on Election Day. It was taken Oct. 9-12, which was nearly four weeks before Nov. 8 and seven to 10 days before people can start voting early.
At that point, the poll shows, 46% of likely voters said they would vote for Grassley and 43% said they would vote for Franken. That’s a very close result, within the poll’s margin of sampling error of plus or minus 3.9 percentage points. We’ll get to that in a minute.
Even if the poll were 100% accurate, I do not expect the same results on Election Day, because people can change their minds and their behavior. The result might be better for Franken, or it might be better for Grassley. Perhaps some of the 3% of voters who said they weren’t sure how they would vote will make up their minds, or perhaps some of them won’t vote at all. Maybe some of the 4% who said they would vote for someone else will decide to pick one of the major-party candidates. Maybe something will happen that brings out people who were not polled because they said they weren’t likely to vote.
None of that would mean the polling was wrong, or inaccurate. More likely, it would mean the final weeks of the campaign made a difference.
I will add, it’s possible the poll itself will make a difference. Certainly, it will be seen as a motivator for the Franken campaign. More people who are inclined to support him may decide to vote. Some may decide it’s worth their time to volunteer for the campaign, or worth the investment to donate some money.
But people supporting Grassley may also be energized by the results and decide to work harder for his campaign because they may perceive a risk of losing that they didn’t imagine before. It’s difficult to say whether one side will be more effective than the other at using the poll results to their advantage.
Not all polls are accurate
So if we shouldn’t judge the accuracy of a poll compared to the election results, how do we know a poll is accurate?
First of all, no poll is 100% accurate. Even the best polls acknowledge a margin of error (and if no margin of error is reported, you should run the other way). There are challenges beyond that standard that all pollsters face. And some polls are better than others – sometimes, much better – because of how they are conducted.
We can look at the pollster ratings that I pointed out above. We can ask whether the pollster is a member of the American Association of Public Opinion Research and adheres to its standards and ethics. And we can look at a poll’s published methodology for transparency. Polls should publish their sample size and demographics, actual questions, margin of sampling error and methods for contacting voters, for example.
A few quick points:
Margins of error: The margin of error does not change a poll’s results, despite what campaigns and those trying to spin the outcome may want you to think. In the case of this Register poll, you may hear people add or subtract the margin of error from the topline results to make the race look even closer, or less close, depending on which side they’re on.
But Selzer often points out that the reported results (46% for Grassley and 43% for Franken, in this case) are the most likely. Is it possible that Franken may actually have 46.9% of the vote (his result plus 3.9 percentage points) or that Grassley may actually have 42.1% of the vote (his result minus 3.9 points)? Or, is it possible Grassley has a bigger lead than 3 points based on the margin of error? Yes, there is a non-zero possibility that may be the case. But that result is far less likely than the topline numbers as reported.
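The arithmetic behind those numbers is simple enough to sketch. Here’s a minimal illustration of the standard 95%-confidence margin-of-sampling-error formula and the add-or-subtract spin described above; the sample size of roughly 630 is an assumption chosen to reproduce the poll’s reported ±3.9 points, not a figure from the poll’s published methodology:

```python
import math

# Standard 95% margin of sampling error for a proportion:
# moe = z * sqrt(p * (1 - p) / n), largest at p = 0.5.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample size: about 630 likely voters yields
# roughly a 3.9-point margin at 95% confidence.
print(round(margin_of_error(630) * 100, 1))  # ~3.9 points

# The spin described above: adding/subtracting the margin
# from the topline results to stretch or shrink the race.
grassley, franken, moe = 46.0, 43.0, 3.9
print(round(franken + moe, 1))   # 46.9 -- Franken's result plus the margin
print(round(grassley - moe, 1))  # 42.1 -- Grassley's result minus the margin
```

Note that the formula also shows why margins shrink slowly: cutting the margin in half requires roughly quadrupling the sample.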
Methodology matters: A poll may be more or less accurate based on how it’s conducted. Quick-hit online surveys, for example, are not scientific because they only sample those who happen to see the survey and choose to click on it.
But it’s more complicated than that. Selzer has often talked about why she builds no assumptions into her data, such as which voters are most likely to turn out. People often talk about a pollster’s “secret sauce”: If a Selzer poll were a meal, it would be a rare, juicy steak with nothing on it but salt and pepper. There’s no sauce, in terms of some polling mumbo-jumbo that steers the results toward an expected outcome.
Here’s what she said about her methods during a 2020 “Iowa Press” interview: “My approach to polling is to think about it as polling forward, that is I don’t want to get in the way of my data revealing to me what is happening. So I don’t want to make assumptions, I don’t want to make any judgments about what is the right outcome or the wrong outcome. There are a lot of other polling outfits that decide how they are going to weight their data by looking backwards to see, ‘what was I expecting’ and how would they have arrived at that except to look at past elections and sort of embed those assumptions into their data. I call that polling backward.”
She weights data – a legitimate and accepted process. She collects data on all Iowa adults, age 18 and older. She weights the sample based on the demographics of that population, not based on who she thinks will actually vote. Then she eliminates those who do NOT say they’ll definitely vote. Many other pollsters will tailor their samples or weight their data based on their own perceptions of what the electorate will look like. These assumptions are usually based on history – and Selzer points out those trends are only accurate until they change.
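That two-step sequence – weight all adults to the population’s known demographics, then screen down to people who say they’ll definitely vote – can be sketched in a few lines. The respondents, age groups, and census shares below are hypothetical, for illustration only:

```python
# Minimal sketch of the "poll forward" sequence described above
# (hypothetical respondents and census targets, for illustration only).

# Each respondent: an age group and whether they say they'll definitely vote.
respondents = [
    {"age_group": "18-34", "definite_voter": False},
    {"age_group": "18-34", "definite_voter": True},
    {"age_group": "35-54", "definite_voter": True},
    {"age_group": "55+",   "definite_voter": True},
    {"age_group": "55+",   "definite_voter": False},
    {"age_group": "55+",   "definite_voter": True},
]

# Step 1: weight ALL adults to the population's demographics
# (hypothetical census shares), not to a guess about who will turn out.
census_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_count = {}
for r in respondents:
    sample_count[r["age_group"]] = sample_count.get(r["age_group"], 0) + 1
n = len(respondents)
for r in respondents:
    g = r["age_group"]
    # Weight = population share divided by the group's share of the sample.
    r["weight"] = census_share[g] / (sample_count[g] / n)

# Step 2: only then screen out everyone who does not say
# they'll definitely vote -- no turnout model applied.
likely_voters = [r for r in respondents if r["definite_voter"]]
```

The key design point is the order of operations: the weights are anchored to the adult population, which is known, rather than to a modeled electorate, which is a guess.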
There are other opportunities for inaccuracy in polling based on how the questions are phrased, in what order they are asked, etc. The Register, with CNN, canceled its final poll before the 2020 Iowa Caucuses because a candidate’s name was inadvertently omitted by the call center in at least one interview. I thought it was the right decision, albeit a painful one, that underscored the poll’s commitment to accuracy.
Voters may fib: Polling was more difficult than ever in 2020, and the same challenges exist today. There are concerns about accuracy that even the best pollsters have not yet solved. Polls rely on being able to reach likely voters who will answer questions. Many voters won’t answer their phones, and if they do, they may refuse to respond to a poll. Some poll respondents don’t tell the truth about how they plan to vote. Technology must continue to evolve to reach more voters, but that won’t help if voters lie.
So why poll? Even with all its limitations, I still believe election polling is a newsworthy and useful tool. A poll can give us insight into why voters are making their decisions – information we wouldn’t have if we just waited for results on Election Day. And for all the scorn often heaped on horserace reporting, people want to know who’s ahead in a campaign – as long as polling doesn’t skew reporting.
The challenge is to keep polling in perspective. Former Gov. Terry Branstad, the longest-serving governor in the history of the United States, always used to say the only poll that counts is the one taken on Election Day. That doesn’t mean his campaigns never paid attention to polls – of course they did. But it is a sage reminder that (all together now): Polls are not predictions.
Our stories may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. We ask that you edit only for style or to shorten, provide proper attribution and link to our web site. Please see our republishing guidelines for use of photos and graphics.