Polling season
As the US presidential campaign heats up, polls become big news. In this interview, pollster Christopher Blunt offers some tips on how to interpret them.
MercatorNet: Professional pollsters were amazed when Hillary Clinton beat Barack Obama in the New Hampshire primary by 39 percent to 37 percent. Several respected polling organisations had predicted that Obama would score a double-digit victory. What's your theory?
Christopher Blunt: I must say, first off, that I did not conduct any polls in New Hampshire—and am therefore not in a position to examine their wording or sample composition. Most analyses that I have seen blame the outcome on a confluence of various problems.
My own hypothesis is that the New Hampshire results are a classic example of the "observer effect" (sometimes colloquially referred to as "Heisenberg’s Uncertainty Principle"), where the act of observing influences or changes the phenomenon being observed. Registered Independents are allowed to choose which party’s primary they will participate in, and this group was heavily courted by both Democrat Barack Obama and Republican John McCain.
With pre-election polls showing a wide lead for Obama, I suspect that many Independents concluded that he didn’t need their votes—and therefore voted for McCain. This would also help explain why McCain’s election night total was approximately five or six percentage points higher than the final polls were showing.
MercatorNet: After decades of polling, how is it that pollsters still get things wrong? Is it more an art than a science?
Blunt: There is a science to drawing a sample that is representative of the population, which allows us to extrapolate from that sample to the electorate as a whole. However, a survey is not a census; there is a margin of error associated with every poll. A larger sample can reduce this margin of error, but nothing can eliminate it. Professional pollsters are usually very accurate in gauging public opinion, as we have developed excellent standards for sampling and balanced question wording.
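As a rough numerical illustration of the point about sample size and margin of error, here is a minimal Python sketch of the standard formula for a simple random sample. The 95 percent confidence level and the sample sizes below are illustrative assumptions, not figures from the interview:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion p estimated from
    a simple random sample of size n (z=1.96 gives ~95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling the sample only halves the margin of error, which is
# why a larger sample can shrink the error but never eliminate it.
for n in (400, 1000):
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
# A sample of 400 gives a margin near +/- 5 points; 1,000 near +/- 3.
```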
One of the big challenges we’re facing going forward, however, is the difficulty of interviewing people who have "cut the cord" and are using only cellular phones. Because cell phones usually have charges for incoming calls, pollsters do not include these numbers in their samples. As increasing numbers of people are leaving their landlines behind, we’re finding it increasingly difficult to get younger voters in our samples. We have ways of compensating for it, but this is going to be a serious issue in the future.
MercatorNet: With the race for the presidential nomination so open in both parties, we are being inundated with polls. Isn't there a danger of polls swaying the voters rather than revealing their preferences?
Blunt: As I said about the New Hampshire results, pre-election polls can certainly produce an "observer effect". What’s been most remarkable to me, however, is how little each of the state electorates seems to have been influenced by the results in other states this year. As of this interview, we’ve had six states (Iowa, Wyoming, New Hampshire, Michigan, Nevada, and South Carolina) vote on the Republican side. No candidate has been able to use a victory in one of these states to influence the outcome in the next state. Rather, voters in these separate jurisdictions seem to be making independent judgments about the candidates, based on the campaigns being run in each jurisdiction.
MercatorNet: Are polls always objective? Is it possible to skew responses to produce a set of results? Are there any tell-tale signs that this might have happened?
Blunt: There are many ways that poll results can be skewed. Sometimes the skewed results are inadvertent, but they can also be deliberately generated by an organization intent on producing evidence that the public holds a certain opinion about a given issue. Samples can be biased by disproportionately weighting up certain groups and weighting down others.
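To see how weighting alone can move a topline number, here is a hypothetical Python sketch; the group names, shares, and support figures are all invented for illustration and do not come from the interview:

```python
# All figures are invented for illustration only.
# Each entry: true share of the electorate and support for Candidate A.
groups = {
    "urban": {"share": 0.40, "support": 0.60},
    "rural": {"share": 0.60, "support": 0.40},
}

def topline(weights):
    """Weighted topline support for Candidate A."""
    return sum(weights[g] * groups[g]["support"] for g in groups)

honest = {g: groups[g]["share"] for g in groups}  # matches electorate
skewed = {"urban": 0.55, "rural": 0.45}           # urban weighted up

print(f"honest weights: {topline(honest):.0%}")   # 48%
print(f"skewed weights: {topline(skewed):.0%}")   # 51%
```

The same raw interviews produce a different apparent leader depending solely on the weights applied.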
Much more common, though, are problems with the framing and wording of questions, and the order in which those questions are presented. If the election trial heat is asked after testing a battery of information items unfavorable to one of the candidates, the head-to-head result can be very different than if it had been asked before that battery of items. (In fact, campaign pollsters often ask the trial heat both early and late in the interview, to see which information items are most closely associated with opinion change.)
Sometimes, the only way to know the question order is to examine the entire questionnaire; unfortunately, as some questions are proprietary and for the client’s internal use only, pollsters don’t often release the entire set of results. Where everyone can become a better consumer of polling data is in closely examining the question wording itself. A good question gives the respondent two or more real choices, and does not attempt to "lead" the respondent toward a particular outcome.
Particularly if the issue in question is relatively obscure, or one about which the public is not yet widely informed, a good pollster will offer "or do you not have an opinion about this" as an option. Also, a good question does not introduce unnecessary information or argumentation. These kinds of biases can be very subtle; the key is to ask yourself why the pollster selected a particular formulation and not another.
MercatorNet: How about the candidates themselves and their political advisers? In your experience are they able to interpret the polls so that they respond rationally to public opinion?
Blunt: Most candidates have firmly established positions on issues; the politician with his finger to the wind, gauging what the public thinks before taking a position, is largely a myth. I’ve worked with many candidates, and have never met one who needed a poll to tell him what to believe. Campaigns tend to use opinion polling to understand what the electorate thinks are (1) the highest issue priorities; (2) the candidate’s greatest accomplishments; (3) the opponent’s greatest vulnerabilities; and (4) the most effective language to use in communicating all of these. The bottom line is that polling helps a campaign make better strategic use of its limited resources (time and money).
MercatorNet: You did a major study on abortion attitudes in Missouri last year which captured nation-wide attention. Is it difficult to take polls on controversial ethical issues?
Blunt: The key in polling these kinds of issues is to use question wordings that are as neutral and unbiased as possible. In the Missouri study, my co-author and I aggregated roughly 30,000 survey interviews that had been conducted over a 15-year period, and examined the trends in abortion attitudes. All the original interviews had been conducted by the same pollster, and we’d asked the abortion question exactly the same way each time: "On the debate over abortion policy, do you consider yourself to be pro-life, pro-choice, or somewhere in between?"
To guard against order effects, we took the additional step of rotating the terms "pro-life" and "pro-choice;" half the respondents heard one term first, and the other half heard the other term first. Note that we also made a deliberate point of referring to each side in the abortion debate as it refers to itself: "pro-choice," not "pro-abortion," and "pro-life," not "anti-abortion". The key was to help respondents on each side of the debate be comfortable identifying themselves, and to give an "out" ("somewhere in between") to people who hadn’t really thought much about the issue.
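Here is a minimal sketch of the rotation Blunt describes; the question wording is quoted from the interview, while the per-respondent randomization mechanics are an illustrative assumption:

```python
import random

def abortion_question():
    """Rotate which term a respondent hears first, so that any
    order effect cancels out across the sample as a whole."""
    terms = ["pro-life", "pro-choice"]
    random.shuffle(terms)
    return ("On the debate over abortion policy, do you consider yourself "
            f"to be {terms[0]}, {terms[1]}, or somewhere in between?")

print(abortion_question())
```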
MercatorNet: People often like to score points in policy debates by invoking a poll that shows that a majority of Americans approve of, say, stem cell research or more restrictions on immigration. But opponents can often throw back contradictory figures. Can polls on social mores be trusted?
Blunt: Polls on nearly any subject can be trusted if they are conducted fairly, but polls on contentious social mores perhaps need to be examined particularly closely in this regard. When a major player in such a debate (for example, a biotech company that would profit from embryonic stem cell research, or a business group that profits from cheap immigrant labor) releases a poll purporting to show that the public agrees with its side of the debate, readers should take a very hard look at the question wording. Were arguments or messages given for one side but not the other? Was each side in the debate fully and fairly represented, or was one side more of a straw man than the other? Were respondents simply asked if they agree with favorable statements, or were they given the choice of two sides on the issue?
MercatorNet: What are the three most important questions voters should ask to assess whether a poll is meaningful?
Blunt: First, who is the sponsor of the poll and what is that organization’s motive? Second, were the questions worded and presented in a fair and balanced manner? Finally, how consistent are the results with other polls on the same subject?
MercatorNet: What are the ethical challenges that a professional pollster faces in his or her work?
Blunt: I am a member of the American Association of Political Consultants, and also of the American Association for Public Opinion Research; as such, I adhere to the well-developed codes of ethics that these organizations have established. Most of these ethical codes cover such issues as honestly conducting and reporting a poll’s results, disclosing the poll’s sponsor, defining the population under study, detailing the exact question wording, and so forth.
In addition, like most pollsters, I have personal standards regarding the clients I will or will not take on. For example, I have turned down work for pro-abortion Republican candidates with pro-life primary opponents. I will also not work with an organization which wants a poll to help it promote or profit from abortion, contraceptive services, homosexual "marriage," embryonic stem cell research, and so forth. Finally, like most pollsters, I wouldn’t work for a client (even one I was sympathetic to) who insisted on fielding a "propaganda poll" with blatantly biased question wording.
Christopher Blunt operates Overbrook Research, a public opinion consulting practice, in Michigan. He has been designing, conducting, and analyzing quantitative and qualitative research since 1991. His analysis has helped shape Republican campaign strategies nationally and in many individual states. In the most recent election cycle, his analysis played an integral role in the RNC’s microtargeting efforts in dozens of campaigns. Dr. Blunt was a study director and analyst with Market Strategies, Inc. for a dozen years, before founding Overbrook Research in 2003. He holds a PhD in political science from UCLA.