As we go to press with this issue, the various state caucuses and primaries are being held around the country. With election-year madness heating up, I read with interest a study by Harris Interactive about (mis)perceptions surrounding pollsters’ oft-used phrase “margin of error.”
Already this year, the political polling process has taken it on the chin, thanks to Hillary Clinton’s first-place showing in New Hampshire. Following Barack Obama’s victory in Iowa, several polls leading up to the New Hampshire contest incorrectly had Clinton finishing a distant second to Obama.
Politicians have always had odd relationships with polls, especially during election years. If the numbers are in a candidate’s favor, the poll’s methodology is viewed as rock-solid. If not, then the flaws of the survey process are dredged up by the candidate and his or her supporters.
While the media historically haven’t helped clear things up very much - eschewing even brief mentions in their stories of the possible effects of errors on poll results - I have noticed that many outlets have become more responsible in how they report poll data, which, as an industry observer, gives me a small bit of hope. The political polling process certainly isn’t perfect, but for many in the general public, election and other opinion polls are the main form of research they are aware of, so anything that can be done to enhance the understanding of polls among the populace is worthwhile.
Misperceptions
Clearly, as the Harris Poll found, there are a lot of misperceptions about error out there. After surveying 1,052 U.S. adults by phone from October 16-23 last year, Harris reported the following:
- Fifty-two percent of all adults believe, wrongly, that statements about “the margin of error being plus or minus 3 percent” mean that “all of the results of the survey are accurate to within a maximum of 3 percent given all types of error.”
- A 66 percent majority of adults believes, wrongly, that the phrase “margin of error” includes calculation of errors caused by “how the questions are worded.”
- Large minorities believe, wrongly, that the calculation of the margin of error includes errors in developing a representative base or weighting errors (45 percent), mistakes made by interviewers (45 percent) and errors because of where the questions are placed in the survey (40 percent).
- Only 12 percent of the public agrees that the words “margin of error” should address only one specific source of error - sampling error - as they almost always do.
- A 56 percent majority believes that statements about margin of error do not make it clear that this calculation excludes all sources of error except for sampling error for a pure random sample.
Purely theoretical
As the Harris press release notes, margin of error is “actually a purely theoretical calculation of what the likely maximum error (at a 95 percent confidence level) would be if the survey had used a pure probability sample with a response rate of 100 percent and there were no other possible sources of error. In the real world of polling there are several other sources of error that may sometimes be larger than this theoretical calculation of sampling error, and there is no good way to calculate them.”
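To see where a figure like “plus or minus 3 percent” comes from, the standard textbook formula for sampling error under a pure probability sample can be sketched in a few lines. This is an illustration of the general formula, not Harris’s own calculation; the 95 percent confidence level corresponds to a z-score of 1.96, and p = 0.5 gives the widest (worst-case) interval.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Theoretical sampling error for a pure probability sample.

    n -- sample size
    p -- assumed proportion (0.5 maximizes the error)
    z -- z-score for the confidence level (1.96 for 95 percent)
    """
    return z * math.sqrt(p * (1 - p) / n)

# For a sample the size of the Harris Poll's (1,052 adults):
moe = margin_of_error(1052)
print(f"+/- {moe * 100:.1f} percentage points")  # roughly +/- 3.0
```

Note that this reproduces the familiar ±3 points for a sample of about 1,000 - but only under the idealized assumptions the release describes: a pure random sample, a 100 percent response rate and no other sources of error.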
The release outlines other possible sources of error in a poll:
- Non-response errors - Pollsters often do not complete interviews with most of the people they intend to survey because they are not available or refuse to be interviewed.
- Errors due to question wording or question order - The answers to questions are sometimes influenced by such things as how the questions are posed, what questions were asked earlier in the survey or which responses are presented to the respondent, among other things.
- Errors due to interviewers - Interviewers sometimes influence, often unconsciously, the answers given by the people they survey (e.g., social desirability, acquiescence bias, researcher expectancy effects, etc.).
- Weighting errors - Most polls are weighted statistically to compensate for demographic and other biases in the survey sample; this is an imperfect process. Weighting the data can cause errors in the results.
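The last point can be made a bit more concrete. One widely used rule of thumb for how much weighting widens the true uncertainty is Kish’s approximate design effect, which compares a weighted sample to an unweighted one of the same size. The sketch below uses made-up weights purely for illustration, not data from any actual poll.

```python
def design_effect(weights):
    """Kish's approximate design effect: variance inflation caused by
    unequal weights, relative to an unweighted sample of the same size."""
    n = len(weights)
    sum_w = sum(weights)
    sum_w2 = sum(w * w for w in weights)
    return n * sum_w2 / (sum_w ** 2)

# Hypothetical weights: some respondents down-weighted, some up-weighted
weights = [0.5] * 400 + [1.0] * 400 + [2.0] * 200
deff = design_effect(weights)
effective_n = len(weights) / deff  # the sample "acts like" a smaller one
print(f"design effect {deff:.2f}, effective n {effective_n:.0f}")
```

In this toy case a nominal sample of 1,000 behaves like a sample of under 800, so a margin of error quoted from the nominal size understates the real uncertainty - exactly the kind of gap between the theoretical calculation and real-world error that the release describes.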
Use is controversial
Harris Interactive drew several conclusions from its research:
- The use of the phrase “margin of error” is controversial because it is often used when reporting telephone polls even though it is not possible to calculate a real margin of error.
- Pollsters must do a better job of explaining all the possible sources of error in their polls, not just a theoretical sampling error, which does not take into account the impact of other, potentially substantial, sources of error.
- The accuracy of opinion polls should be judged empirically by the accuracy and reliability of their findings, not on a theoretical basis when there is no way to calculate a real margin of error.
- Pre-election polls should continue to be trusted only so long as their final forecasts are reasonably accurate, not because they are theoretically “scientific” (since there is no means to establish that they are).
- The words “margin of error” should probably not be used at all in conjunction with polling results.
At the end of the survey, respondents were asked if pollsters should continue using the phrase “margin of error” in light of the impossibility of calculating most possible sources of error. Surprisingly, a 52 percent to 40 percent majority thinks that they should. (For its part, the Harris Poll does not use the phrase.)