How has opinion polling changed?
Editor’s note: Doug Pruden and Terry G. Vavra are partners at Customer Experience Partners. This is an edited version of an article that originally appeared under the title “The Challenges of Electoral Polling.”
Today, less than a week before one of the most unusual presidential elections we'll likely ever have, we feel compelled to offer a few opinions on why and how pollsters may fail in their predictions of some electoral contests. While we are writing this in advance and are not in the business of predicting political polling outcomes, we are confident that in many races the pollsters will prove less than completely accurate. Yes, some of the misses will be the result of bias – either biased interpretations or a skewed survey process. But, for several reasons, even research conducted in a highly professional manner is becoming increasingly challenging.
Though you may feel that political polling is somewhat different from the research surveys we use in marketing to gauge the potential of new product concepts or the appeal of competitive positioning strategies, both forms of opinion-monitoring are subject to the same evolving difficulties.
The challenges opinion polling faces
Here’s a list of key reasons pre-election polling is getting more difficult and less trustworthy.
Willingness to participate in surveys
Response rates have been in steady decline for the last 30 years. In the 1970s and '80s it was still possible, with callbacks, to achieve a 30%-50% response rate. Today rates are reportedly as low as 5%. This raises the issue of non-response bias: if one doesn't attempt to sample non-respondents, there's the possibility that they are systematically different from respondents, and their unique position never gets acknowledged in the survey's results.
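To make the risk concrete, here is a minimal simulation of non-response bias. The two groups, their support levels and their response rates are invented for illustration – they come from us, not from any actual poll – but the mechanism is general: when willingness to answer correlates with opinion, the naive estimate drifts away from the truth.

```python
import random

random.seed(42)

# Hypothetical population: two equal-sized groups with different
# candidate support and very different survey response rates.
POP = 100_000
groups = [
    {"share": 0.5, "support": 0.60, "response_rate": 0.08},  # often answers
    {"share": 0.5, "support": 0.40, "response_rate": 0.02},  # rarely answers
]

# True support in the full population: 0.5*0.60 + 0.5*0.40 = 0.50
true_support = sum(g["share"] * g["support"] for g in groups)

# Simulate who actually responds, then poll only the respondents.
respondents = []
for g in groups:
    for _ in range(int(POP * g["share"])):
        if random.random() < g["response_rate"]:
            respondents.append(1 if random.random() < g["support"] else 0)

polled = sum(respondents) / len(respondents)
print(f"true support: {true_support:.2f}, polled estimate: {polled:.2f}")
```

Because the 60%-support group answers the phone four times as often, the polled estimate lands near 0.56 even though true support is 0.50 – a six-point miss with no interviewer error at all.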
Accessibility to respondents
Gaining access to potential respondents has become increasingly difficult. Over the last fifty years, technology, concerns for personal safety and cost considerations have driven major changes in methods for contacting potential respondents. Interviewing has progressed from personal interviews conducted in homes, to outbound telephone interviews, to inbound telephone interviews, to online opinion polls and, most recently, to river sampling – an online method that recruits respondents by inviting them to a survey while they are engaged in some other online activity. With each step, pollsters have lost some degree of control and have sacrificed some degree of representation, and therefore predictive accuracy.
Displacement of the landline telephone
Random digit dialing was once able to create a credibly representative sample of a population. However, as landlines declined from being the only telephone device to one used by fewer than 30% of the population, access to a representative sample has become increasingly difficult. That's because federal law (the Telephone Consumer Protection Act, or TCPA), enforced by the FCC and FTC, prohibits robocalls to cell phones whose owners have not opted in to such calls.
Caller ID
Among households that have maintained landlines, caller ID and a growing awareness of phone scammers have made it difficult to reach a desired sample. Many people simply won't pick up the phone when a call comes from an unknown number.
Difficulty in modeling the voting population
To determine how representative a responding sample is, one requires knowledge of the various factions in the population that might influence the issue in question. New groups – like Women for Trump, newly registered voters or any other unrecognized segment – increase the complexity of accurately profiling a population within a sample.
Sample weighting problems
When a population segment is underrepresented in a sample, the opinions of that segment can be weighted up to bring its proportion of the sample closer to its presence in the population. However, if the attitudes of the respondents being weighted aren't representative of the larger segment, the weighting can severely distort the survey results.
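As a hypothetical illustration of the mechanics (the segment names and all numbers are invented for this sketch), weighting replaces each segment's sample share with its known population share:

```python
# Post-stratification weighting, minimal sketch with made-up numbers.
# Suppose the population is 50% urban / 50% rural, but the sample skews urban.

population_share = {"urban": 0.5, "rural": 0.5}
sample = {
    "urban": {"n": 800, "support": 0.55},  # overrepresented segment
    "rural": {"n": 200, "support": 0.35},  # underrepresented segment
}

total_n = sum(s["n"] for s in sample.values())

# Unweighted estimate simply pools all respondents as interviewed.
unweighted = sum(s["n"] * s["support"] for s in sample.values()) / total_n

# Weighted estimate counts each segment by its population share instead
# of its sample share.
weighted = sum(
    population_share[seg] * s["support"] for seg, s in sample.items()
)

print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")
# unweighted = 0.8*0.55 + 0.2*0.35 = 0.51
# weighted   = 0.5*0.55 + 0.5*0.35 = 0.45
```

Note the caveat from the text: the correction is only as good as the 200 rural respondents' 0.35 figure. If the rural people who happened to answer are not typical of rural voters overall, weighting amplifies that error rather than fixing it.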
Failure to recognize social desirability bias
Sometimes a measured issue is clouded by social desirability. For example, it's been suggested that many American women may have felt uncomfortable admitting their decision to vote for candidates linked to or endorsed by Trump, even though they may ultimately vote for them. Consequently, their stated intentions may misrepresent their voting behavior. In marketing research, we also face demand bias – a respondent's desire to tell the interviewer what they believe the interviewer wants to hear. This often leads to acceptance of lackluster product concepts.
Assuming respondents have decided
Opinion polling is based on the simple premise that a person knows how they will behave in a future situation (e.g., how they’ll actually vote). Not all of us have such foresight. It's important to give respondents the opportunity to respond, “I don’t know” or “I haven’t made up my mind.” Neutrality or indecision must be allowed in survey responses if the survey is to be accurate.
Highly polarized positions
Just as we are seeing more flags and lawn signs than in past elections, and witnessing a huge jump in early voting, it could be that those who feel most strongly about this year's contests are more willing to participate in polls than the still-undecided voters who will ultimately determine outcomes in swing states.
The election's results
How correct the pollsters will be and how much each or any of these problems contaminated the polling surrounding the presidential race will likely be the subject of debates in the days ahead. Attempting to control each of them is what keeps many smart people interested in the field of marketing research. After all, no one said predicting human behavior would be easy.