Editor’s note: Stephen J. Hellebusch is president of Hellebusch Research & Consulting Inc., Cincinnati.
An article from the June 16, 2008, issue of BusinessWeek (“Online Polls: How Good Are They?”) reminded us of the importance of ensuring that those we survey are representative of the population we are trying to investigate.
It reported: “Online polling offers some clear benefits. Because respondents can choose when to take the survey and how much time to devote to each question, they are more likely to provide thoughtful and candid answers. Companies can ask more sensitive questions (how often do you bathe?) because respondents aren’t being questioned by a human being …
“But critics point to a central problem with many online surveys: the pools of respondents, though massive, rarely represent the larger population. That … is because the respondents aren’t selected randomly, violating a core requirement of probability-based research.”
Selecting a truly representative sample is a difficult task, and one easy to overlook. Additionally, there is a learning curve, as much of the recent research concerning quality in online interviewing indicates.
Oddly enough, it is actually rare that we get to see the impact of sample quality in marketing research, primarily because it is not an objective of any of the studies we conduct. A little while back, however, I had an experience that brought the importance of this consideration into sharp focus for me. This short article shares that experience, with the names changed to protect the innocent (or the guilty).
The company had launched a new product, a fiber supplement in powder form which users mixed in water. Call it Powderine. It had a great flavor and mouth-feel as its key point of difference. To add some definition, let’s say Powderine’s flavor was chocolate.
About three years after a modestly successful launch, the brand group decided to add a well-researched alternative to the line. While people loved the chocolate, the new Powderine flavor was actually a non-flavor - it was neutral. The idea was that consumers would be able to mix the non-flavor with whatever liquid they desired, and “have it their way.” Since Powderine Neutral had the texture benefits of the original product, the target group was those using a competitor’s non-flavored version, which did not have the nice mouth-feel of Powderine.
The first sample
About five months after launch, the brand managers were concerned, because the customer information center (the toll-free line) was getting a lot of calls about the new line extension. To explore for any issues, we surveyed those who called in about it, most of whom had called to complain.
While the sample size was small (66), we learned that the users had a very unfavorable opinion of Powderine Neutral. Only about a fifth rated it “excellent” or “very good.” Fewer than half (47 percent) indicated any intent to buy it again (“definitely” or “probably” would buy). When asked “likes” and “dislikes,” the number of comments categorized as dislikes dominated.
And then came a key learning. Most (80 percent) of those calling had come to the line extension from the original Powderine chocolate flavor, and many of them were mixing it in water - just like they did the original! It was designed to be tasteless in water, so their disappointment was not hard to understand.
The second sample
These latter findings made all of us wonder whether the negative reaction was a result of the customer information center sample or of a bad product. The brand group was especially eager to learn whether it had a product quality issue to address.
The category was very low-incidence, so it was hard to obtain a representative sample of Powderine Neutral users. Adding to the challenge was the fact that the product was a line extension, bearing the Powderine name, so even making sure the respondent was talking about the right product in the line was a challenge.
In the pre-Internet days, mailing short “card” questionnaires to huge household panels was a very effective way of collecting representative samples of low-incidence respondents. So, we used that approach. We could not be certain from the card that the respondent correctly identified the line extension, but we interviewed all 100 respondents available and ended up with 37 who were certain to be referring to Powderine Neutral. The sample size was uncomfortably low, but it was the best that could be obtained six months after launch in a low-incidence category. For comparison purposes, the questionnaire was the same as the one used earlier.
What a difference the sample made! While less than half of the customer information center sample indicated they would buy again (47 percent), nearly three-fourths (72 percent) of this sample said they would. Nearly 90 percent of this small group liked the mixability and flavor, relative to other similar products. Like/dislike comments were overwhelmingly favorable. Most of the respondents had come to Powderine Neutral from the competitive product, as intended, not from the brand’s chocolate flavor.
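As a rough check that the gap between the two repurchase-intent figures is larger than small-sample noise alone would explain, the article’s percentages and sample sizes can be run through a simple two-proportion comparison. The sketch below is illustrative only - the article reports no significance testing, and the normal approximation for these modest sample sizes is an assumption.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion
    (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Repurchase-intent figures reported in the article
call_in = (0.47, 66)  # customer information center sample
panel = (0.72, 37)    # mail-panel sample

for label, (p, n) in [("call-in", call_in), ("panel", panel)]:
    m = margin_of_error(p, n)
    print(f"{label}: {p:.0%} +/- {m:.0%} (n={n})")

# Pooled two-proportion z-test for the difference between the samples
p1, n1 = call_in
p2, n2 = panel
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 95% level
```

Even with these small samples, the z statistic exceeds the conventional 1.96 cutoff, consistent with the article’s point that the two samples were genuinely telling different stories rather than bouncing around a common true value.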
Stark difference
Although we may not wish it to be so, sample quality is still very important in conducting surveys. The stark difference in the results from these two very different samples, collected with the same questionnaire, makes that evident. The advent of online interviewing does not change the basics of conducting survey research - it just makes some things a lot easier. The sample has to represent the population you are projecting to, and the wise researcher will take steps and create quality controls to help make certain that it does.
As a brief follow-up, Powderine Neutral eventually did fail in the marketplace. Presented as a flavorless alternative to a product famous for its great chocolate flavor, it never sufficiently caught on. But at least the brand managers had evidence that it was a hit with those it was aimed at, even though the marketing effort was unsuccessful in convincing enough of them to try it.