Editor's note: Natalia Elsner is director of research strategy at HSM Group, a Scottsdale, Ariz., research firm.
When McKinsey & Company released its employer study of health care benefits in June 2011, controversy quickly erupted because the findings were so at odds with the projections from the Congressional Budget Office, Urban Institute and RAND Corporation. While critics attacked the polling nature of the study and the fact that the questionnaire “educated” respondents, I kept thinking, “What if it’s the sample?”
Even after McKinsey released the methodology and crosstabs of the survey data, I still wondered if I could entirely trust an online panel to deliver 1,300 qualified respondents in the health care benefit space for a self-administered survey. The high incidence of “Don’t know” on some fairly basic questions indicated potential response-quality issues.
It was not the first time I wondered about the quality of the online sample for B2B research.
A lot has been written about online panels, with most authors focusing on issues pertinent to consumer studies and public opinion polling. B2B is in many ways a different animal: the size of the universe may or may not be measurable; the universe may be quite small (e.g., in managed care research); fielding costs are considerably higher; and study participants must be in positions of influence (such as buying or influencing insurance coverage decisions), in positions of knowledge (e.g., specific training or expertise that enables them to evaluate the merits of new products or technologies), or both.
Lesson one: trust but verify
The first time I had misgivings about the quality of B2B panel respondents was when I was overseeing the fielding and data analysis of an online survey of dentists. I’ll refer to the company that supplied the sample as Panel A. A programming glitch allowed panel members to enter the survey multiple times, which led to the discovery that some respondents had changed their answers to the screening questions in an attempt to qualify for the survey.
Further, 3 percent of respondents were disqualified because they answered that they were NOT licensed to practice dentistry in the United States. How, then, could they have been included in an online panel of U.S. dentists in the first place?
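Catching this kind of problem does not require elaborate tooling. The sketch below is purely illustrative – the panelist ID, attempt number and screener field names are hypothetical, not drawn from the dentist study – but it shows the sort of check that surfaces panelists whose screener answers shift between entries:

```python
import pandas as pd

# Hypothetical screener fields; substitute whatever your instrument captures
SCREENER_COLS = ["licensed_us_dentist", "years_in_practice", "practice_type"]

def flag_requalifiers(attempts: pd.DataFrame) -> pd.DataFrame:
    """Return entries from panelists whose screener answers differ across attempts."""
    # Count distinct screener answers per panelist; more than one means the
    # story changed between entries
    distinct = attempts.groupby("panelist_id")[SCREENER_COLS].nunique()
    suspect_ids = distinct[distinct.max(axis=1) > 1].index
    return attempts[attempts["panelist_id"].isin(suspect_ids)].sort_values(
        ["panelist_id", "attempt"]
    )
```

Anyone flagged this way deserves a manual look before their data makes it into the tabulations.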
When we rebid this study later and evaluated other panel companies, I took a close look at the overall counts of dentists that the panel companies claimed and compared those to the estimates from the Bureau of Labor Statistics (BLS). Curiously, counts of dentists from Panel A approached 80 percent of the BLS-estimated universe of dentists, whereas a company we’ll call Panel B had only 15 percent of the universe – a far more reasonable subset.
We have seen even more egregious overstatements (see Table 1 for examples from Panel Company C), but BLS statistics are of only limited help: most B2B respondents cannot be as neatly categorized as health care professionals.
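Where an external benchmark does exist, though, the arithmetic is trivial and worth doing before a contract is signed. The figures below are invented for illustration (they are not BLS or vendor numbers), and the 30 percent plausibility threshold is a judgment call rather than an industry standard, but the shape of the check is the point:

```python
# Hypothetical numbers for illustration only; plug in the current BLS estimate
# and each vendor's claimed panel count
UNIVERSE_ESTIMATE = 150_000                      # e.g., BLS count for the occupation
PANEL_CLAIMS = {"Panel A": 120_000, "Panel B": 22_500}

MAX_PLAUSIBLE_COVERAGE = 0.30                    # judgment call, not an industry standard

for panel, claimed in PANEL_CLAIMS.items():
    coverage = claimed / UNIVERSE_ESTIMATE
    verdict = "implausible - ask questions" if coverage > MAX_PLAUSIBLE_COVERAGE else "plausible"
    print(f"{panel}: claims {coverage:.0%} of the estimated universe -> {verdict}")
```

A vendor claiming to have recruited the bulk of an entire profession into an opt-in panel should, at a minimum, be asked to explain how.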
Even though panel companies claim they validate their respondents (and with licensed health care professionals, member identity can be verified through external sources – see the References section for a link to a Frost and Sullivan report), it’s a good idea to screen respondents more rigorously than a typical screening questionnaire does. Our firm routinely includes knowledge questions and red herring-type questions in screening (see Table 2). We also evaluate open-ended responses and look at response consistency. Both can be fairly time-consuming and, as a result, we may remove more respondents than is customary: 5 to 10 percent of responses may be discarded during data processing.
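In practice, the screening logic can be as simple as a scoring function. The items below are made up for illustration – they are not from our instruments or from Table 2 – but they show the pattern: a respondent must answer basic knowledge questions correctly and must not claim familiarity with something that does not exist.

```python
# Hypothetical knowledge items and a fictitious-brand trap; replace with items
# appropriate to your category
KNOWLEDGE_ANSWERS = {
    "q_capitation_definition": "a fixed per-member per-month payment",
    "q_preventive_code_example": "routine cleaning",
}
RED_HERRING_ITEM = "q_familiar_with_fictitious_brand"   # the brand does not exist

def qualifies(resp: dict) -> bool:
    """Disqualify respondents who miss basic knowledge items or fall for the trap."""
    knowledge_ok = all(
        resp.get(item, "").strip().lower() == answer
        for item, answer in KNOWLEDGE_ANSWERS.items()
    )
    fell_for_trap = resp.get(RED_HERRING_ITEM, "not familiar") != "not familiar"
    return knowledge_ok and not fell_for_trap
```

In a real instrument the knowledge items would be closed-ended, so exact matching is straightforward; the point is that qualification is earned, not self-declared.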
Lesson two: incorporate respondent verification into survey design
My experience with Panel A prompted me to explore how other research companies approach panel use for B2B recruitment. I interviewed two research professionals. The conversations went like this:
Elsner: How do you know that the B2B respondents you get from an online panel are who they say they are?
Researcher: Well, the data that came back looked solid.
Elsner: How do you know it was solid?
Researcher: It was what we expected.
Elsner: Did your survey include any red herrings or other implausible options? Did you evaluate the quality of open-ended responses?
Researcher: No.
Elsner: Then can you really be sure the data was solid?
Researcher: We had no reason to doubt it.
When I posed the same question in a few online forums for market research professionals, the response was dead silence.
Understandably, research is often undertaken to confirm a hypothesis or validate a decision that has already been made. But if a survey offers only plausible response options, then the data will contain only plausible selections, increasing measurement error and failing to reveal respondents’ ignorance of the subject. Even in rigorous quantitative instruments, a few well-placed red herrings or an occasional open-ended question lets the analyst look for damning inconsistencies that warrant a closer look at the respondent.
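The analysis side of that idea can also be automated to a first pass. The rules below are illustrative assumptions – the column names, the contradiction chosen and the length cutoff are all hypothetical – but they show how red-herring selections, internal contradictions and low-effort verbatims can be flagged for human review rather than discovered by accident:

```python
import pandas as pd

def quality_flags(df: pd.DataFrame) -> pd.DataFrame:
    """Flag implausible or low-effort responses for manual review before tabulation."""
    flags = pd.DataFrame(index=df.index)
    # Red herring: claims to use a product that does not exist
    flags["chose_red_herring"] = df["uses_fictitious_product"] == "yes"
    # Contradiction: no role in benefit decisions, yet reports final sign-off authority
    flags["inconsistent_role"] = (df["decision_role"] == "no involvement") & (
        df["signs_off_on_plan"] == "yes"
    )
    # Low-effort open ends: very short or copy-pasted verbatims
    verbatim = df["open_end_reason"].fillna("").str.strip()
    flags["thin_open_end"] = verbatim.str.len() < 15
    flags["duplicate_open_end"] = verbatim.duplicated(keep=False) & (verbatim != "")
    flags["needs_review"] = flags.any(axis=1)
    return flags
```

None of these flags proves a respondent is unqualified; they simply tell you where to spend the limited time available for manual review.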
Lesson three: expediency has a price
After the McKinsey study, I talked with another research professional whose company conducts large annual surveys of HR benefit managers. He admitted having doubts about the quality of the online sample but said that, for his company, pressures around deadlines, survey length and quotas outweigh the methodological rationale for including knowledge questions and traps. We’ve been there, too. Yet just as research dollars may be wasted if we fail to scratch below the surface and expose the unexpected nuggets of insight, the whole exercise may be for naught if the sample fails to make the grade and strategic recommendations are based on data from uninformed respondents.
References
Unmasking the Respondent: How to Ensure Genuine Physician Participation in an Online Panel. Frost and Sullivan. Retrieved from http://www.frost.com/prod/servlet/cio/159368832