Editor’s note: Chris Szczepanski is assistant vice president at Weinman Schnee Morais Inc., a New York research firm.
Our clients rely on us as market researchers to deliver the highest-quality research possible. Crucial to this is the sample. Powerful analytic tools and skill cannot compensate for interviewing the wrong respondents. As the saying goes, garbage in, garbage out.
Hard-to-reach, low-incidence targets are often the focus of market research. These targets are costly both in dollars and in their impact on the direction and scope of the research, e.g., sequential monadic vs. monadic designs, fewer cells/respondents, lower confidence, etc. More costly targeted panels claim to, and often do, offer researchers an easier path than general-population sampling to desired targets, particularly low-incidence targets, by providing access to large, concentrated pools of respondents who meet specific profiles (e.g., demographics, psychographics, professions, etc.).
Targeted panels, accordingly, should yield higher incidence rates than a general-population recruit, resulting in quicker, more viable recruits. In our experience, however, this is not always the case, particularly for panels targeting respondents employed in specific professions or industries.
Our previous attempts to tap seemingly robust niche, profession-based panels have resulted in prolonged field times and increased costs due to lower-than-expected incidence rates and the need to bring in additional panel providers. These unwelcome surprises, in turn, have meant added stress from trying to manage costs and meet deadlines. By contrast, we have recently found that procuring the same sample targets via general-population recruiting, as well as live-stream/on-the-fly recruiting, has been quicker and less costly.
The inability of profession-based targeted panels to outperform general-population sampling of similar respondents raises questions about their quality, viability and, ultimately, their research and cost value. Why should we and, by extension, our clients pay more for a research catalyst that not only fails to speed up the process but may slow it down and lead to additional costs?
A core problem with profession-based panels may be that these panels are not as robust as advertised. The current U.S. national unemployment rate is nearly 10 percent, which doesn’t include people who’ve abandoned their employment search. Should we not expect the unemployment rate to have some effect on profession-based panels, particularly for industries and professions more acutely affected by the ongoing economic downturn, e.g., manufacturing, construction, etc.? This seems reasonable, despite the claims of most panel providers that they regularly re-verify their respondents’ profiles. But how often and, more importantly, how recently have they been verified or updated? Within the past month? The past three months? Is profile verification mandatory? Are panel members prohibited from participating in research until they have verified or updated their profile information?
As researchers, we need to know the quality of the panels we are considering. Such information should be provided up front by the panel provider. If not, we should question our suppliers about the quality of their panels. Simply believing that a panel’s quality must be high because it comes from Supplier X is not good enough; it is a careless lapse of our responsibility as professional researchers. Ensuring the quality of the sample is one of the most basic and important things a project manager can do to turn out high-quality research.
A well-designed screener should ensure that the desired target is captured, as should an appropriate panel of respondents from which to draw. Still, when using a specialty panel, particularly a profession-based panel, it would be good to know the quality of that panel. Telling us that a panel comprises X panelists, all of whom double opted in, only tells us that X respondents who at some point qualified for the panel agreed twice to be members. That's great, but it provides no insight into the panel's current quality. Are 80 percent of panelists current, viable members? Are 60 percent? Forty percent? Such information would help researchers make more informed decisions.
Press our vendors
Panel quantity is important, but so is panel quality, and both should factor into the decision to use or not use a particular panel. As researchers, we need to press our vendors for indications of sample quality. We need to know what we are buying. If we do not like a panel's quality, then we should either move on to another panel or take a shot at recruiting via the general population. Why pay more (in dollars and time) for unknown quality?
Panel providers could easily give researchers a sense of a panel's quality by providing some basic validation metrics, such as:
- the percentage of panelists who have verified or updated their profile within the past month, past three months, etc.;
- monthly dropout and new-member rates;
- panelist response rates, i.e., the percentage of panelists who respond to survey invitations, as well as the percentage who qualify for studies targeting the particular panel;
- panelist activity rates, e.g., range and mean number of surveys panel members have completed within the past month, past three months and past six months;
- panel inactivity rate; and
- perhaps even a client rating system.
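As a rough illustration only (not any provider's actual data or reporting format), the sketch below shows how a few of these metrics could be computed from a hypothetical panelist file; every field name, value and threshold is an assumption made for the example.

```python
from datetime import date

# Hypothetical panelist records -- field names and values are illustrative only.
panelists = [
    {"id": 1, "last_verified": date(2010, 5, 20), "joined": date(2009, 1, 4),
     "invites_90d": 12, "responses_90d": 7, "qualified_90d": 3, "completes_90d": 3},
    {"id": 2, "last_verified": date(2009, 11, 2), "joined": date(2010, 5, 1),
     "invites_90d": 10, "responses_90d": 0, "qualified_90d": 0, "completes_90d": 0},
    {"id": 3, "last_verified": date(2010, 3, 15), "joined": date(2008, 7, 19),
     "invites_90d": 15, "responses_90d": 11, "qualified_90d": 6, "completes_90d": 5},
]

today = date(2010, 6, 1)  # assumed "as of" date for the report
n = len(panelists)

def pct(count, total):
    # Percentage helper that avoids dividing by zero.
    return 100.0 * count / total if total else 0.0

# Profile-verification recency: share of panelists verified within 30 / 90 days.
verified_30d = sum(1 for p in panelists if (today - p["last_verified"]).days <= 30)
verified_90d = sum(1 for p in panelists if (today - p["last_verified"]).days <= 90)

# New-member rate: share of the panel that joined within the past month.
new_members = sum(1 for p in panelists if (today - p["joined"]).days <= 30)

# Response and qualification rates across the past 90 days of invitations.
invites = sum(p["invites_90d"] for p in panelists)
responses = sum(p["responses_90d"] for p in panelists)
qualified = sum(p["qualified_90d"] for p in panelists)

# Inactivity rate: panelists with no survey completes in the past 90 days.
inactive = sum(1 for p in panelists if p["completes_90d"] == 0)

print(f"Verified within 30 days:  {pct(verified_30d, n):.0f}%")
print(f"Verified within 90 days:  {pct(verified_90d, n):.0f}%")
print(f"New members (past month): {pct(new_members, n):.0f}%")
print(f"Invitation response rate: {pct(responses, invites):.0f}%")
print(f"Qualification rate:       {pct(qualified, invites):.0f}%")
print(f"Inactive (past 90 days):  {pct(inactive, n):.0f}%")
```

None of this is difficult to produce; the point is simply that reporting a handful of numbers like these would tell buyers far more about a panel than its headline size.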
More and better information
The ongoing economic downturn, with its high unemployment rate, may be attenuating the benefit of profession-based panels to researchers, especially for acutely affected industries. Nevertheless, such panels are a valuable research tool, possibly even more so during a downturn, when it makes sense to concentrate dollars where there is the greatest chance of success. To take advantage of the benefits of these panels, however, we need more and better information about them to gauge their value to us. Incorporating some of the aforementioned suggestions would be a starting point.