Ensuring reliable online survey results
Editor’s note: Robert Hammond and Claudia Parvanta are the lead co-authors of Caught in the Act: Detecting Respondent Deceit and Disinterest in On-Line Surveys. A Case Study Using Facial Expression Analysis. Hammond is a marketing instructor and director of the Center for Marketing and Sales Innovation at USF. Parvanta is a professor and social marketing concentration lead in the USF College of Public Health.
Online surveys are often used as a key source of customer insight that can help inform and drive important business decisions. Entrepreneurs use them for initial market research. Developers use them to shape and enhance products and services. Marketers use them to refine branding and advertising strategies.
While they’ve become a standard research tool throughout the business life cycle, and increasingly so in a post-pandemic world, it’s important to remember that their value as a strategic compass is only as accurate as the data they yield, or, more precisely, the inputs that make up that data. And that’s where the real challenge lies. Even setting aside the fact that as much as 95% of human decision-making happens below conscious awareness (which surveys cannot measure), surveys alone, without precautions, are subject to disinterest and deceit, both of which can cause significant missteps for a product, a business or even an entire industry.
Take our recent study as a cautionary example. In 2020, we began a methodological study for the Florida Department of Health to evaluate the effectiveness of anti-smoking public service announcements (PSAs). We planned to use technologies in our lab that could evaluate nonconscious responses to these PSAs. When the pandemic hit, iMotions helped us transition to an online platform for facial expression analysis and attention measurement (via head position), along with survey questions on perceived effectiveness.
The data, or the lack thereof for certain respondents, made us suspicious. On inspecting participants’ facial recordings, we discovered that a large percentage of the responses were fraudulent. In one test group, more than half (58%) of responses were submitted by an unknown number of individuals who used VPNs to sidestep the survey platform’s ballot-box-stuffing protection. Others substituted recorded videos or photographs for the live facial video required as part of the study, or switched off the lights in their rooms to avoid being seen. In another test group, head-position data indicated a lack of attention to the survey stimuli, yet the respondents still completed the survey.
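To make that kind of screening concrete, here is a minimal sketch of metadata checks that could surface the patterns we saw. It assumes a hypothetical per-respondent export; the file, column names and thresholds are illustrative, not iMotions’ actual schema or our study’s pipeline:

```python
from collections import Counter
import pandas as pd

# Hypothetical export: one row per respondent, with fields a survey/biometrics
# platform might provide. All column names here are illustrative assumptions.
df = pd.read_csv("responses.csv")  # respondent_id, ip_address,
                                   # face_detected_pct, head_on_screen_pct, completed

flags = pd.DataFrame(index=df.index)

# 1. Repeat submissions: many respondents sharing one IP (e.g., behind a VPN)
#    can indicate ballot-box stuffing that per-browser protections miss.
ip_counts = Counter(df["ip_address"])
flags["shared_ip"] = df["ip_address"].map(lambda ip: ip_counts[ip] > 2)

# 2. No usable face: lights off, a covered webcam, or a photo/recording
#    substituted for a live face tends to show a low face-detection rate.
flags["no_face"] = df["face_detected_pct"] < 50

# 3. Inattention: head turned away from the screen for most of the stimulus,
#    even though the survey itself was completed.
flags["inattentive"] = (df["head_on_screen_pct"] < 60) & df["completed"]

df["suspect"] = flags.any(axis=1)
print(f"Flagged {df['suspect'].sum()} of {len(df)} respondents for review")
```

Flagged cases would still warrant human review; a check like this only prioritizes which recordings to inspect.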
Had these fraudulent responses not been removed from the data, the subjective questions on perceived effectiveness (PE), which used standardized measures and five-point scales, would have yielded misleading results. When ranking the PSAs by their PE, six of the 12 videos studied would have been misjudged. More critically, two of the three top PSAs selected by the valid respondents had Black lead characters; those same PSAs were not ranked in the top three when deceitful responses were included in the sample.
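The ranking comparison itself is simple arithmetic: rank PSAs by mean PE score with and without the flagged respondents and see which positions move. A sketch under the same assumptions (hypothetical file and column names, carrying over the "suspect" flag from the screening step above):

```python
import pandas as pd

# Hypothetical long-format ratings: one row per respondent-PSA pair, with a
# five-point perceived-effectiveness (PE) score and a boolean "suspect" flag.
ratings = pd.read_csv("pe_ratings.csv")  # respondent_id, psa_id, pe_score, suspect

def rank_psas(frame: pd.DataFrame) -> list:
    """Rank PSAs from highest to lowest mean PE score."""
    return (frame.groupby("psa_id")["pe_score"]
                 .mean()
                 .sort_values(ascending=False)
                 .index
                 .tolist())

rank_all = rank_psas(ratings)                         # fraud included
rank_valid = rank_psas(ratings[~ratings["suspect"]])  # fraud removed

# PSAs whose position changes once deceitful responses are excluded
moved = [p for p in rank_all if rank_all.index(p) != rank_valid.index(p)]
print("Top 3 (all data):  ", rank_all[:3])
print("Top 3 (valid only):", rank_valid[:3])
print("Rank changed for:  ", moved)
```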
Fortunately, we caught them. But it leads us to wonder: just how many researchers or marketers rely on data from online surveys, without such checks, for studies or critical business decisions?
Surveys, validation and quality data
We are not suggesting that online surveys can’t be valid or worthwhile. But like any method of studying human behavior and responses, they have strengths and weaknesses. Failing to recognize and account for them leaves a researcher’s conclusions to chance.
The three key takeaways from these findings:
First, researchers must consider the implications of participant compensation. Unless potential respondents are deeply engaged in an issue, it is nearly impossible to get responses without some incentive. We did find that vetted panels, which work out compensation with members on an individual basis, produced fewer fraudulent responses. Community recruitment, by contrast, allowed the survey link to escape into the wild, which is likely where the fraud originated, though valid community-recruited respondents paid closer attention. Each recruitment channel has its pros and cons.
Second, researchers should implement measures that protect the quality of their data. Biometric tools, which are easier and more cost-effective than ever to use and integrate, can help focus attention on the data that matters. They are valuable for identifying and removing fraudulent data, which is crucial to sound business decision-making. Failing to exclude deceitful respondents can set research, branding or marketing efforts on the wrong path, at a real cost in time, money and effort.
Third, researchers need to acknowledge the critical importance of understanding nonconscious responses. After all, most decision-making occurs unconsciously, which puts a premium on being able to understand what a subject can’t or won’t tell you.
Let this serve not only as a cautionary tale but also as a precedent for the judicious use of compensated online surveys, backed by several layers of validation, to inform your most important business decisions.