Editor’s note: Rasto Ivanic is CEO of GroupSolver, a San Diego, Calif.-based market research firm. This is an edited version of a post that originally appeared under the title, “Protect the quality of your survey data from speedsters, cheaters and bots by implementing smart trap questions.”
Gathering high-quality responses from online surveys is a prerequisite for building customer insights and making sound business decisions. Meeting this prerequisite, however, can be a major challenge. This is especially true when respondent identities can't be verified for privacy reasons or when respondents are sourced from broad general-audience panels. While collecting data from a reputable sample provider and using good survey design techniques goes a long way toward ensuring data quality, it is often not enough. Inevitably, some survey respondents will set out to complete survey questions as fast as they can, particularly when a monetary incentive is involved. Not filtering out unqualified, inattentive or fraudulent respondents will at a minimum add unnecessary noise to collected data and at worst invalidate your survey's findings.
Based on our data, implementing trap questions and attention screeners has resulted in catching and removing 15% of respondents, on average, with some surveys seeing rates of unqualified respondents well over 30%. In this article, I will share tips for minimizing the impact of such respondents.
Implementing trap questions
A few relatively straightforward study design techniques can help manage the quality of survey respondents. One of them is inserting attention checks and trap questions at strategic points in the survey. Trap questions, or attention checks, are questions inserted into a survey to filter out respondents who are not answering honestly or carefully. Such questions should be easy and obvious to answer; they are not meant to test or trick the respondent's knowledge. I recommend using trap questions in every survey. In my experience, even high-cost specialized panels suffer from inattentive respondents.
Attention checks can be built from both open-ended and multiple-choice questions. Both types work to great effect, but multiple-choice questions are typically a little easier to implement: building quality checks into a survey using open-ended questions usually requires natural language processing. Various styles of trap question can be used in any survey, each tailored to test a different type of problematic respondent behavior.
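To make the open-ended side concrete, here is a minimal sketch, in Python, of the kind of heuristic pre-filter an NLP pipeline might start with. The junk list, thresholds and function name are illustrative assumptions, not a description of any particular platform's method:

```python
import re

# Hypothetical low-effort answer check for open-ended questions.
# Thresholds and the junk list below are illustrative assumptions.
JUNK_ANSWERS = {"n/a", "none", "nothing", "idk", "asdf", "good"}

def looks_low_effort(answer: str, min_words: int = 3) -> bool:
    """Flag an open-ended answer that is likely low effort or gibberish."""
    text = answer.strip().lower()
    if text in JUNK_ANSWERS:
        return True
    words = text.split()
    if len(words) < min_words:          # too short to be substantive
        return True
    if re.search(r"(.)\1{4,}", text):   # keyboard mashing, e.g. "aaaaa"
        return True
    if len(set(words)) == 1:            # the same word repeated
        return True
    return False

print(looks_low_effort("asdf"))  # True
print(looks_low_effort("The checkout flow was confusing on mobile."))  # False
```

In practice a heuristic like this would only be a first pass; more sophisticated language models can catch plausible-looking but off-topic answers that simple rules miss.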
Catching low-quality respondents
My team recommends using more than one trap question in surveys, for the simple reason that even respondents picking answers at random get them right sometimes. Implementing more than one attention check diminishes the chances of an inattentive respondent completing the full survey. Based on data from a sample of our recent studies, we catch almost 12% of all survey takers on the first trap question in a study. When we ask subsequent traps, an additional 7% of respondents are caught by the second and just under 5% by the third.
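The arithmetic behind that strategy is simple: a respondent clicking at random on a trap with one correct answer among six choices passes it about 17% of the time, but passes three independent traps less than 0.5% of the time. A quick sketch (the six-option assumption mirrors the examples below; real inattentive respondents are messier than pure random clickers):

```python
# Back-of-the-envelope odds that a purely random clicker survives the traps.
# Assumes one correct answer among n_options choices and independent guesses.
def pass_probability(n_traps: int, n_options: int = 6) -> float:
    return (1 / n_options) ** n_traps

for n in range(1, 4):
    print(f"{n} trap(s): {pass_probability(n):.2%} chance of guessing through")
# 1 trap(s): 16.67% chance of guessing through
# 2 trap(s): 2.78% chance of guessing through
# 3 trap(s): 0.46% chance of guessing through
```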
For example, we deployed three trap questions in our recent Election 2020 study (N = 332, general population sample from a panel). One trap question asked: “With which species do you identify?” and presented the choices as Rock, Bunny Rabbit, Human, Fish, Magic Carpet or Vampire. Any respondent who is paying attention would choose the obviously correct answer: Human. But 68 respondents answered the question incorrectly and were terminated from the study.
In the same study, we also asked: “Starting counting with Monday, what is the third day of the week?” The choices were: Seven, Saturday, Wednesday, Excellent or Not Sure. The correct choice, Wednesday, is easy to pick only if one takes the time to read the question and the options, especially since the choices are a deliberately random mix. Eighteen respondents answered the question incorrectly and were terminated from the study.
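For readers screening raw data exports themselves, the termination rule for traps like these reduces to comparing each response against an answer key. Below is a minimal sketch, assuming a flat list of response records; the field names and the terminate-on-any-miss rule are hypothetical, and most survey platforms implement this with built-in logic:

```python
# Illustrative answer key for the two traps described above.
TRAP_KEY = {
    "species_check": "Human",
    "third_day_check": "Wednesday",
}

def should_terminate(response: dict) -> bool:
    """Terminate a respondent who misses any trap question."""
    return any(response.get(q) != correct for q, correct in TRAP_KEY.items())

respondents = [
    {"id": 1, "species_check": "Human", "third_day_check": "Wednesday"},
    {"id": 2, "species_check": "Magic Carpet", "third_day_check": "Wednesday"},
]
kept = [r for r in respondents if not should_terminate(r)]
print([r["id"] for r in kept])  # [1]
```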
These trap questions, along with one other, terminated a total of 183 respondents, or 35% of everyone who started the study (the N = 332 above counts only completed responses, so roughly 515 people entered the survey). While this was an unusually high rate of bad respondents, it demonstrates the importance of quality-control measures inside the survey.
Safety net for data quality
Market research takes time, money and effort, and low-quality or dishonest survey answers can quickly ruin the integrity of its insights. Ensuring that survey responses come from qualified and attentive respondents is essential when collected data informs critical business decisions. While no technique is 100% effective, even the quick and simple strategy of deploying trap questions will improve the quality of your data and give decision makers confidence that the insights they rely on are valid.