Editor’s note: Darrin Helsel is the director of quantitative programs at Vivisum Partners, a Durham, N.C., research firm. He is also co-founder and principal of Distill Research, Portland, Ore. This article is an edited version of a blog post that originally appeared on the Vivisum blog under the title “Junk in, junk out: the consequences of poorly-designed survey instruments.”
Poorly designed survey instruments yield less-than-reliable data. There. I said it. DIY platforms are springing up all over, allowing any Tom, Dick or Harry to toss a survey into the ether and collect data to inform their business decisions. What a thrill it is to collect this data for pennies per respondent! But unless your questionnaire is designed well (more on what that means in a moment), the data you collect could be next to useless. Or worse, just plain wrong.
We all follow our own nature and, for many of us, our occupation reflects what’s in our nature to do:
• A marketer’s job is to educate and inform customers and prospects about the company’s value proposition. Hence, when commissioning or conducting research, it’s in their nature to position the value proposition of the product or service they’re marketing favorably, regardless of the goal of the research.
• It’s in a product designer’s nature to create from the inputs that feed their inner muse. So when commissioning or conducting research, it’s in their nature to collect data that supports that muse, regardless of the goal of the research.
• It’s in a product manager’s nature to shepherd products to market, managing costs and processes to get there as efficiently as possible. So when commissioning or conducting research, it’s in their nature to minimize impediments to that process, regardless of the goal of the research.
Market research, by comparison, is guided by the scientific method. It’s in a researcher’s nature to ask questions in a detail-oriented, scientific fashion. As we know from middle-school science class, the scientific method organizes curiosity into experiments designed to reject a null hypothesis. In doing so, the researcher follows a methodology that ensures the experiment is repeatable with the same subjects and reproducible with a new set of subjects.
Repeatable. If the same subject is asked the same question six, 24 or 48 hours later, the answer will be the same.
Reproducible. The same survey instrument, asked of a different group of respondents drawn with the same sample parameters, yields the same proportions of responses.
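To make those two properties concrete, here’s a minimal sketch in Python – the data and helper names are hypothetical, not part of any survey platform – of how each might be checked after a study is fielded:

```python
# A minimal sketch (hypothetical data). Repeatability: the same respondents
# answer the same question again and give the same answers. Reproducibility:
# a second sample drawn with the same parameters yields similar proportions.

from collections import Counter

def test_retest_agreement(first_wave, second_wave):
    """Share of respondents giving the same answer both times (paired lists)."""
    matches = sum(a == b for a, b in zip(first_wave, second_wave))
    return matches / len(first_wave)

def response_proportions(responses):
    """Proportion of respondents choosing each answer option."""
    return {option: n / len(responses) for option, n in Counter(responses).items()}

# Hypothetical answers to the same question, asked 24 hours apart
first_ask  = ["Agree", "Agree", "Neutral", "Disagree", "Agree", "Neutral"]
second_ask = ["Agree", "Agree", "Neutral", "Disagree", "Neutral", "Neutral"]

print(test_retest_agreement(first_ask, second_ask))  # ~0.83 -> fairly repeatable
print(response_proportions(first_ask))  # compare against a fresh sample's proportions
```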
Hence it’s in the researcher’s nature to ask questions and record answers using a methodology that ensures valid data – data that doesn’t simply teach the marketer’s value proposition; data that represents what the market thinks of a product’s design, whether or not it fits the designer’s musings; and data that may well disrupt the product manager’s processes. Valid data is representative of a given market or audience; it’s unbiased and objective; and it is repeatable and reproducible, which demonstrates its validity.
To ensure these qualities in the data, researchers place great emphasis on questionnaire design. Why? We have a saying: junk in, junk out. Without a quality design that follows best practices, we can’t ensure the quality of the data on the back end of the study. Here are five (of the many) best practices we follow when designing questionnaires:
Don’t confuse your respondents. This seems like a no-brainer but you’d be surprised at how many non-researchers do it effortlessly. For instance, an easy way to confuse respondents is to force them to pick a single response when more than one response describes them or their experience.
The result is cognitive dissonance, a term coined by social psychologist Leon Festinger in the 1950s for the mental stress and discomfort a person feels when holding two or more contradictory beliefs, ideas or values at the same time. In survey science, cognitive dissonance can lead to two outcomes: 1) respondents get frustrated and quit the survey, lowering your response rate and risking unmeasured bias in your results; or, worse, 2) they get frustrated and angry and populate your survey with bogus answers. Hence, great care is required to create response lists that are mutually exclusive and offer options that describe the experiences of 80 percent of your respondents. The other 20 percent is typically reserved for “Other, specify” write-in responses.
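As a rough illustration of that 80/20 rule, here’s a minimal Python sketch (the data and labels are hypothetical) that flags a response list whose write-in option is doing too much work:

```python
# A minimal sketch (hypothetical data) of the 80/20 check described above:
# if "Other, specify" captures much more than 20 percent of answers, the
# response list isn't covering respondents' actual experiences.

from collections import Counter

def other_share(responses, other_label="Other, specify"):
    """Fraction of respondents who fell through to the write-in option."""
    return Counter(responses)[other_label] / len(responses)

answers = ["Price", "Quality", "Other, specify", "Price", "Service",
           "Other, specify", "Quality", "Price", "Other, specify", "Service"]

share = other_share(answers)
if share > 0.20:  # the ~20 percent typically reserved for write-ins
    print(f"Revisit the response list: {share:.0%} chose 'Other, specify'")
```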
Know what you’re measuring. Like a muddled response list, a double-barreled question muddies what you’re measuring. When a question incorporates two or more phenomena, which one does the response represent? A good rule of thumb is 1:1 – one question, one metric.
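If you build or review questionnaires programmatically, a crude lint can surface candidates for the 1:1 rule. The sketch below is a heuristic only – the pattern and example questions are hypothetical – and anything it flags still needs a human read:

```python
# A rough heuristic sketch: flag question stems where a conjunction may be
# joining two separate things to measure (double-barreled questions).

import re

CONJUNCTION = re.compile(r"\b(and|or)\b", re.IGNORECASE)

def maybe_double_barreled(stem):
    """True if the stem contains a conjunction worth a second look."""
    return bool(CONJUNCTION.search(stem))

print(maybe_double_barreled("How satisfied are you with our price and our quality?"))  # True: split it in two
print(maybe_double_barreled("How satisfied are you with our price?"))                  # False
```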
Ground behavioral questions in a distinct window of time. Prior to the emergence of big data, which captures behaviors within a given sphere (credit card transactions, phone calls, interactions with health care professionals, etc.), measuring much of our behavior required asking questions in a survey. The pitfall here is our notoriously faulty memory. Commentators in numerous fields have pontificated on the personalization of memory: as soon as we see or do something, that action gets interpreted by our brains, and it’s this interpretation that makes up our memory – not the action itself. The distortion only grows as the action recedes further into the past. Hence, when asking about a behavior, it helps to ground the question in a time frame that’s as immediate as possible while still being long enough that respondents have likely done the behavior often enough to yield useful data. For small behaviors, a day, a few days or a week may be suitable. For bigger behaviors, one, three or six months may be more appropriate. And avoid asking about “average” behaviors like you’d avoid a zombie apocalypse.
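As a back-of-the-envelope illustration of matching the window to the behavior, here’s a hypothetical Python sketch; the frequency thresholds are assumptions for illustration, not industry standards:

```python
# A minimal sketch (hypothetical thresholds) of picking a recall window that is
# as immediate as possible while still capturing enough of the behavior.

def recall_window(times_per_month):
    """Pick the shortest window likely to capture at least a few occurrences."""
    if times_per_month >= 20:    # near-daily behaviors: ask about the past day or week
        return "past week"
    elif times_per_month >= 4:   # roughly weekly behaviors
        return "past month"
    else:                        # infrequent behaviors
        return "past three to six months"

print(recall_window(30))  # e.g., checking a phone app -> "past week"
print(recall_window(1))   # e.g., visiting a doctor    -> "past three to six months"
```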
Ask questions that your respondents can answer. By that I mean: if respondents indicate they’ve never used a product, don’t follow up with a question about their satisfaction with said product. Most, if not all, Internet survey platforms offer filtering (often called skip logic). Filter out respondents who shouldn’t be asked a question given their previous responses, as in the sketch below. You’ll minimize frustration and maximize the validity of the data you collect.
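Here’s a minimal sketch of that kind of filter in Python; real platforms configure this in their own interfaces, and the question IDs here are hypothetical:

```python
# A minimal sketch of skip logic: respondents who say they have never used
# the product never see the satisfaction item.

def next_question(answers):
    """Decide which question a respondent should see next."""
    if answers.get("ever_used_product") == "No":
        return None  # never used it -> never ask about satisfaction
    return "satisfaction_with_product"

print(next_question({"ever_used_product": "Yes"}))  # satisfaction_with_product
print(next_question({"ever_used_product": "No"}))   # None: question is skipped
```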
Seek opportunities NOT to bias your respondents. Biases, both measured and unmeasured, are the bane of survey data. One source of bias that’s easily spotted and rectified is the way you phrase your questions. Rating questions, for instance, are especially susceptible to being asked in a biased way. As a rule of thumb, always mention both ends of the scale in the question stem so that, even unconsciously, you permit the respondent to consider both sides: ask “How satisfied or dissatisfied are you with the service?” rather than “How satisfied are you with the service?” Mention only one side and it’s almost as if you control respondents’ eyes: they go straight to the end of the scale you named and select their answer there.
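For programmatically generated surveys, a crude check can catch stems that name only one end of the scale. This sketch uses simple substring matching on hypothetical labels, so treat it as a starting point rather than a guarantee:

```python
# A rough heuristic sketch: check that a rating question's stem names both
# ends of the scale (labels and stems are hypothetical).

def mentions_both_ends(stem, low_label, high_label):
    """True if both scale endpoints appear in the question stem."""
    s = stem.lower()
    return low_label.lower() in s and high_label.lower() in s

print(mentions_both_ends("How satisfied or dissatisfied are you with the service?",
                         "dissatisfied", "satisfied"))  # True: balanced stem
print(mentions_both_ends("How satisfied are you with the service?",
                         "dissatisfied", "satisfied"))  # False: one-sided stem
```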
Just as each occupation follows from a person’s nature, it’s also part of our shared DNA to respond positively to content that resonates with us. That is, we seek to understand the world through our own image and experience. When presented with a question, we look for our own answer in it. It’s how we have survived these millennia – by finding a common language with which to create community. We learned early on that there’s power in numbers. These best practices will help you collect the repeatable and reproducible numbers you need to make the decisions you have to make.