Best practices for designing surveys that deliver quality data
Editor’s note: Chris Handford is the director and co-founder of Waveform Insight. This is an edited version of an article that originally appeared under the title, “10 ways to ensure your online surveys deliver quality data.”
Surveys remain an invaluable tool for helping brands understand their audiences and find new opportunities. As creating online surveys becomes more democratized, there is an increasing danger of poor question design and UX leading to bad data. Bad data, in turn, leads to bad decision-making and costly, misdirected choices.
In this post I’ll look at 10 common pitfalls that I see regularly, and increasingly, in survey design.
Have we really had enough of survey experts?
In the early 2000s, creating and launching online surveys was typically the preserve of highly trained researchers. Starting out in one of the big research agencies of the day, you would put in months of training before being let loose on a live survey. Training covered different question types and their purposes (categorical, dichotomous, ordinal, Likert, semantic differential, conjoint, etc.), question psychology, sampling techniques, survey UX, and the cleaning and processing of data. Good survey design was both a science and an art that took years to master.
Today, anyone can launch a survey. No data science training is needed. No survey craft to learn. Just a credit card and your choice of many self-serve survey platforms. But as great as removing barriers to research is, with more surveys come more bad surveys. And more bad data. Ask the wrong type of question – or ask a question in the wrong way – and the resulting data will lead to poorly informed (and potentially costly) decisions.
But many survey errors are easy to spot and avoid if you know what to look for. In this article, I’ll share a list of 10 survey design tips for delivering good data.
1. Avoid subjective or unclear answer lists and scales.
Watch out for fuzzy answer lists, including numbered rating scales, star ratings or answers using words like “regularly” and “occasionally.” These types of answers are all open to interpretation and, despite widespread use, should be avoided whenever possible. If your answer list isn’t fully defined, or it could mean different things to different people, it’s going to be hard for participants to answer accurately and equally hard to make sense of the resulting data.
Define every potential answer in a list or scale shown to participants. This makes it easier for the participant, researcher and research audience to understand. If you can’t do this, the question is either incomplete or unbalanced, or it contains redundant answers – so it is time to redesign.
The most familiar survey question of all, the Net Promoter Score (NPS), is a major offender here. Despite its success, it’s something of a poster boy for bad question design. Most of us will have completed an NPS question asking us to rate our likelihood to recommend a product or service on a scale where 10 is “extremely likely” and zero is “not at all likely.” In most cases the rest of the scale is not defined, with no indication that seven points on the scale, from zero to six, are negative (detractors) and just two, nine and 10, are positive (promoters). Most people would logically think a recommendation score of eight out of 10 is a very positive result. But with NPS, a score of eight is considered passive; it carries no real value. While stakeholders might demand NPS as a KPI, you may want to reconsider making any big decisions based on NPS alone.
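To make the scoring arithmetic concrete, here is a minimal Python sketch of the standard NPS calculation (promoters score nine or 10, detractors zero to six, passives seven or eight); the sample ratings are invented for illustration.

```python
def nps(scores):
    """Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no scores provided")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# An 8 counts toward the total but boosts neither bucket - it is simply "passive".
sample = [10, 9, 8, 8, 7, 6, 3]   # invented example ratings
print(round(nps(sample), 1))      # 2 promoters, 2 detractors out of 7 -> 0.0
```

Run against that sample, the two eights and the seven contribute nothing to the score – exactly the behavior most participants would not expect.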
How do we fix it? Taking the standard NPS question (see QA below), I suggest an alternative that defines every answer point, uses a balanced answer scale, removes some of the unnecessary granularity in an 11-point scale and uses “definitely” to provide a clearer interpretation of the intention to recommend. I have also provided context for the question wording (see tip seven).
A typical NPS advocacy question:
QA: How likely is it that you would recommend brand X to a friend or colleague?
0 (Extremely unlikely)
1
2
3
4
5
6
7
8
9
10 (Extremely likely)
Advocacy (asked a better way):
QB: If you were asked to recommend a [insert category/purpose], how likely would you be to recommend brand X?
I would …
Definitely recommend
Probably recommend
Be unsure whether to recommend
Probably not recommend
Definitely not recommend
2. Balance your answer scales.
Any good survey question should aim to collect unbiased data, and question design should not steer the respondent to answer one way more than another. If you’re asking people to rate the quality of a concept, product, experience or brand, they should be given an equal chance to answer positively and negatively.
But more and more we’re seeing this fundamental of good survey design overlooked or, worse, purposefully gamed. There is an increasing trend toward scales biased to the positive across customer satisfaction studies, purchase intention questions and opinion polls. When you see more positive answer options than negative, you should take the resulting data with a pinch of salt. Think of it as having your homework marked by a teacher who can only award an A or a B.
Let’s look at a real-life example (see below: QA) used by Google to ask users of its news feed whether the content shown fits their interests. You don’t have to be a trained quant researcher to spot a couple of things wrong with this question. The first is the obvious bias toward positive answers (three options) vs. negative (one option). The other is the use of both “excellent” and “great.” Having both only makes sense if there is an attempt to boost positive responses.
QA. How is this recommendation?
Excellent
Great
Good
OK
Bad
How do we fix it? There are a few different ways you could balance a scale like this. I prefer either a five-point Likert scale (QB) where OK sits in the middle or, if the UI allows, a four-point scale tailored more to the relevance purpose of the question (QC).
Question asked a better way – option one:
QB. How is this recommendation?
Excellent
Good
OK
Poor
Very poor
Question asked a better way – option two:
QC. How relevant is this story recommendation?
Very relevant
Quite relevant
Not very relevant
Not at all relevant
3. Remove leading questions from surveys.
Leading questions push participants toward answering in a certain way. Often these are innocent, stemming from an unconscious bias born of being too close to the product, service or brand being researched, or from a lack of research expertise. I’ve seen popular survey SaaS platforms that come loaded with biased template questions that get used without much thought (e.g., How well do our products meet your needs?). At worst, these biased questions can be designed to deliver a preferred response for a research sponsor.
A recent example I saw was a survey that asked visitors of a well-known car review website the following question: “How long would you be happy to wait for a car you wanted to be delivered from a dealer?” The answer scale started at one month and went all the way to 12 months. Personally, I wouldn’t expect to wait more than one week for a car delivery, but the shortest available time frame forced me into saying one month. There was no option to choose anything shorter. You can already see the chart headline: “100% of car buyers are happy to wait a month” for their cars to be delivered. Nope. They are not.
Making important decisions based on (mis)leading data is not going to help anyone in the long run. Good research should always seek an independent view, with research clearly separated from the team responsible for creating a product, service, advert, UX/CX, etc. If that’s not possible for you, check your questions to make sure they are not subject to bias (conscious or unconscious) and test your surveys in person with other people to make sure the answers they want to give are available.
Below are a handful of leading questions (QAs) I have seen doing the rounds and some matched alternatives (QBs) that should be less biased.
QA1. What do you like most about this design/concept/etc.?
QB1. What, if anything, did you particularly like or dislike about this design/concept/etc.?
QA2. How well does this product/service/feature meet your needs?
QB2. How would you rate this product’s/service’s/feature’s ability to meet your needs?
QA3. How much would you pay for this product/service?
QB3. Would you buy this product/service if it were priced at £XX/$XX? [Repeat with alternate price points.]
QA4. How much do you agree that [insert statement]?
QB4. To what extent would you either agree or disagree that [insert statement]?
QA5. How easy did you find it to complete this task on the app?
QB5. How would you rate the app on ease of completing this task?
QA6. How important is it to you that [insert statement]?
QB6. Is [insert statement] important or unimportant to you? How much?
4. When designing a survey, use complete and succinct answer lists.
To understand the reasons behind an opinion, decision or behavior, surveys often contain multiple-choice questions with a list of potential reasons for participants to choose from (e.g., What’s most important to you when choosing which mobile phone handset to purchase?). These questions often take the most thought and insight to compile. They need to be derived from hypotheses and insight. Often survey writers don’t do enough thinking, or research, and miss potential key drivers. Others go to the other extreme and create a wordy list of 20 options to choose from. Both approaches deliver poor data. A balance is needed – you want to cover the most important themes and discover new trends, but if you go too granular, no participant will bother to read all your possible answers.
How do we fix it? To build a complete answer list, the ideal is to draw on existing insight or qualitative research. If time or budget does not allow for this, do some desk research, brainstorm possible answers with your colleagues or use ChatGPT to create a starter list. Then refine the list into clear, snappy and consumer-friendly answers. Do this and you’ll probably cover most bases, and you can then fill the gaps with an open-ended “another reason” response option to make sure you haven’t missed something important, or a new trend.
While we’re looking for exhaustive answer lists, we also want them to be read. Keep answer lists to a maximum of 10 relevant options. Where possible, group answers into macro themes and paraphrase reasons rather than write out each separately with full and perfect grammar. For example, group multiple answers like “It was too expensive,” “I didn’t have enough money,” “It wasn’t good value for money,” “I was waiting for a sale,” etc., into one related answer like “too expensive/poor value.”
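As a small illustration of that grouping step, here is a hypothetical Python sketch that maps verbose, overlapping answers onto macro themes, with an open “another reason” fallback; the theme names and mappings are invented for the example.

```python
# Hypothetical mapping of verbose, overlapping answers onto macro themes.
THEME_MAP = {
    "It was too expensive": "Too expensive / poor value",
    "I didn't have enough money": "Too expensive / poor value",
    "It wasn't good value for money": "Too expensive / poor value",
    "I was waiting for a sale": "Too expensive / poor value",
    "Delivery took too long": "Delivery issues",      # invented second theme
    "The item arrived damaged": "Delivery issues",
}

def to_theme(answer: str) -> str:
    # Anything unmapped falls back to the open-ended catch-all option.
    return THEME_MAP.get(answer, "Another reason (please tell us)")

print(to_theme("I was waiting for a sale"))  # Too expensive / poor value
print(to_theme("I changed my mind"))         # Another reason (please tell us)
```

The catch-all keeps the open “another reason” option doing what it should: catching anything the curated list misses.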
5. Keep the survey relevant and personalized.
The best surveys are personalized to the user, with dynamic techniques used to keep content relevant and avoid wasting people’s valuable time. This helps keep participants engaged, reduces drop-outs and improves the quality of responses. But many surveys ask all questions to all participants and ask them in the same way. This is a frustrating and dull experience that is likely to lead to poor data quality.
How do we fix this? The best survey platforms will have dynamic features like “piping” and “routing.” Routing ensures you only ask questions on the subjects relevant to participants. This can be done on a question-by-question basis using previous question responses or from customer data uploaded from the sample file as hidden variables.
Piping is used to reference previous answers, making the survey feel like a more human, contextualized and engaging process – more like a conversational interview. At its most basic, it can be just piping a previous answer into the question wording (e.g., Why did you choose [Samsung] for your current mobile phone?), but it can also be used to dynamically change the list of answers and the context of questions (e.g., You said you were going to buy a new mobile phone [in the next six months]; how likely are you to buy [Samsung] again?).
The five questions below demonstrate an example survey flow that combines routing and piping to keep survey questions (and data) relevant across a multi-category product survey; a short code sketch of the same logic follows the example.
Examples of routing and piping in surveys:
Q1. Which of the following products, if any, do you expect to buy in the coming 12 months?
Answer list including mobile phone, tablet, laptop and none of these. The participant chooses “mobile phone.”
Q2. Which of the following [mobile phone] brands had you heard of before today?
Pipe in category. Answer list of 10 mobile phone brands. The participant chooses five brands.
Q3. And, which of these brands would you consider for your next [mobile phone]?
Pipe in just the five brands chosen in Q2 as the list of answers. Participant chooses “Samsung.”
Q4. Which of the following models of [Samsung] [mobile phone] do you find most appealing?
Show model list/image selection of Samsung phone models.
Q5. Why did you choose the [Galaxy Fold] as the most appealing [Samsung] [mobile phone]?
Pipe in chosen model name, image and product category.
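As a rough illustration (not tied to any particular survey platform), the Python sketch below wires that same flow together; the brand list, model names and the hard-coded “participant” answers are invented for the example.

```python
# Illustrative routing-and-piping sketch; brands, models and answers are invented.

PHONE_BRANDS = ["Samsung", "Apple", "Google", "Sony", "Xiaomi",
                "Motorola", "OnePlus", "Nokia", "Oppo", "Honor"]

def ask(question, options, chosen):
    """Stand-in for the survey UI: show a question and return the participant's selection."""
    print(f"{question}\n  options: {options}\n  chosen:  {chosen}\n")
    return chosen

# Q1 - routing variable: which categories are relevant to this participant?
categories = ask("Q1. Which of the following products, if any, do you expect to buy "
                 "in the coming 12 months?",
                 ["Mobile phone", "Tablet", "Laptop", "None of these"],
                 chosen=["Mobile phone"])

if "Mobile phone" in categories:  # routing: skip this whole block if the category is irrelevant
    category = "mobile phone"

    # Q2 - pipe the category into the question wording.
    aware = ask(f"Q2. Which of the following {category} brands had you heard of before today?",
                PHONE_BRANDS,
                chosen=["Samsung", "Apple", "Google", "Sony", "Nokia"])

    # Q3 - pipe only the brands chosen at Q2 in as the answer list.
    considered = ask(f"Q3. And, which of these brands would you consider for your next {category}?",
                     aware,
                     chosen=["Samsung"])

    # Q4/Q5 - keep piping the chosen brand and model onward.
    brand = considered[0]
    model = ask(f"Q4. Which of the following models of {brand} {category} do you find most appealing?",
                ["Galaxy S24", "Galaxy Fold", "Galaxy A55"],
                chosen=["Galaxy Fold"])[0]
    ask(f"Q5. Why did you choose the {model} as the most appealing {brand} {category}?",
        options="open-ended", chosen="(open text)")
```

In a real project, these branches and substitutions would come from your survey platform’s own routing and piping features rather than hand-written code.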
6. Surveys should have consistent, simple and familiar language.
Using simple and clear language helps ensure participants quickly understand the question’s purpose and provide accurate responses. People will avoid reading long-winded or complicated text that adds to their cognitive load. It’s always best to keep question and answer wording short and succinct.
It’s easy to become too close to a subject and assume your survey participants will know the latest acronyms, innovations and category quirks. But it’s best to avoid jargon or industry speak with general consumers. Examples of things to avoid can come from the market category (e.g., Internet of Things, SaaS), business language and acronyms, and even research survey jargon itself (What does “somewhat applies” actually mean?).
It’s also important to use familiar language. Asking about frequency of use as, “How often do you [insert activity]?” is much simpler than, “With what frequency do you [insert activity]?” Market researchers can be guilty here too. Take the “other (specify)” answer that appears in so many online surveys today; it means nothing to the average person. To fix this, use contextual answers like “another reason (please tell us)” or “in another way (please tell us),” which make more sense and encourage better engagement with the question.
7. Offer context and explanations within the survey.
Let’s go back to the NPS question. Another of its failings is that it is typically not framed in a scenario. It usually just jumps straight in: “How likely are you to recommend Microsoft?” Without a built-in scenario, I’m always going to give a low score no matter how great the brand is. People just don’t go around recommending brands. But it would be a different result if the question read, “If you were asked by a friend which brand they should use for creating a spreadsheet, how likely would you be to recommend Microsoft?” In that case, I’d be much more positive and probably give a strong eight (not that it would help boost the NPS score!).
How do we fix this? Whenever possible, a survey question should be based on actual behavior. Including a specific and framed scenario will result in better quality data than asking a question without context or based on behavior assumptions.
8. Ask simple questions.
Sometimes what we’d like to know as researchers can’t be framed or answered in a simple survey question. We know that people don’t always behave as they say they do or will in surveys. But this is not due to some attempt to mislead or misdirect. It’s more likely that the survey questions have unrealistic expectations of what a participant can accurately know, calculate, recall or predict.
Questions that involve getting out a calculator, bank statement, comparison website, diary or even a proverbial crystal ball should be avoided. Below are some examples of the types of questions that should be dismissed:
Q. How much did you personally spend on technology products in 2022?
Issues: Time period is too big, too long ago and category is undefined.
Q. How much more or less do you expect to spend on technology products in 2023 vs. 2022?
Issues: Difficult to predict. Difficult to calculate.
Q. How many hours do you expect to use social media this month?
Issues: Difficult to predict. Difficult to calculate. Lack of behavioral awareness.
Q. How often do you check your social media feeds each day?
Issues: Micromoments are often not recalled. Answer will be different day to day.
Q. Which of the following product ideas do you think will be most successful?
Issues: Assumes market knowledge. No context on pricing, sales channels, etc.
Q. How often would you buy this product if it was available today?
Issues: No competitive or promotional context. Impossible to predict.
Typically, these types of questions should be avoided altogether but sometimes there are workarounds:
- Instead of asking questions about future predictions, ask about behavior in the recent past. For example, “How many hours did you spend watching streaming video services last weekend?” vs. “How many hours do you plan to watch streaming video services this weekend?”
- Try to break time periods into smaller chunks to get a more accurate response. For example, “How much time did you spend on social media yesterday?” vs. “How much time did you spend using social media in the past week?”
- Ask about personal preferences over market predictions. For example, “Which of the following do you find most appealing?” vs. “Which of the following do you think would sell the most?”
9. Don’t let quality checks become quality fails.
Understanding and adapting to learned behavior is crucial in good UX. People expect sites and apps to work the same way as others they have seen and used. Online survey completion is no different.
Building too many clever quality checks and trick questions into a survey to catch out rogue participants can sometimes do more harm than good. Rotating answer scales (e.g., putting negative answers first instead of the expected positive) is common practice for checking that people are paying attention, but it can backfire. Participants naturally expect answers to run positive to negative – top to bottom, left to right. Mess with this expectation and some people will unintentionally give the wrong answer. Follow-up open-ended questions we’ve asked when people rated an experience negatively (on a rotated question) consistently highlight this as enough of an issue for us to ditch what we once thought was good practice.
Use rotations sparingly and do not flip scale order for the same participant within the same survey. When using questions designed to catch untrustworthy participants, it’s best not to use an important question or KPI. Additional steps such as asking the same question in two different ways at the start and end of the survey, using CAPTCHA-inspired image checks and monitoring response speed will help filter out bogus responses without ruining crucial data. But even then, use these sparingly so as not to patronize, confuse or frustrate survey participants.
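For example, a minimal post-fieldwork screen for the speed and consistency checks might look like the Python sketch below; the field names and thresholds are assumptions to be tuned to your own survey length and data.

```python
# Minimal post-fieldwork quality screen. Field names and thresholds are assumptions.

MIN_SECONDS = 120       # flag anyone racing through a ~10-minute survey in under 2 minutes
MAX_SCALE_DRIFT = 1     # the question repeated at start and end should roughly agree

def looks_bogus(response):
    """True if a response fails the basic speed or consistency checks."""
    too_fast = response["duration_seconds"] < MIN_SECONDS
    inconsistent = abs(response["satisfaction_start"] - response["satisfaction_end"]) > MAX_SCALE_DRIFT
    return too_fast or inconsistent

responses = [   # invented example data
    {"id": 1, "duration_seconds": 640, "satisfaction_start": 4, "satisfaction_end": 4},
    {"id": 2, "duration_seconds": 75,  "satisfaction_start": 5, "satisfaction_end": 1},
]

clean = [r for r in responses if not looks_bogus(r)]
print([r["id"] for r in clean])  # -> [1]
```

Anything failing the screen can be reviewed or excluded before analysis, leaving the KPI questions themselves untouched.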
10. When designing a survey, follow UX best practices.
Like any online experience, following UX best practice, heuristics and testing is crucial for a good survey experience and accurate engagement. Surveys that are easy to comprehend, quick to complete, visually appealing and aligned with expectations will lead to higher response rates and more insightful, accurate data.
Always aim to do the following when designing surveys:
- Think mobile-first but be responsive: Any good survey will aim to be inclusive within the target audience. Minimizing participant restrictions based on device, time or location of completion is essential for this. To achieve this, create responsive surveys that automatically adjust to device/screen size. And with two in three survey responses now typically coming from mobile, designing for and testing on mobile is the preferred route.
- Keep the user interface clean and simple: A minimal but engaging UI that puts the focus on the questions, answers and content being tested will perform best. Avoid cluttering the screen with too many elements, using background images and excessive branding. Use a clear font and ensure a good contrast between text and background.
- Use images to support easy completion: Visual aids such as images, buttons, icons and text enhancements, when used well, can aid understanding and reduce cognitive effort. For example, using icons to represent product categories, brand logos instead of names, button-based answers instead of text checkboxes and bold text to signify important elements will all help.
- Help users orient themselves within the survey: Tell users how long the survey will take. Break the survey into sections with a clear purpose and transparently guide people through completion (e.g., “To understand the needs of different people we have a few profile questions”). Use a progress bar or percentage indicator to show participants how far they have come and how much they have left to complete.
- Trial and test the survey: Always test the survey with a small group of people, across a range of devices and screen sizes, to identify any usability issues and ensure the survey is user-friendly. Your testers should ideally include people who represent the target audience and are not part of the project or client team. Gather feedback on navigation and comprehension, and act on it to optimize the experience.
When survey designers follow best practices, online surveys have a better chance of delivering trusted and insightful data, helping uncover new opportunities, unlock potential and support better decision-making. Surveys can provide rich and robust data based on the people who matter most – your audience.