Love for sale
Editor’s note: Nate Hardy is CEO of Plus Sign Market Research LLC, Media, Pa.
As a researcher, you want to keep response rates up, panelists happy and recruits eager to participate in more of your surveys. However, competition is ever-increasing for panelists’ time, in the form of other research companies, the media, blogs, direct mail and e-mail, to name a few. How do you combat it all? Increase respondent satisfaction.
To find ways to do that, I analyzed metrics, comments and ratings for thousands of surveys over the years. While I and others expected more incentives to be the answer to increasing respondent happiness, my analysis revealed a number of satisfaction drivers that didn’t cost any extra money to improve.
“Can’t buy me love” is how the Beatles song goes. Satisfaction is not all about price (as marketing research often proves). So instead of giving away more money to your research subjects, try this list of 10 ways to increase respondent satisfaction:
1. Make surveys engaging and convenient. Although the content and subject matter may be wildly interesting to you as the researcher or the client, respondents may not feel the same way. People don’t like to be bored. And what do people do when they get bored? They pay more attention to time (I’ll say more about time later).
How do you prevent respondent boredom? According to my research, respondents who reported lower satisfaction complained about survey content. Pep up the content with more images and color. Multimedia studies averaged higher satisfaction scores than other studies.
Remember the competition - other research companies, TV, YouTube, blogs, mail. You have to deliver a worthwhile experience that can generate enough interest for a complete survey. While you’re sprucing up the survey, punch up those headlines for your recruiting invitations. Instead of boring run-on subject lines, make them five-word attention-grabbers like the media and advertisers do.
Convenience drives our society, too. The more convenient it is to enter and answer a survey, the more satisfying it is.
2. Avoid complex questions. They require more thought and work. The more thought and work required, the more time-consuming and inconvenient the experience. Soon, you’ll have a dissatisfied respondent.
You can prevent questions from being too complex by eliminating some research-related work. For example, before the study starts, notify panelists of any items they will need to complete it, which makes participation easier. If you already have certain information on panelists from another data source, simply append it to the study data instead of asking for it in the survey.
Removing complex questions may create the most friction with statisticians, who rely on point-allocation and choice-modeling questions for their data needs. A balance must be struck to ensure that respondents don’t have a negative survey experience.
Complex questions can reduce clarity and increase confusion, leaving the respondent unsure how to answer. It’s best to stick to concise wording along the lines of laser questioning (who, what, where, when, why and how), which asks for specific information about an object without using extra clauses or explanations. This type of question elicits better answers from respondents.
3. Limit questions to less than 50 words. Question length is a significant driver of survey satisfaction, and satisfaction declines more quickly once a question crosses the 50-word mark. Shorter questions are less complex and look easier to answer. For online surveys, many market researchers put only one question on a Web page - now that’s what I call using white space.
The 50-word limit puts pressure on shortening those long lists of answer choices we’ve all seen before. Answer choices alone can easily run over 20 words. Reduce answer choices, or the text for each one, to give respondents less reading work to do.
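For teams that field online surveys programmatically, the 50-word guideline is easy to automate. Here is a minimal sketch in Python (the data layout, helper name and counting rule are my own illustration, not anything from the article) that flags draft questions whose text plus answer choices run over the limit:

```python
# Rough word-count check for draft questionnaires. The 50-word
# threshold comes from the article; everything else is illustrative.

WORD_LIMIT = 50

def flag_long_questions(questions, limit=WORD_LIMIT):
    """Return (index, word_count) pairs for questions over the limit,
    counting the question text plus all of its answer choices."""
    flagged = []
    for i, q in enumerate(questions):
        words = len(q["text"].split())
        words += sum(len(choice.split()) for choice in q.get("choices", []))
        if words > limit:
            flagged.append((i, words))
    return flagged

draft = [
    {"text": "How satisfied were you with the survey?",
     "choices": ["Very satisfied", "Satisfied", "Neutral",
                 "Dissatisfied", "Very dissatisfied"]},
]
print(flag_long_questions(draft))  # [] - this question is well under 50 words
```

A check like this can run on every draft before it goes to pre-test, so over-long questions are caught mechanically rather than by eye.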
4. Avoid long lists of rating questions. We’ve all seen these before. They’re often repetitive, intimidating masses of words that leave little white space on the page. I’ve seen surveys in which a one-page block of over 40 rating items was cycled repeatedly for several pages, giving panelists a total of nearly 200 items to answer - and they still had dozens of questions left in the survey.
Based on a separate study I’ve done on data quality, I’ve found exhaustive rating lists raise the possibility of respondents entering bad answers just to get through the survey. Bad survey design = bad data.
To avoid this, break long lists into smaller parts. For online surveys, make lists continue onto another Web page. If that’s not enough, you could break the whole study into two surveys.
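If your survey platform represents a rating battery as a list of items, splitting it across pages can be as simple as chunking the list. A hedged sketch, assuming a plain list-of-strings representation and an arbitrary page size of eight items (the article doesn’t prescribe a number):

```python
# Illustrative sketch: split a long rating battery into smaller
# per-page chunks. The page size of 8 is an assumption, not a
# figure from the article.

def paginate(items, per_page=8):
    """Break a list of rating items into pages of at most per_page items."""
    return [items[i:i + per_page] for i in range(0, len(items), per_page)]

attributes = [f"Attribute {n}" for n in range(1, 41)]  # a 40-item block
pages = paginate(attributes)
print(len(pages), [len(p) for p in pages])  # 5 [8, 8, 8, 8, 8]
```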
5. Avoid open-ended questions. They take more time, thought and work to answer. In addition, the time it takes to answer open-ended text questions is harder to estimate, leading to less-accurate advertised survey times and, consequently, more disgruntled respondents when their actual completion time is longer than promised.
Granted, qualitative data is needed for many studies. However, if you can brainstorm answer choices and create a closed-ended question, you’ll likely be better off. The fact that open-ended questions are more time-consuming and costly to analyze is another reason to avoid them.
6. Avoid repetitive questions. Another complaint linked to low satisfaction is repetition. Answering questions that seem to be the same or seeing the same text over and over can frustrate survey takers.
Repetition is a necessary evil for certain studies but it can be minimized. Questions can be worded or positioned in a way that they don’t make panelists think, “Hey, didn’t I just answer that question?” If other questions already cover the repetitive question’s objective, cut the question. If it can’t be cut, space it further apart from the others.
7. Advertise accurate survey times. Survey length by itself is not a driver of dissatisfaction. Respondents don’t mind taking surveys that require a lot of time as long as you tell them the correct time to expect. I compared surveys over a half-hour long to much shorter surveys and found no difference in satisfaction. Yet when I compared surveys that ran at least 10 percent over the advertised time, satisfaction went down significantly for both long and short surveys.
Respondents become upset when you violate their expectations. You must manage their expectations and inform them appropriately so they can make solid participation decisions. Even inaccurate completion status bars disappoint online survey takers.
To improve satisfaction, obtain a realistic estimated time range to advertise. To cut down on survey completion time, make sure every question is clear and not too wordy, as mentioned earlier, and reduce open-ended and complex questions. Improving your completion status bars is a function of your survey programmers and software vendors. Talk to them to see what upgrades they can install to address this.
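The 10-percent threshold mentioned above is straightforward to monitor from completion logs. A small illustrative check (the function and parameter names are my own, not the author’s):

```python
# Flag surveys that ran at least 10 percent over their advertised
# time - the point at which the article found satisfaction dropped.

def ran_long(advertised_minutes, actual_minutes, threshold=0.10):
    """True if the actual completion time ran at least `threshold`
    (10 percent by default) over the advertised time."""
    return (actual_minutes - advertised_minutes) / advertised_minutes >= threshold

print(ran_long(20, 22))  # True: exactly 10 percent over
print(ran_long(20, 21))  # False: within 10 percent of the promise
```

Running a check like this against median (not just mean) completion times can tell you when an advertised estimate needs revising.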
8. Pay incentives promptly. It can take new panelists a month or so before they learn exactly what to expect regarding your payment schedules. After this honeymoon period, things get tougher. Make sure you have a reliable schedule and stick to it, or you’ll have panel retention problems.
Panelists talk to each other about the work and incentives involved for research studies. There are a variety of informal forums online where panelists compare research firms, incentive rates, etc.
Like the satisfaction issues surrounding survey times, respondents become upset when you violate their payment expectations. Do your payments arrive on time? Is your competition paying monthly, weekly, instantly? You need to find out.
9. Pre-test surveys thoroughly. This should go without saying, yet there are researchers who don’t go far enough during pre-testing. Untested surveys can cause a host of problems not only with panelists but with end-user clients as well.
Poor pre-tests can result in reduced data quality; uninteresting, inconvenient surveys; confusing and complex questions; inaccurate advertised survey times; and too-low incentives based on these times.
10. Get feedback on surveys. Your panelists are like customers and employees, with similar marketing and motivational issues to address. You have to rely on word-of-mouth and direct advertising to get them in the door. And you have to pay them to work on deliverables for your clients.
As with any organization that keeps tabs on customer and employee satisfaction, survey your panelists. Have them rate your studies and incentives and anything else that impacts respondent satisfaction. The results will help you design loyalty programs and other efforts to increase retention and response rates. Besides, it’s cheaper to keep panelists you have than to replace them with new ones.
What if you don’t have a panel and only deal with respondents on a per-project basis? Survey them anyway. You can tack on two or three rate-the-survey and comments-and-suggestions questions at the end of each survey you field. Make it a standard practice for all of your surveys, for panelists or non-panelists alike.
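Making those feedback questions standard practice could be as simple as appending a fixed block to every questionnaire before fielding. A sketch, assuming a list-of-dicts questionnaire format (the wording and data layout are illustrative, not the author’s):

```python
# Standard end-of-survey feedback block, appended to every study.
# The question wording here is illustrative.

FEEDBACK_QUESTIONS = [
    {"text": "Overall, how would you rate this survey?",
     "choices": ["Excellent", "Good", "Fair", "Poor"]},
    {"text": "What comments or suggestions do you have about this survey?",
     "open_ended": True},
]

def with_feedback(questionnaire):
    """Return a copy of the questionnaire with the standard
    feedback questions appended."""
    return list(questionnaire) + FEEDBACK_QUESTIONS

study = [{"text": "Which brand did you purchase most recently?"}]
print(len(with_feedback(study)))  # 3
```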
Buy their attention
If you act on the 10 items above, your good respondents will love you. And when an enticing competitor tries to buy their attention away from your study, your respondents can truly say, “Can’t buy me love.”
10 Ways to Increase Panel Respondent Satisfaction
1. Make surveys engaging and convenient
2. Avoid complex questions
3. Limit questions to less than 50 words
4. Avoid long lists of rating questions
5. Avoid open-ended questions
6. Avoid repetitive questions
7. Advertise accurate survey times
8. Pay incentives promptly
9. Pre-test surveys thoroughly
10. Get feedback on surveys