Evaluating the survey participant experience

Editor’s note: Ben Tolchinsky is owner of CBT Insights, Atlanta. 

When was the last time you participated in an online panel survey?

For me, it had been a while. Early in my career, in the mid-2000s, I joined a number of online panels so that I could stay “on the pulse” of online surveys and the industry. I developed three beliefs from this experience:

  • Surveys are too long.
  • Surveys are boring.
  • Surveys are unrewarding.

I’ve maintained these beliefs over the years, not having encountered anything that would change them. However, despite my beliefs and best efforts, I’ve contributed my share of lengthy and boring surveys. Since that early experiment, I have participated in surveys only occasionally, mostly from companies of which I’m a customer.

Because of my beliefs, I’ve often wondered and worried about the health of the lifeblood of our industry: survey participation. Despite my concerns, my sample needs have always been met.

Early in 2024, I had an itch to evaluate the survey participant experience once again. Has it changed? Has it improved? Are surveys shorter, more enjoyable and more rewarding than they were in the mid-2000s? I wasn’t optimistic.

To help answer my questions, I joined 16 panels (see Table A), and over the course of many months, I participated in hundreds of online surveys on my desktop PC, laptop, iPhone and iPad. All surveys were taken on the Microsoft Edge browser, and I did not use any survey apps.

In this article I’ll look at each experience, in order, that a panelist encounters when participating in an online survey. Although the sample size of surveys taken is large, I did not record and quantify my experience. My conclusion is based on a qualitative assessment of my collective experiences.

Survey recruitment

Much like years ago, I received e-mail invitations to take surveys. And much like years ago, the number of e-mail invitations varied widely by panel, ranging from as few as one per week to as many as 80 per week (see Table A). From my participant perspective, fewer invitations made me feel “fortunate” to have been invited to the survey, whereas the larger number of invitations seemed excessive to me. Others may appreciate the frequent notifications of survey availability.

However, unlike my experience from years ago, the invitation rarely took me to the survey advertised in the e-mail, instead placing me in a “router” that would either find a survey for me in about 15 to 30 seconds or take me to a set of general screening questions before putting me in the router.

I should note that two of the 16 panels operated more like those from years past. For these two, I would receive an invitation for an advertised survey and then be taken directly to that survey.

However, unlike in the past, I would be presented with another survey opportunity if I was disqualified.

The other unique aspect of the recruitment process for me was that most panels have a survey dashboard that I could visit at any time or be returned to after completing a survey or being disqualified from one. On the dashboard, I could choose from several surveys ranging in length and reward. If I wanted a shorter survey, I could typically find one with a smaller reward. And if I was ambitious or motivated by larger rewards, I could typically find one ranging from 20 to even 45 or 60 minutes. Dashboards can also stimulate participation in specific surveys by increasing the number of reward points per minute, which are easily comparable across surveys.

I recall many complaints in the past from panelists who did not like being disqualified from a survey and then having no other opportunities until a new invitation arrived (at which point the same experience would often repeat). Although there are pros (maximizes panelist utility) and cons (impact on sample representativeness) to dashboards for researchers, today’s recruitment approaches have eliminated this participant pain point.

Table A
Panel | Router | Dashboard | Average Number of E-mails/Week | Smallest Reward | Number of Surveys to Earn Reward
Panel 1 | Yes | Yes | 4.4 | $5 Gift Card/Cash | 8-10
Panel 2 | Yes | No | 15.5 | $5 Gift Card | 8-10
Panel 3 | Yes | Yes | 23.8 | Unclear | 20-30
Panel 4 | No | No (a) | 6.7 | Cash/survey | 1 (b)
Panel 5 | No | Yes | 15.1 | $5 Gift Card | 4-8
Panel 6 | No | Yes | 17.5 | $15 Cash | 15-25
Panel 7 | Yes | Yes | 9.7 | $10 Gift Card | 10-20
Panel 8 | Yes | Yes | 28.7 | $5 | 5-10
Panel 9 | Yes | Yes | 0.5 | $25 Gift Card | 25-35
Panel 10 | Yes | Yes | 79.9 | Unclear | 25-35
Panel 11 | Yes | Yes | 14.0 | $5 Gift Card/Cash | 10-20
Panel 12 | No | No | 2.3 | N/A | N/A
Panel 13 | Yes | Yes | 15.1 | $10 Gift Card/Cash | 15-25
Panel 14 | Yes | Yes | 8.0 | $10 Gift Card | 15-25
Panel 15 | No | No | 4.5 | $15 Gift Card | 20-30
Panel 16 | Yes | Yes | 4.4 | $5 Gift Card/Cash | 5-10


Survey screening

I recall attending conferences where panel companies and others would discuss the need to utilize panelist demographic information so that participants did not have to answer the same demographic questions repeatedly while attempting to qualify for a survey.

Unfortunately, this need has not been met.

Hence, the process of attempting to qualify for a survey requires “routing” from one survey to the next, answering the same demographic questions repeatedly until finally qualifying or running out of available surveys. It can be incredibly frustrating, even laughable at times.

There were two other interesting aspects of the screening process:

  • This is not social or political commentary, but it surprised me how many ways there now are to ask for a participant’s gender – one question had 13 options. Perhaps those who identify with these responses are appreciative, but I think the many variations draw unwanted attention to a complex issue. Perhaps a standard will emerge.
  • Bot detectors are now commonplace, appearing in most surveys. They’re not necessarily problematic, but those who aren’t familiar with them or their purpose might question why the survey is asking which one of the photos is an apple. At times, I had to prove I was a human two or three times.

It should be noted here that I answered every screening and survey question honestly, as a consumer, except for one. I, like most panelists, know how to dodge the security screener, and I did not give myself up each time.

Survey length

Although I don’t have data points, I believe that it’s generally agreed that many surveys are too long and that survey participants much prefer shorter surveys to lengthy surveys. In my experience, practitioners (including me) generally agree that surveys really shouldn’t exceed 15, maybe 20, minutes – largely to maintain broad survey participation and to ensure data quality. At conferences, practitioners usually debate which stakeholder – the corporate buyers, the market research suppliers, the panel companies – should own and enforce the effort to shorten surveys. A partnership among leading companies from each stakeholder is typically sought, but it is seemingly an impossible task.

Based on my experiment, I don’t believe anything has changed. While there are 5-, 10- and 15-minute surveys, there are also many 20-, 30-, 45- and even 60-minute surveys. Because of this experiment, I did not shy away from the lengthier surveys. I tried to complete them – I really did – but I often couldn’t. They seemed to go on and on with no end in sight. Furthermore, once my eyes began to glaze over and the frustration set in (around the 20-minute mark), I could no longer vouch for the quality of my responses, so it is probably better that I often gave up. Had I been motivated by the incentive/reward, perhaps I would have marched on.

It wasn’t just the survey length. It was the repetitiveness, the length of the attribute batteries, the endless detailed questions about brands I know little about or everyday experiences I can barely recall. I’m a researcher, and I know why we do this, but please take a 30-minute or longer survey and see if you disagree.

Survey enjoyment

Based on my previous account, this assessment will not surprise you. When comparing my experience in the mid-2000s to my recent experience, I noticed only minor improvements on this dimension, and only in a minority of the surveys I took. The improvements generally came in the form of easier administration of attributes and other types of batteries that require repetition.

Sometimes the attributes would appear one by one, and on occasion, I could drag-and-drop responses into rank order or onto a fixed scale.

I didn’t expect to be entertained, but I expected the experience to be an ounce or two more enjoyable (or less boring) than previously, based simply on advancements in technology. I’ve seen wonderful examples of unique and/or gamified question types that are seemingly more “enjoyable” and engaging, but I didn’t encounter any of these, not once. I have little doubt that such approaches would encourage more/repeat participation, increase the likelihood of completion and produce higher quality data – particularly for lengthier surveys. But the industry appears uninterested in making the necessary investments to achieve these benefits.

Survey quality

Survey quality was the biggest surprise for me. In my opinion, survey quality has declined. The issues that led me to this conclusion were neither egregious nor rampant. Many were of the basic variety, including misspellings, missing words and poor punctuation. More concerning issues included poorly worded questions, leading questions, unanswerable questions, confusing or incorrect scales, missing or failed programming logic and more. The most common (and frustrating) issue was the failure to include a “none of these” or “don’t know” option – when this occurred, I was forced to randomly select a response, and I shuddered at the thought of someone presenting this data.

Perhaps it shouldn’t have been a surprise. Software as a service (SaaS) survey platforms have increased in popularity due to cost and speed improvements by allowing essentially anyone in an organization to write and deploy a survey. I am not going to list the pros and cons of SaaS survey platforms, but it seems apparent to me that some, even many, surveys are not drafted by an experienced and trained practitioner. 

I notice the issues, but I suspect that most panelists either don’t notice them or don’t care. And maybe that’s the point. I’m beginning to believe that “good enough” is now the standard and that organizations believe the benefits of SaaS platforms outweigh some amount of bias that no one will ever know or care about.

There was one pleasant surprise, however. Surveys taken on my iPhone browser were better than I expected. Some were optimized for the smartphone and others required a bit of pinching and shifting; only a small number were no different than on a computer browser and unreadable for my worsening eyesight.

Survey incentives and rewards

As previously stated, it is my belief that surveys are unrewarding. As shown in Table A, on three of the 16 panels, it could take as few as four or five surveys to earn a $5 reward. Not bad. Keep in mind, however, that those four or five surveys need to be of the 20-minute+ variety, and the participant must qualify for and complete the surveys – not always easy tasks. But at least the reward feels attainable and tangible. Participants no longer have to use their earned currency for magazine subscriptions – today’s rewards are cash or gift cards (Amazon is almost always one of the choices).

As you will also see in Table A, most panels require significantly more completed surveys to earn the baseline reward. In my estimation, it is quite challenging to earn a reward, which to me signifies “unrewarding,” or at best mildly rewarding.

The survey participant experience

Based on this experiment, I’ve come to three conclusions. The first is that, in my opinion, surveys remain very much the same as in the mid-2000s – too long, boring and unrewarding.

The second conclusion is more revealing. It seems apparent to me that the MR industry does not need to improve the participant experience, or it would have improved it by now. Long, boring and unrewarding surveys are completed every day, and if quotas are consistently met, why does anything need to change? Perhaps I should be pleased that the lifeblood of our industry – survey participation – is theoretically sufficiently healthy.

My third conclusion is that I must be an ideologue who yearns for something better but is not pragmatic. I get it … if it ain’t broke, don’t fix it.