Enhancing Respondent Engagement for Better Data Quality 

Editor’s note: This article is an automated speech-to-text transcription, edited lightly for clarity.    

Sago was one of six organizations that participated in the November 20, 2024, Quirk’s Virtual Sessions – Data Quality series. The presentation from the two Sago speakers focused on respondent engagement in both quantitative and qualitative studies.  

Dave Trifiletti, SVP of NA qualitative operations, and Rob Berger, EVP of global quantitative, argued that respondent engagement is key to great data quality. The two talked about how expectations have changed in recent years and the new challenges those changes bring. Then Trifiletti and Berger gave practical tips for combating these challenges in both qualitative and quantitative research. 

Session transcript: 

Joe Rydholm 

Hi everybody and welcome to our session “Enhancing Respondent Engagement for Better Data Quality.” 

I’m Quirk’s editor Joe Rydholm and before we get started let’s quickly go over the ways you can participate in today’s discussion. You can use the chat tab to interact with other attendees during the session and you can use the Q&A tab to submit questions for the presenters during the session and we will answer as many questions as we have time for during the Q&A portion. 

Our session today is presented by Sago. Rob, take it away!

Rob Berger 

Great, and thank you, everybody, and good morning, afternoon or evening, wherever you may be. Thank you for joining us today. I'm really excited to talk about a topic that Dave and I spend a lot of time on: enhancing respondent engagement for better data quality, and how critical that is becoming in the effort to deliver stronger, more reliable and cleaner insights for your projects and your clients, as we all work in this industry to deliver quality research.  

Just before we get going, I'll start by introducing Dave. I think he's either on the left or the right, depending on which way this system is showing us. Dave Trifiletti is the SVP of North American qual here at Sago.  

I've had the pleasure of working for him for a number of years and he is a real expert in the field of all things qualitative and of course whenever I have questions I call him. So, when we get to the Q&A session, definitely engage him. He'll have lots of good answers and insights.  

And myself, I am Rob Berger. I’m the EVP of global quantitative here at Sago. I’ve been with the company itself for three years and in the industry for over 30 years. I’ve spent a lot of time in the area of respondent engagement, panel quality, sample quality and also a stint involved with the Sample Con Association in our industry. 

This whole topic of respondent engagement and quality is one that we spend a lot of time focused on here at Sago and myself personally.  

Just to give a quick background from the quantitative side of Sago, Dave will talk about qual when he gets to his section. Our vantage point in quant is we're a global research services company.  

My area of the company is focused on quant. We've been doing this for almost six decades. We're part of a group that has over 5,000 employees. In terms of quantitative, we are, and this may be a bit of a surprise for some of the attendees on this call, one of the largest quantitative suppliers.  

Sago, through its background and history, has been better known for qualitative. But through a series of acquisitions and growth of the business itself, we are in the top five quantitative sample providers today.  

We have almost 400 employees on the team: project managers, bidders, programmers and, of course, all the various account executives and other quantitative experts. We really pride ourselves on what we're doing. We deliver extensively through our industry-leading agile platform, Methodify, with over 3 million global quantitative respondents at our fingertips in the panels that we operate and own here at Sago. To give some sense of scale, we ran over 11,000 projects last year. I don't know the number offhand for 2024, but it is similar or higher. Almost 2 million of those were through our programmatic capabilities and over 10 million were surveys. 

When we talk about respondent engagement and how to engage them, we are literally working by the second, you can see a fun stat there. About 20 surveys are completed by Sago every minute. That's 24/7, 365 days a year.  

We see it all on the quantitative side: B2C, B2B, health care, health care practitioners, patients, etc. If you have a question about the best ways to design, our answers come from that depth of exposure that we, and all our employees, have in seeing what works and what doesn't work. 

Having been around the quantitative sample industry since we were doing surveys using Excel and not knowing where respondents were actually coming from, it has long been felt that the quality of insights gathered was purely the sample or panel company's responsibility. That is definitely still heavily the case even today.  

But more and more there's a shift. It's something we are pushing; it's something the industry is pushing. I hear from clients now that the survey design, the actual instrument you're implementing to collect the insights, is a critical piece in gathering those insights in an appropriate way.  

We can do all the various security checks and use technology and AI to do our best to get you a respondent who is who they say they are. Online is the main method we use here at Sago from a quantitative perspective.  

But the actual instrument that's provided to the respondent has a dramatic effect on that. It covers a number of different key areas that we can work with you or that you can use from this presentation to just think about as you're designing the survey because ultimately that is what will be presented to the respondents. That's what they will fill out. That's the data you'll get back. All of these can have a very positive or negative effect on what you're doing.  

This is something, again, since the dawn of time, or at least the dawn of time in market research: the length of survey is critical.  

I saw a post today where someone said, ‘How can you even talk about data quality on any survey over 15 minutes?’ It was pretty dramatic, because you can do longer surveys. There are ways to implement and execute longer surveys that can be appropriate.  

Also, if you're testing videos or content, those will create longer lengths of interview, but keeping the actual survey instrument content to less than 15 minutes is a generally accepted and preferred length. 

I have been involved in research over my career showing that it's actually more like 12 minutes where you start seeing a dramatic acceleration in fraud. Not fraud because respondents are trying to cheat. 

It is just that respondents get tired. So they're filling it out in a way that looks like fraud, or it looks like incorrect or inappropriate responses or open ends that are not very articulate. 

Once you hit 12 minutes, it does dramatically accelerate the issues that come up in terms of survey quality. If you can keep it to 12, or under 10, or really even five minutes, it can allow your respondents to be engaged, stay focused and not get distracted. 

Everybody today has got 18 messages popping up on their phone, apps and emails, the phone ringing, kids and all these distractions. If you can't focus on anything for more than five minutes, imagine trying to do a survey for 20, 25 or 45 minutes, which is still, unfortunately, at times a reality in our industry.  
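To make that length discipline concrete, here is a minimal sketch of how one might estimate length of interview (LOI) from a question plan before fielding. The per-question timings and the `estimate_loi_minutes` helper are illustrative assumptions, not figures or tooling from the session.

```python
# Back-of-envelope LOI (length of interview) estimate from a question plan.
# Per-question timings below are illustrative assumptions, not measured figures.

SECONDS_PER_QUESTION = {
    "single_choice": 15,  # pick one option
    "grid_row": 8,        # one row of a rating grid
    "open_end": 45,       # typed verbatim answer
}

def estimate_loi_minutes(plan: dict) -> float:
    """plan maps question type -> count; returns estimated minutes."""
    total_seconds = sum(SECONDS_PER_QUESTION[qtype] * n for qtype, n in plan.items())
    return total_seconds / 60

plan = {"single_choice": 20, "grid_row": 15, "open_end": 3}
loi = estimate_loi_minutes(plan)  # (300 + 120 + 135) / 60 = 9.25 minutes
needs_trimming = loi > 12         # the roughly-12-minute fatigue threshold above
```

A rough check like this, run while the questionnaire is still a draft, makes it obvious when a design has drifted past the fatigue threshold before a single respondent sees it.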

Another area to focus on is prioritizing the most important questions at the beginning. 

If you are going to go longer, make sure you collect the important questions at the beginning. I know that Dave will talk about screeners on the qualitative side, but even on the quantitative side it's become more of an issue in the last six months, particularly in the health care area, where long screeners are becoming more of a challenge.  

Again, if you're getting a physician on a survey and you ask them 15 questions and then disqualify them, that is a really poor experience which will create future problems for our industry.  

So, really try to capture their attention. Get them at the beginning. Get them when they are more alert. Trying to keep the overall length short is really critical. 

Number two is tailoring the survey to your audience, really optimizing it so it's simple and considers the needs of the respondent. 

Again, we do work in consumer, business-to-business and physician industries, and all these surveys are different. Think about who you're interacting with. You might be able to get a longer survey with a respondent talking about the types of content they consume across media, but if you have a physician, again, they're very busy. There are also usually higher incentives for those audiences.  

So, you have to consider the audience you have. If it's B2B, you're not going to ask the CEO for 30 minutes of their time when they probably have maybe five minutes to engage with you. So think about the business objectives of your survey and who the audience is that you're engaging.  

When you're crafting the questions, think about whether you're going to do teen work or child work. You have to remember that their ability to understand what you're asking can be limited.  

We've seen surveys done with teens or even very young children through advocacy methods where the client has been unhappy with the results, but the questions they were posing were very complicated and simply weren't understandable by the respondent, in this case a child or someone very young.  

Those are important things to consider. Simplicity is key. The actual questions themselves and how they're presented can have a dramatic effect depending on the audience that you're engaging.  

In terms of the actual survey questions, make strategic choices about the questions that you ask. There are, and this has been talked about in terms of the platforms that are out there, ways to ask the same questions in different ways. 

Long grids have been a real no-no or an area you don't want to go near for a very, very long time. Look at using carousels, look at using sliders, look at using sorting tools. Look at heat maps or highlighter type tools if you're trying to get a little bit more granular in terms of people's reaction to advertisements or any type of content.  

The survey questions themselves can have a really dramatic effect both on the respondent staying in the survey and on how they're answering. People talk about gamifying it; that's been around for a while. I don't like to think of these as games, but again, think about the respondent, think about how they're looking at the question, think about how you would answer the question if it were your own. We have a whole host of tools and question types here that we can apply, and we can show you that through our sandbox. Our project managers can also provide suggestions on best practices.  

Mobile optimization: it still amazes me that we're talking about this in 2024, but it's still an issue in our industry. I talked about grids in my earlier point about survey questions, but mobile optimization still covers a range, or a spectrum, of what it actually means.  

There are still surveys today that we see which are “available” on mobile, but all they are is an ugly desktop version of the survey showing up in the mobile browser. You really want to be looking at platforms, be it the ones we use or other platforms in the industry, that take your questions and optimize them to the actual device being used to deliver the survey.  

You also have to keep in mind that there are some types of methodologies that can't be executed on a mobile. It's still at times a surprise to clients when they realize, ‘Oh, if this design is only going to work on a desktop, I'm potentially going to lose half the respondents.’  

Because that is still roughly the split today. It's about 50-50 now between respondents doing surveys on desktops versus mobile.  

You are creating inherent biases. This is where survey design can drive inaccurate results: if you lose half your respondents because they would only do it on a mobile device, you're already biasing your results toward the type of respondent who is on a desktop, or will go to a desktop, or even has a desktop to use. 

Even within mobile devices, take a look at the way it works with different operating systems. There are some types of online engagements or apps out there which don't work the same on different operating systems. It's something I've seen more in the last three months. Some of them will work better on iOS versus Android.  

Again, there are lots of studies out there, which we're not going to talk about today, that show respondents are behaviorally different; their backgrounds, psychographics and attitudes differ between those who use iOS versus Android. So just things to keep in mind.  

The ideal is to make it as widely available and as optimized as possible. If you have to lose a certain type of data collection in order to achieve that, it will probably generate a better outcome than trying to pigeonhole yourself into one of those systems for people to fill out the surveys through.  

Moving on: quotas. Again, I'm old, or maybe not that old, but I go back to the days of the phone rooms, where quotas were heavily used. Interlocking quotas can get very, very granular, and those can be very difficult to fill with the right audiences online. So, really, only minimal quotas should be used to get to relatively representative distributions.  

If you get too granular, you can slow down field. You can also get into a situation where you're inviting people you don't want into your survey, because the quota group you're looking at has become such a low-incidence target that it creates the opportunity to attract fraudulent respondents.  

That usually happens because higher incentives have to be provided to attract those more difficult-to-reach audiences. That is something that can come back to be a challenge in terms of the insights we collect and the results you end up reporting on.  

In terms of open ends, carefully consider the placement, quantity and wording of the open ends. Open ends are actually great for catching data quality issues, to a point. 

I highly recommend having at least one open end in every survey that you implement, even if you never code it or use any of the verbatims, because it is a tool that will capture whether bad actors have entered through the sample sources, be it people looking to make a quick buck or a bot that gets in. 

Usually, open ends are the manner in which you can catch those, or at least get an idea. If you have none, you will have no early warning system, as I look at it, from that type of question.  

Add those in, but at the same time don't have too many. If you have too many open ends, you run into the survey length issue again. You run into respondent fatigue, where people start entering very short, inarticulate answers. Not because they are bad actors; it's just boredom, or they've been on the survey for too long. 

Too often I've seen clients say, ‘oh, I think that these are a bunch of bots,’ because the participant just said “okay,” or “that's neat,” or “I like the color.” That is more likely just boredom than a bad actor or an inappropriate respondent.  
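To make that early-warning idea concrete, here is a minimal sketch of the kind of open-end screening one might run on raw survey exports. The thresholds, the low-effort phrase list and the `flag_open_end` helper are illustrative assumptions, not Sago's actual pipeline.

```python
# Illustrative sketch: flag low-effort or suspicious open-end answers.
# Thresholds and the phrase list are assumptions for the example, not a real pipeline.

LOW_EFFORT = {"ok", "okay", "good", "nice", "idk", "n/a", "none", "that's neat"}

def flag_open_end(answer: str, min_words: int = 3) -> list:
    """Return a list of quality flags for one open-end answer."""
    flags = []
    text = answer.strip().lower()
    words = text.split()
    if not words:
        flags.append("empty")
    elif len(words) < min_words:
        flags.append("too_short")
    if text in LOW_EFFORT:
        flags.append("low_effort")
    if words and len(set(words)) == 1 and len(words) > 2:
        flags.append("repeated_word")  # e.g. "good good good good"
    return flags

responses = ["okay", "I liked the second concept because the pricing felt honest"]
flagged = {r: flag_open_end(r) for r in responses}
```

Consistent with the caveat above, a flag like `too_short` is a signal to review, not proof of fraud; it is often just a bored respondent rather than a bot.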

Lastly, humanizing the survey experience. I've done this presentation, or a version of it, with another colleague from our company. He said in one of those presentations, “Would you send this to your mother or your father? Or, if you received the survey, would you sit and do it?” 

I think that is a very fundamental, basic human element that we forget about. We use the term “sample” in this industry, but these are real people who've joined the panel and agreed to be sent surveys, to give their time. They're not getting very much in terms of incentives, something relatively nominal, but they're interested in providing their insights. They're not interested in being abused or being asked questions that are poorly worded or badly written. 

So, really think about the human experience. Put more visuals and videos into the survey. There aren't really any restrictions in terms of what respondents can do. You can have them upload images or videos. There are a lot of really great quant/qual methodologies now that let you make it a much more interesting experience instead of the standard, typical questions that run all the way through and don't really engage the respondent in any way that would keep them focused on giving strong, insightful answers.  

I mentioned this earlier: think about the age of the respondents you're engaging, younger, older, professionals, consumers. Always think about what it is they're doing and what you're trying to ask them. 

Think about the time they're effectively donating. That's the way I look at it. No one's time is cheap, everyone's busy, and we're asking people to stop and drop what they're doing for a nominal incentive, to provide insights that help drive decisions and provide quality results for your study.  

Those are the key areas in terms of quant survey design that I focus on quite a bit. At the end I'm happy to answer any questions, but I'm going to kick it over to Dave to pick it up from the qualitative design. 

Dave Trifiletti 

Thanks, Rob.  

As Rob said, I'm Dave Trifiletti. My background, quickly: I started off as an army officer, but for the last 24 years I have been in marketing, on the agency and brand consultancy side and now on the vendor-partner side. 

I think I bring an interesting perspective to things having sat on all sides of the table, similar to what Rob had shared, showing the breadth of our experience. I mean we've been doing this for nearly six decades. I really love presenting with Rob not just because he's a great human but because I think it's important that we do offer qualitative and quantitative options for our clients.  

We're trying to be the best partners that we can to develop solutions for what they're trying to solve and having qualitative and quantitative experience is something that I think really adds a lot of value to our client relationships.  

With all that, we see 8,000 projects a year, plus the number of people that we recruit; there are very few projects that we haven't done, or haven't done something very tangential to. So, we spend a lot of time making sure that we are sharing knowledge internally, understanding the things that go well so that we can replicate those behaviors, and identifying better ways of approaching the things that haven't gone well the next time we go at them. 

You're tapping into years of experience and a volume of projects, with a breadth of different things that we've tried to solve, in a continuous-improvement type of environment.  

As we go here, we truly are seeing a lot of change, and I think everyone else is seeing this too. We're not the only ones observing it; we are all observing it.  

During the COVID-19 era, we had a captive audience. In this post-COVID-19 period, it is a lot harder not just to build a panel and have people join it, but to get them to participate in research studies and actually show up.  

Something I like to say is humans are going to human. You will never get a hundred percent of people to do a hundred percent of what they say they're going to do. So, we all need to make sure that we account for those human behaviors and understand that when we recruit, we have to over-recruit to account for that attrition. 

It is our expectation and our hope that every project achieves the objective that our clients are setting out to achieve, but we are doing it in a fast, efficient and communicative manner.  

One way we want to make sure that we're doing that is by ensuring that we're accounting for those human behaviors of people not always showing up.  

Moving to the middle: respondent acquisition costs are up 300%. That is a staggering number.  

People are a lot more judicious about how they're spending their time, to a degree. They have so many more things that they're trying to do. They have so many more distractions.  

Honestly, they want to make sure that the research they're participating in is something they feel good about. Something they feel they're going to be adding value to and, to a degree, enjoying.  

And so not just participating in each research project, but becoming part of a panel and essentially volunteering to be part of a number of projects, is something we're competing for. Not just with other research companies; we're competing with everything out there that's competing for people's time.  

As we're looking at the design of projects, as I kind of alluded to, people want to be a part of a research project where they feel that their opinion is actually going to matter.  

They don't want to do something where it feels like, ‘Okay, I'm just going to say some words and they'll go nowhere.’ The respondents being able to see and know that they're going to help influence the decisions of a brand is something that they feel good about.  

Are they going to be the ones responsible for making the final decision? Obviously not, but the words and experiences that they share are going to inform those actions that brands take. That's something that you can help to illuminate as you are determining exactly how you're going to be conducting your research.  

An important note as we are going through this, we are also looking for respondent feedback. We obviously look for client feedback after every project, but we're looking for respondent feedback to truly understand why they did something, why they didn't do something so that we can apply those learnings as we move forward and do more and more projects. 

Rob talked about screeners in the context of survey length. He said if you go beyond 15 minutes for a survey in the quantitative realm, you're going to have drop-off. This is one of the opportunity areas for us to be more efficient and more concise from a screener standpoint. 

We're seeing screeners that average 27 questions. When you're asking folks to do just a screener that takes 15-plus minutes, and they're not compensated for that, you're going to have attrition. 

We've found that 12 questions is kind of the magic point. Rob said 12 minutes for a quantitative study; for us it's 12 questions on the screener side of things.  

You don't want to compromise what questions you include in there, but you want to be very purposeful about what you're asking there. 

Unless it's a terminating type of question, don't put it in your screener. That's something you can cover in the research. The purpose of a screener, on top of leveraging the information we already have about who we're reaching out to, is to see if they would qualify to participate in the research. You're trying to make sure you get the right folks, people who are going to be able to answer questions or talk about experiences that are relevant to what you're trying to learn. 

If it's not a terminating question, it's not as important. You can throw that toward the end, but you want to make sure you're qualifying folks upfront. The last thing you want is people not even getting through your screener. 

Something else we've found in working with our clients on screeners is that bundle or combo questions are a great best practice.  

For example, ‘Are you a female who lives in the South, is the primary grocery shopper and goes to one of these types of stores?’ Give them personas that they can relate to. That allows you to screen in a way that asks fewer questions but is a little more illuminating about the type of person you can find to participate in your research. 
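One way to think about these bundle questions is as a single choice whose options each encode several screening criteria at once. Here is a minimal sketch of that idea; the persona definitions, attribute names and the `qualifies` helper are hypothetical, made up for the example.

```python
# Illustrative sketch: one persona-style screener choice standing in for
# several separate questions. Persona definitions here are hypothetical.

PERSONAS = {
    "A": {"gender": "female", "region": "south", "primary_shopper": True},
    "B": {"gender": "female", "region": "south", "primary_shopper": False},
    "C": {"gender": "none of these", "region": "any", "primary_shopper": False},
}

# The profile the study is recruiting for.
TARGET = {"gender": "female", "region": "south", "primary_shopper": True}

def qualifies(choice: str) -> bool:
    """Screen on a single persona pick instead of three separate questions."""
    persona = PERSONAS.get(choice)
    if persona is None:
        return False
    return all(persona.get(k) == v for k, v in TARGET.items())

screened_in = [r for r in ["A", "B", "C"] if qualifies(r)]  # only "A" qualifies
```

The point of the sketch is the design choice: three screening criteria collapse into one relatable question, which is both shorter and, as noted above, more humanizing for the respondent.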

As I said before, understand that we've been doing this for a number of years, and we know who's on our panel. So, leverage that experience as you're working with our project managers and recruiters. Understand that we have some of those basic demos covered, and that's not something you have to go into great depth on in your screener. Focus on the questions that terminate people who don't fit what you're looking for.  

Going back to the beginning of the slide: as we see screeners get longer and longer, it is challenging, and it adds time for all of us just to get research set up, from the vantage point of aligning on those longer screeners. And with longer screeners you get that kind of attrition, where you're going to have to keep recruiting for a longer period of time.  

As we know, time is money, and we may get to a point where we have to charge more for those longer screeners. There's a lot of value, from a financial standpoint and from a project effectiveness standpoint, in being very, very efficient with how you put together your screeners. 

I'm going to piggyback on what Rob said earlier. At the end of the day, you're talking to people who are like you, but they're not from the industry. You need to talk to people as if they're someone you can relate to.  

Imagine you're an average person who's not familiar with our world. What would be the easiest way to engage you and to remove friction?  

We are all looking for ways of doing what we need to do, whether it be in our personal life, in our professional life to streamline things as much as possible. So, think about things from their vantage points, as you're looking at different ways to engage folks. 

If you're using digital methodologies, don't design homework assignments where you're going to have them use PowerPoint or go to Nero or use a bunch of different platforms they may not be familiar with. While PowerPoint and other platforms may be very familiar to us in how we conduct ourselves day in and day out in our jobs, average consumers may not have that level of knowledge of how to use those types of platforms.  

You want to make it as easy as possible. Remove as much friction as possible, allow people to participate, meet them where they are and allow us to leverage our experience.  

I'm going to keep relying on this because, as with anything in our lives, we hire folks to do things for us because they have experience. I will never paint my house; I'll never do plumbing work in my house, because I do not have that experience.  

We've got 60 years of experience doing millions of projects over the years. So, allow us to bring those experiences to the forefront.  

Our team is very focused on making sure that we work with you and are able to bring that experience to the forefront and tell you what we do see to be the most effective. 

Understand exactly what you're trying to learn from folks. Put yourself in their shoes: what is going to be meaningful to them? What is going to make sense to someone who's not from our industry?  

I'll just use what Rob said earlier: talk to them like they're a family member. I've got several siblings; we all do different things. They'll talk to me about their jobs, and I have no idea what the hell they're talking about most of the time.  

The way you do this is you put it in words, and in a way, that relates to them. Always think about it from the other person's perspective; UX matters.  

As I said before, get those termination questions upfront. Let's make sure that you are focusing on the questions that are really going to allow you to understand who you're inviting into the research. And recognize that questions that aren't terminating are things that you can cover off once you get into the research.  

As I said on the previous slide, making personas and grouping questions together not only allows you to ask fewer questions, it's also much more humanizing. It lets people pick the persona they most relate to. If you turn it into something relatable, it makes it much easier for people to engage.  

I'm whipping through slides because I know we're low on time.  

The mom test, and maybe we touched on this before, is about putting yourself in the shoes of others. Imagine: would this make sense to me? Would I know what was being asked of me? Would I want to participate in it? Is it something that's interesting to me? Would I understand what you're trying to achieve by doing research with me?  

Think about it from their perspective. Show them that you really care. As you are doing these things have some type of a statement that talks to them about what you're trying to achieve with your brand, product or service or let that come through in the way that you ask your questions. That way they can see what you're trying to do with your research. 

I talk about human behavior a lot. At the end of the day, that's what we're trying to do with your research: understand human behavior. Don't lose sight of that.  

That is what you need to be thinking about as you are setting up screeners and thinking about the way that you're trying to invite people in. Think about the human behavior side of things. That is an enormously important thing overall. We love working with people, our consumers and our business partners.  

Whoever you partner with, whether it be us or another partner, treat them as partners. Talk through things transparently, have them understand what you're trying to achieve from an objective standpoint, and you'll have partners that come to the table, not vendors. I'd encourage everyone to lean into that thought process as you try to find the right folks for your research.  

And with that, I think we're at the Q&A portion and I only went three minutes over.