Panel integrity and data quality

Editor’s note: The author of this article wishes to remain anonymous. This article refers to the author’s experiences in a market where they spent the past 10 years working as an expatriate. 

I have worked at traditional research agencies and within insights roles in advertising agencies. Although I no longer work directly in the industry, I still conduct and use research, analytics and insights. 

I love conducting research, understanding people, thinking and analyzing. However, I am fed up with the industry. I am going to risk offending my colleagues by telling the unspoken truth.

It is time for market researchers to hold other researchers – and themselves – accountable for panel integrity and data quality. 

Fake respondents and fake data

One of the biggest complaints among researchers when I worked in Market X was that the market was rife with fake respondents and fake data. 

Some fieldwork suppliers (especially for offline data collection) ignore the agreement to send you the data in batches. They ignore requests made by e-mail, by phone and through in-person visits. Instead, they send everything in one go on, near or even after the deadline, which makes it nearly impossible to reject the data despite quality issues. 

For example, I once received a data set of n=1,000 in which 90% of the responses were fake; the answers were essentially random. I’ve learned to give the research team a buffer between fieldwork and reporting, but this is a luxury researchers cannot always afford. 
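
Much of this kind of fabrication can be caught with basic screening before reporting. The sketch below is purely illustrative (the file, column names and thresholds are invented, not from any real project), but it shows the sort of checks (speeders, straight-liners, failed trap questions, duplicated answer patterns) that tend to expose fake interviews:

```python
import pandas as pd

# Hypothetical example: screen a batch of interviews for signs of fabricated
# or careless responding. All column names and thresholds are illustrative.
df = pd.read_csv("fieldwork_batch.csv")  # assumed layout: one row per interview

grid_cols = [c for c in df.columns if c.startswith("Q5_")]  # a rating grid

checks = pd.DataFrame(index=df.index)
# Speeders: interviews completed implausibly fast (the cutoff is a judgment call).
checks["speeder"] = df["interview_minutes"] < 5
# Straight-liners: identical answers across an entire rating grid.
checks["straight_liner"] = df[grid_cols].nunique(axis=1) == 1
# Failed trap question: an instructed-response item answered incorrectly.
checks["failed_trap"] = df["Q9_trap"] != 3
# Duplicate answer patterns: possible copy-paste fabrication across "respondents".
checks["duplicate_pattern"] = df[grid_cols].duplicated(keep=False)

# Flag any interview that trips two or more checks for manual review.
df["flag_count"] = checks.sum(axis=1)
suspect = df[df["flag_count"] >= 2]
print(f"{len(suspect)} of {len(df)} interviews flagged for review")
```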

Instead of fixing the fieldwork provider’s problem, research agencies in Market X ask the research team to monitor the fieldwork so that the provider does not make mistakes. In the past, I have wasted more than 50% of my time doing the fieldwork provider’s job just to ensure the respondents and data were genuine. 

Panel and data integrity

I recently asked a few panel companies to explain how they validate their panel members and data, as well as their quality assurance process. While most suppliers gave me a direct answer, Company X responded with a concerted campaign of corporate gaslighting.

First, they avoided answering my question about data integrity by twisting it into a demand for 100% representativeness of the data. Company X deflected from the topic, stating that market research could never guarantee absolutes and should instead focus only on the middle of the bell curves. They also said that there would always be variances and outliers within the bell curves that could never be eliminated.

Then, after I told them this was not my question and that I would never expect a supplier to guarantee 100% representativeness, Company X twisted what I said about outliers. My point was that we should examine the entire distribution of a variable instead of focusing only on the middle (the mean). Outliers are not necessarily problems; they can be phenomena worth investigating (e.g., emerging trends). Of course, they can also simply reflect a known social phenomenon (such as Warren Buffett’s income), uncleaned data (data-entry mistakes) or other errors and uncertainties.
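
As a rough sketch of what I mean (the file, variable name and cutoffs below are hypothetical, not from any real study), a few lines of pandas are enough to look at the full shape of a distribution and to flag, rather than automatically delete, the extreme values:

```python
import pandas as pd

# Hypothetical illustration: inspect the whole distribution of a variable,
# not just its mean, and flag outliers for investigation rather than removal.
df = pd.read_csv("survey_data.csv")  # assumed column: "monthly_income"

# Percentiles show the shape of the distribution, not just the middle.
print(df["monthly_income"].describe(
    percentiles=[0.01, 0.05, 0.25, 0.5, 0.75, 0.95, 0.99]))

# A simple IQR rule to flag, not drop, extreme values.
q1, q3 = df["monthly_income"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["monthly_income"] < q1 - 1.5 * iqr) |
              (df["monthly_income"] > q3 + 1.5 * iqr)]

# Each flagged case then gets a human judgment: genuine high earner,
# data-entry mistake or fabricated interview?
print(f"{len(outliers)} cases flagged for follow-up, not automatic removal")
```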

Finally, Company X subtly discredited my views while making it look like they were trying to help me. 

Company X also challenged my client’s professionalism for planning to use the data in a certain way. 

I want all market researchers to know that when a client has a need, it is our job as service providers to think of a solution. Just because a researcher has not heard of or tried a use case does not mean it is wrong, or that it cannot or should not be done. If everyone lacks imagination, the world will never progress. 

Company X’s behavior was an attempt to hide the fact that they could not promise the integrity of their panels and data. It is up to us as researchers to hold our industry accountable for panel integrity and data quality.