Lessons Learned from Two Years of Generative AI in Market Research 

Editor's note: This article is an automated speech-to-text transcription, edited lightly for clarity.   

Quantilope sponsored a session during the January 30, 2025, Virtual Sessions – AI and Innovation series. The session focused on the lessons the organization learned about gen AI in the last two years.     

The presenters from quantilope, Bea Capestany, Ph.D., senior global director, solutions consulting, and Jannik Meyners, Ph.D., senior director of data science, laid out several lessons they learned while developing Quinn, quantilope’s AI copilot. Capestany and Meyners also demonstrated the benefits of using AI in conjunction with researcher expertise.

Session transcript

Joe Rydholm 

Hi everybody and welcome to our session, “Lessons Learned from Two Years of Generative AI in Market Research.” 

I’m Quirk’s editor Joe Rydholm and before we get started let’s quickly go over the ways you can participate in today’s discussion. You can use the chat tab to interact with other attendees during the session and you can use the Q&A tab to submit questions for the presenters during the session. They will answer as many questions as we have time for during the Q&A portion.  

Our session today is presented by Quantilope. Jannik, take it away!

Jannik Meyners 

Thanks for the introduction.  

Hi everyone, and welcome. I'm Jannik, joined today by my colleague Bea.  

In this talk we would like to take you on an exciting journey and look back at two years of generative AI in market research. Over the past years, we've seen everything from sky-high expectations to practical realities and challenges.  

Today we will share key lessons from this journey, including our own experience integrating AI at quantilope, and how you can apply these insights to your own AI or research initiatives.  

For those not familiar with quantilope, quantilope is an end-to-end consumer intelligence platform. We are very proud that we were recently ranked the number one research technology by GreenBook. Our end-to-end research platform offers the largest suite of automated advanced research methods, innovative tracking technology and AI. 

Before we dive into today's topic, let us quickly introduce ourselves. 

I'm joined today by my colleague Bea, who leads our solutions consulting team at quantilope. She's our expert in how to turn AI insights into practice.  

As for myself, I'm Jannik and I lead quantilope activities around advanced research methods, machine learning and AI, including the continuous development of our AI copilot, Quinn.  

My background is rooted in marketing science where I received my Ph.D. I've spent most of my career at the intersection of data science and marketing science working across academia, consulting and tech.  

My passion has always been making advanced analytics widely and easily available regardless of experience levels. This is also what we are pursuing with our research platform at quantilope.  

So, talking about AI, there's obviously a lot of buzz around AI right now and it's easy to get burned out by all the AI content nowadays. But before we dive into that buzz, let's take a step back first.  

While we are talking about two years of generative AI, it's important to note that AI has actually been part of market research long before ChatGPT. Most of us have been using it for quite some time already, sometimes even without noticing that it's AI. 

To provide a few examples, our industry has been leveraging AI in several key areas for years.  

First, in advanced data analysis: using AI or machine learning to process large amounts of unstructured data or automating complex methods that traditionally require deep statistical expertise.  

Second, for qualitative analysis: think processing open-ends, auto coding and analyzing video surveys for content, sentiment or emotions.  

Third, AI has been crucial in analyzing third party data like social media content or customer reviews.  

But let's be honest: even though we've all been using AI in market research for quite some time (and the examples we just shared are of course not an exhaustive list of AI applications, there are many more), things did change.  

ChatGPT's launch in late 2022 shook things up dramatically. Why is that?  

Because suddenly AI wasn't just this specialized niche thing that required Python coding skills; it became accessible to everyone, leading to widespread adoption and opening our eyes to seemingly endless possibilities in our daily work. This evolution brought opportunities, but also new expectations, some realistic, some perhaps a bit unrealistic and too optimistic.  

Speaking of expectations, Bea, you had kind of a front-row seat watching how this AI buzz specifically impacted the insights industry. So, what did you observe?  

Bea Capestany 

Yeah, thanks, Jannik.  

I mean, looking back at the past few years and the webinars and conferences that I've been to, we've seen a really fascinating shift in the conversation.  

Early conversations were dominated by broad sweeping statements about transformation and revolution with headlines promising everything from unlocking unprecedented insights to redefining consumer behavior.  

That makes sense because our industry was grappling with fundamental questions like, is AI actually going to replace us or is it totally going to transform how we do research?  

But here's what's been really interesting. We are now seeing a more nuanced and practical evolution in how we talk about AI. Instead of those sweeping declarations about AI revolutionizing everything, we're having more grounded conversations about specific applications and use cases.  

So, we're asking questions like, how can AI enhance our existing research expertise? Or where does it fit into our current methodology? How can we use it to make our insights generation more effective?  

But I do still think that we're missing something important in a lot of these discussions. There's not really enough focus right now on how AI can pragmatically enhance our existing research expertise and methods. 

What we want to explore with you today is how AI has evolved from being viewed as this hypothetical revolution to becoming something much more valuable, a powerful complement to human expertise and insights generation.  

To summarize this, what we've seen over the past two years is that we're well beyond viewing AI as just an efficiency tool. We're in what I would call a ‘true augmentation phase,’ where insights professionals and AI work together as a powerful team.  

Now think about what this really means. Sure, AI can handle the heavy lifting of data processing and routine tasks like initial report drafting, but it's not just about replacement, it's about enhancement. The most successful teams that we're seeing are those who understand this partnership dynamic. When AI takes care of these time-consuming tasks, it frees us to focus on what humans do best, critical thinking, strategic interpretation and adding the context that transforms data into actionable insights.  

Yes, there's a learning curve with these tools, but the payoff is clear. Insights professionals who embrace this partnership are not being replaced by AI, they're being empowered by it. They're delivering deeper insights because they can focus on the strategic work that truly moves the business forward.  

Let me pass it back to you, Jannik, to start us off with our first learning as we've personally gone through this journey and evolution ourselves.

Jannik Meyners  

Yeah, so let's dive into our first key learning from the past two years, and that's already quite an important one. 

Common generative AI tools are not built for advanced data analysis. While there was initially a lot of excitement about using tools like ChatGPT for literally everything, including complex analysis, we, and probably many of you as well, quickly discovered important differences in what gen AI tools are good at and where they fall short, especially for market research.  

Now this doesn't mean these tools aren't valuable for market research, quite the opposite. They excel in several areas across the whole insights workflow.  

Let's look at a few examples of how gen AI can support along a typical research journey.  

They're already super helpful at the very beginning of the journey, be it for research design, helping refine broad goals into specific questions, or for identifying the best approaches or methods for my use case.  

They can also be great at generating an initial draft of my questionnaire, suggesting alternative wordings for specific questions or helping brainstorm attributes for a specific category, for example.  

Yes, they can also assist with parts of data analysis, especially for digging into more qualitative data like open-ended texts or for initial data exploration and pattern finding. They can also help to interpret more complex methods that I might not be too familiar with. 

Last but not least, they can be quite powerful for generating insights summaries and getting initial ideas for business implications. 

As you can see, gen AI tools can be of great help for many things. I'm sure that most of you have already used it for one or another, but, and this is crucial, there are some fundamental limitations that we need to understand.  

Here's where we need to be careful with commonly available gen AI tools because they have some significant limitations. They're just not reliable when it comes to repeatable data analysis.  

You might ask the same question twice and get completely different answers and none of them might be actually correct.  

They also struggle with accurate statistical testing and problem solving, especially when it comes to more complex methodologies or data. They simply can't handle sophisticated data analysis and algorithms the way specialized tools or models can.  

So, we are still dealing with language models here. Math and algorithms are not what they were built for, what they were trained on or what the model architecture is optimized for. And, very important for us in market research, they can't independently generate reliable insights without a proper analytic foundation. 

So, all of this led us to a realization and to develop a better way of thinking about how to use these tools. 

What we've seen is that LLMs work best when they're used as a layer on top of sound data analysis and not as a replacement for it.  

Think about it like building a house. You need a solid foundation of quality-checked data and validated analytical methods. On top of that foundation sits your processed analysis. Only then do you bring in the LLMs to help generate insights and actions based on these findings. 
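To make this layered approach concrete, here is a minimal sketch in Python. It is purely illustrative, not quantilope's actual code; function names like `analyze_preferences` and `build_llm_prompt` are hypothetical. The point is the ordering: deterministic, validated analysis runs first, and the LLM only receives the processed findings to narrate.

```python
# Illustrative sketch: the LLM sits on top of validated analysis and is
# only asked to narrate results it is handed, never to compute them itself.
from statistics import mean

def analyze_preferences(responses: dict[str, list[float]]) -> dict[str, float]:
    """Deterministic 'foundation' step: aggregate quality-checked scores per label."""
    return {label: round(mean(scores), 2) for label, scores in responses.items()}

def build_llm_prompt(results: dict[str, float], objective: str) -> str:
    """The LLM layer receives only the processed findings, not raw guesses."""
    ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
    lines = "\n".join(f"- {label}: {score}" for label, score in ranked)
    return (
        f"Research objective: {objective}\n"
        f"Validated findings (higher = stronger preference):\n{lines}\n"
        "Summarize these findings for a business audience. "
        "Do not invent numbers beyond those listed."
    )

results = analyze_preferences({
    "organic": [0.8, 0.7, 0.9],
    "lactose-free": [0.4, 0.5, 0.3],
})
prompt = build_llm_prompt(results, "Which milk labels drive purchase?")
# `prompt` would then be sent to the LLM provider of your choice.
```

Because the numbers come from the analysis step, the same question always yields the same findings; only the wording of the summary varies between LLM calls.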

We've listed a few of the most known LLMs here and the most used ones, but there's of course many more providers. You can pick your favorite model.  

By following this approach, we can actually get the best of both worlds: robust analytics combined with gen AI's ability to communicate and synthesize information.  

This is exactly the approach we are also taking with our AI developments at quantilope. This also creates the trust and the reliability that is needed by our clients, our industry and by ourselves to actually adopt AI in our daily work.  

With that being said, this gets us to our next important learning. Bea, handing it back to you.

Bea Capestany 

Yes, thank you.  

Our second important learning, as Jannik said, from the past two years has been that AI is not a simple on off switch. It takes time and it is a journey that requires patience and strategic thinking.  

In fact, we have some numbers that tell a really interesting story.  

Organizational advocacy for AI has moved from 39% in 2023 to 50% in the past year, and 62% of researchers completely or mostly believe that AI will increase their productivity. This gap between belief and adoption suggests that there's a clear path forward if we're thoughtful about how to get there. 

What does the path forward look like for us?  

So, what I'm about to show you is what quantilope has developed as the AI insights curve. And something unique here is that teams can start off at two different points. You can start with some apprehension or with optimism. 

Think about it as two different doors that lead to the same room. Now, if you're starting here at the bottom with apprehension, you are not alone. Many, many teams begin here, viewing AI as a threat to their expertise rather than an opportunity. The key is focusing on practical education and tangible success stories.  

Now on the flip side, you may start off from a place of inflated optimism where there's a rush to adopt everything AI without a clear strategy. If you're here, if you're starting from optimism, we would recommend setting clear expectations and building on your existing capabilities rather than trying to transform everything overnight.  

Now the middle of the curve is where the real work starts to happen in the realism phase. Here you will face technical challenges, but you will also have breakthrough moments. Success here is not about perfect implementation, it's about identifying high value use cases and investing in the right training.  

As you move through the curve into strategic adoption, you will find that sweet spot where AI and human expertise complement each other. This is where many teams start building AI centers of excellence and developing comprehensive strategies that align with their business objectives. 

Finally, of course, the ultimate goal would be to reach maturity where AI and human work is seamlessly integrated. But this does not happen overnight. I can't stress that enough, and you need to get there sustainably.  

The most successful teams who get here will focus on augmentation over replacement. They'll start with clear high value use cases and maintain human expertise as the guiding force.  

Now again, this is definitely not a race. So, whether you're a global insights team with a lot of resources or you're just starting to explore AI's potential, success is going to come from moving through these stages thoughtfully building both capabilities and confidence along the way.  

Alright, Jannik, back to you for our last learning.  

Jannik Meyners 

Thanks, Bea. What you're saying about gradual adoption is not only super important, but it also really connects to our third key learning: AI works best as a collaborator and not as a substitute.  

This might sound obvious now, after what we already shared, but this wasn't always so clear for many people in our industry, nor for us when we started to design, implement, and develop our own AI applications.  

Let me explain what we learned from that and why it is so crucial in the way we built our AI.  

When we first started designing and developing our own AI features and implementation, we had this idea that we could, or should, automate everything with just a single click, be it entire questionnaires or complete reports: the ‘AI will do everything’ approach. But we quickly learned three important things along the way.  

First, there's no desire for AI to do it all. 

When we talked to our users, they were crystal clear about this. They don't want to hand over the entire research process to AI, but rather the tedious or time-consuming tasks. They want to remain in full control of the overall process and apply their expertise.  

It's not about fear of AI replacing them, but rather about maintaining their own standards and their domain expertise, using AI to help them be more efficient, not to take over important tasks. 

This goes hand in hand with the second point because AI can’t read your mind. I guess we're all quite happy about this, but it's also crucial to understand and important for how to implement AI properly and make sure that the adoption works.  

So, we found that trying to make AI guess or assume what researchers want often leads to frustration and inefficiency. Instead, we discovered and learned that it's much more effective when researchers actively guide the AI. So, tell it specifically what they need and iterate on the results.  

Think of it as working with a very, very capable junior colleague. The more clearly you communicate and instruct what you need, the better the output. 

Third, good AI implementation requires human expertise to work effectively, which connects well with the other learnings. The best results come when we combine AI's computational power and efficiency with human research expertise: the researcher's knowledge of the business context, understanding of the research objectives and ability to validate and interpret findings.  

These are all crucial elements that AI can't easily replace. Instead, AI should enhance or augment these capabilities, making researchers more efficient and enabling them to focus on the more strategic aspects, I'd say the more fun or creative aspects, of their work.  

This is what we found: true efficiency comes through this tailored collaboration, like having a conversation and iterating together, rather than just pushing a button and hoping for the perfect dashboard to appear. It's rather unlikely that such a dashboard would be perfect or actually what I wanted.  

This is why we focus on making our AI features more interactive and more collaborative, always keeping the researcher in the driver's seat. This is also what we've seen in how our clients work and want to work with AI, how we built our AI copilot, Quinn, and how we can facilitate the adoption of AI by making these smart design decisions.  

We've put together all the learnings, which we just shared, that we could observe in the past years.  

Before we get into seeing Quinn in real-life action, let's see how Quinn looks.  

You can see here a sneak peek of our AI copilot, Quinn. 

Quinn is fully integrated into quantilope's consumer intelligence platform. Quinn can educate, empower and execute tasks throughout the entire advanced research process. But like I said, before we get into the demo, I would like to use the time a bit to show how we solved this technically. 

Like I said, this is how we actually put all these learnings into technical practice. We use the example of generating insights from a user-entered research question. What you see here is our approach to combining AI-based insights automation with validated research methods, essentially putting into action what we've seen. 

We start with an integrated chat interface that enables iterative and collaborative work because as we learned, that's how researchers want to work with AI. This is what's most efficient at the core.  

We have two types of AI agents working together. One agent orchestrates all incoming requests and tasks, and there are more capabilities beyond that.  

The one we focus on here in our example is our strategic insights agent, which is called by the orchestration agent in this example. I know this is technical, but it's really fun, believe me.  

What's crucial is how we've structured everything around them. The workflow of generating insights moves through several automated steps in which different AI agents actually emulate the cognitive insights process, be it hypothesis generation, insights validation and many more.  

Then we bring this into the context of the overall research objective. This entire system is underpinned by two essential foundations.  

First, a lot of research expertise and our own data, which we collect with profound quality and validation checks. Second, our advanced methods with scientifically validated models and algorithms, relating back to our first learning.  

This ensures that we're not just relying on AI alone but actually combining it with proven research methodologies. The output is editable charts and insight summaries that researchers can review, modify and use with confidence. Our users always stay in full control.  

This design reflects our earlier learnings: AI as a collaborator, not as a substitute; building a solid analytical foundation; keeping the researcher in control of the process; and building the trust and reliability that foster adoption.  
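As a rough sketch of the agent setup described above (class and function names here are hypothetical, chosen purely for illustration; quantilope's real implementation is certainly far more involved), an orchestration agent routes each request to a specialist agent, which walks it through explicit steps like hypothesis generation and validation against quality-checked data:

```python
# Illustrative sketch only: an orchestrator dispatches incoming requests to
# specialist agents, and the insights agent emulates the cognitive insights
# process step by step (hypothesis generation, then validation).

def generate_hypotheses(question: str) -> list[str]:
    # In a real system this would be an LLM step; here it's a deterministic stub.
    return [f"{question} differs by demographic segment",
            f"{question} has shifted over time"]

def validate_hypotheses(hypotheses: list[str], data: dict) -> list[str]:
    # Validation runs against real, quality-checked data, never LLM guesses.
    checks = {"demographic segment": data.get("has_segments", False),
              "over time": data.get("has_waves", False)}
    return [h for h in hypotheses
            if any(key in h and ok for key, ok in checks.items())]

class StrategicInsightsAgent:
    """Walks one research question through the insights workflow."""
    def run(self, question: str, data: dict) -> dict:
        hypotheses = generate_hypotheses(question)
        validated = validate_hypotheses(hypotheses, data)
        # Output stays editable so the researcher keeps full control.
        return {"question": question, "insights": validated, "editable": True}

class OrchestrationAgent:
    """Routes each incoming request to the right specialist agent."""
    def __init__(self) -> None:
        self.agents = {"insights": StrategicInsightsAgent()}

    def handle(self, task_type: str, question: str, data: dict) -> dict:
        return self.agents[task_type].run(question, data)

result = OrchestrationAgent().handle(
    "insights", "Preference for milk labels", {"has_segments": True})
```

The design choice to keep validation as a separate, data-driven step is what makes the output reviewable: the researcher sees which hypotheses survived the data, not just a fluent summary.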

So, Bea, after this technical part, which I probably enjoyed most, I can't wait to see how it actually looks in action.

Bea Capestany 

Yes, actually, when I first understood how complex it was, it just blew my mind. But luckily users don't have to actually know any of that in order to use Quinn. 

I am actually going to look to the side here, and I'm going to navigate into the quantilope platform to briefly show you how AI has been integrated into the research workflow within our platform. Really what we want to show you today is how it's acting as your collaborator, no matter where you need support. 

I am inside of the quantilope platform, and I am highlighting a study that we did to understand food trends. We actually used a MaxDiff approach to understand consumer preferences for milk labels. Now this is actually the output of our MaxDiff.  

Let's say I'm looking at this, maybe I'm more of a junior researcher or someone who has never heard of or has never used a MaxDiff before. Well, luckily, I can actually just click on Quinn here and Quinn will begin to help you interpret individual charts. So, you can think of Quinn in this case as a knowledgeable colleague. 

You can see that Quinn has generated a chart title here. If I zoom in just a little bit, you can see that it's even starting to help you interpret what the MaxDiff results are. 

Now if I zoom back out, I can guarantee that about half of you on this call are probably looking at this chart and saying, ‘Hey, I could interpret a MaxDiff in my sleep.’ Fine. Totally get it. I hear you. I dream about MaxDiff myself.  

So, let's go ahead and get a little bit fancier.  

Rather than having to comb through all of the data and all of the analyses, let's have Quinn give us an overview of the findings.  

Let me actually go ahead and pull up the Quinn chat, and I'm going to ask Quinn to give me the main findings of this study.  

In just a few seconds, you can already start to see that Quinn isn't just going to summarize or regurgitate data for you. It's actually going to draw from the research expertise that we have built into it, using what we already know about the MaxDiff method.  

You can actually see here that it has given me a little blurb about what this analysis is looking at.  

So, it's looking at factors influencing purchasing decisions in the milk and milk alternatives market, cow milk leads, but plant-based alternatives are gaining popularity, sustainability influence and consumer loyalty. It's giving you a few little summarizations here.  

Actually we can go ahead and go to the report group, and we can start to see that it's given us this summary and it's actually pulled several charts for us already.  

All right, so this is pretty good, right?  

It's pulling some interesting data for you here, but I feel like I know what you're thinking. Again, maybe you're thinking, ‘this is cool, but obviously my team is way more sophisticated. They want to go deeper.’ And let me guess, you guys have stakeholders that are the type to ask you about every single subgroup imaginable. I think that this happens to every single researcher out there.  

So, let's actually preempt some of those questions that we might get from our stakeholders and save a little bit of time. 

Let me go ahead and open up Quinn, and I am asking if there are any differences between males and females in their preferences for milk labels.  

So, again, here, Quinn is going to think about this. It's working on our request, and in just a moment you'll see that it is going to help us understand these subgroup splits. You'll notice in just a minute that it's going to point us to some differences.  

Here we have it: females prefer USDA organic and sustainably sourced labels. Let's actually go ahead and look at the charts that it created for us here. 

It created this nice MaxDiff chart split by males and females. You can actually see the live stat testing as well. So, it's giving you a little bit of an overview of what these differences are, but we still have a lot of room to apply our expertise in this interpretation. We always want to be using it together with our own expertise.  

Now the last thing I want to show you is that of course we all as researchers want to be prepared for anything and everything. I won't blame you that you didn't look through every single chart or every single subgroup split in your data. Maybe we didn't explore all of the data as much as we'd liked, or we were overly focused on proving out the hypothesis that our bosses wanted to see.  

Now what we can do is get a little bit more nuanced and we can actually go ahead and ask Quinn, ‘hey, are there any contradictory or surprising results across any of this study?’ And again, Quinn is going to think about this for you. You'll see in just a second that it's going to start helping you understand and identify unexpected trends or interesting contradictions.  

Here, for example, you can see that younger consumers value sustainability more but are averse to vegan and vegetarian labels, which might have been a little bit counterintuitive. Millennials are more open to trying plant-based milk.  

So, we can already start to explore what some of these contradictions might look like. And again, it's helping you look through some of these charts.  

Now, of course, there is so much more to do here and that we can do with Quinn, but hopefully this was enough of a teaser to get you all really excited about the evolution and use of AI in our lovely insights industry. 

With that, I can say thank you to all of you for being here, and we'll open it up for questions.