Discovering the right ways to integrate gen AI into the research process
Editor’s note: Jack Miles is the senior research director at Northstar/HarrisX, London.
We face psychological barriers in getting colleagues and clients to use generative AI tools as part of the research process. But we can overcome those barriers, and navigate the tricky crossover period in which we’re combining generative AI with traditional ways of working. Doing so requires combining behavioral science and organizational psychology.
You’re likely one of the insight professionals currently asking, “How do I best use generative AI?” That’s not surprising: we were barely searching the term “generative AI” online until March 2023, and now it’s the topic we’re all talking about.
The psychological challenge of implementing generative AI research tools
Talking about generative AI is easy though. What’s harder is implementing generative AI into our research process in a way that colleagues and clients buy into.
In theory, Homo economicus thinking says that generative AI is fast and cheap, and that this should be enough to ensure researchers use it and clients buy into it. But using generative AI in research means breaking old habits and forming new ones.
Recent research by Mark Pollard at Strategy Sweathead suggests that making these changes is hard: most client-side marketers are dabbling with generative AI without making it a formal part of their work. Using generative AI on the client side, in other words, isn’t yet ingrained or habitual. But why? Luckily, behavioral science can help explain.
Old research habits die hard
Research from Wendy Wood shows that the longer we’ve held a habit, the harder it is to break. I’ve been writing questionnaires without prompt engineering for 16 years. Businesses have relied on human-generated (not synthetic) data since research began in the 1930s. Breaking such hardened habits is psychologically tough.
The research region of rejection
We've used many of today's research practices for a long time. This means that generative AI risks falling into what Jonah Berger calls the “region of rejection.” This is when new thinking falls so far outside of someone’s current beliefs that they aren’t persuaded by it.
Imagine a researcher who, for 20 years, has written notes while watching interviews, annotated them and mapped out their reporting framework. Now tell that person a computer can do all of this. They’ll likely feel this new way of working is far removed from their current behavior, which will then cause them to reject it.
Consequences outweigh rewards
We’re psychologically hard-wired to prefer the status quo and to avoid risks. This is even more prevalent in B2B situations. Rory Sutherland notes that risk aversion is higher in B2B: the consequences of a wrong B2B decision are more severe than the reward for a correct one. This means researchers in the “region of rejection” are also likely to be risk averse.
Too easy to believe
We think products are good quality when we can see the effort that has gone into them (the “labor illusion”). Part of generative AI’s allure is that using it takes little effort. And generative AI tools are being released so quickly that the pace itself signals quality issues. Data quality is research buyers’ top priority, so this is a clear barrier to AI becoming a normalized part of research.
Naysayers see what they want to see
Generative AI has existed long enough for everyone to know the general criticisms of it. It’s too general. It doesn’t know enough about context. It draws on dated information.
These criticisms may be fair, but they mean that when we show colleagues or clients how generative AI tools can help research, to quote psychologist Thomas Gilovich, they “see what they expect and conclude what they expect.” The result? People don’t see past generative AI’s current shortcomings to realize its benefits.
Enough about the problems. What about solutions?
There are several options to help us overcome these psychological barriers, but they aren’t all appropriate. For example, uncertain rewards can increase the motivation to change behavior, but they’re unlikely to negate the B2B world’s risk aversion. Phillippa Lally’s work suggests we’d need to repeatedly try to motivate risk-averse colleagues and clients, but that risks driving reactance.
Generative AI can clearly improve how we do research. But in many cases, the way it does so is a big leap from how we do research now (and have done for years). This means the best way to add generative AI to research is to phase it in over time in small steps, or as behavioral scientists call it, “chunking.”
Chunking’s potential and problems
Katy Milkman’s work on chunking shows that breaking changes into small steps is more effective than attempting one big change. This is great, but it means there’s a crossover period where we’re doing research by combining generative AI with old ways of working. And, as history has shown, combining legacy and emerging systems is complex.
In the early 1900s, cars started to use Detroit’s roads, roads that were already being used by pedestrians and horses and were now shared with untrained drivers. In 1917, this caused 7,171 accidents and 168 deaths.
Society faced similar problems when telephones first launched. As phones became more popular, the way they were wired had to change, leaving phone wiring systems incompatible with the systems in people’s houses. The transition issues didn’t end there. Once telecommunications firms solved the wiring problem, phones surged in popularity, so much so that the networks couldn’t handle the traffic. This prompted the invention of peak call charges, which aimed to put people off making non-essential calls.
So, what are some of the equivalent transition problems we face as we try to make generative AI part of the research fabric? Some include:
- How do we use chatbot interviewers within standard surveys?
- How do we ideate using ChatGPT and people in the same room?
- How do we triangulate synthetic and human-generated data?
- How do we sell all the above into clients and their stakeholders?
Navigating the crossover period
There’s no silver bullet for answering such questions, but there are techniques you can use to chunk generative AI into your research process and overcome the psychological barriers you’ll face in doing so.
Chunk-up your research process
Map out your research process and all the tasks it entails from start to finish, then align generative AI tools against each part. From there, identify where the biggest improvements can be made (a minimal scoring sketch follows the list below), judged on:
- How you can improve your work.
- The quality of the tools available.
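To make this concrete, here is a minimal sketch of that prioritization in Python. The stages, tools and scores are hypothetical placeholders, not a prescribed taxonomy; the point is simply to rank each step of the process by improvement potential weighted by tool quality.

```python
# Hypothetical scoring of generative AI opportunities across a research process.
# "improvement" = how much a tool could improve the task (1-5, your judgment);
# "tool_quality" = how mature the available tools are (1-5, your assessment).

stages = [
    {"task": "Questionnaire drafting",  "tool": "LLM prompt templates", "improvement": 4, "tool_quality": 4},
    {"task": "Desk research",           "tool": "AI search assistant",  "improvement": 5, "tool_quality": 3},
    {"task": "Interview transcription", "tool": "Speech-to-text model", "improvement": 3, "tool_quality": 5},
    {"task": "Report writing",          "tool": "LLM drafting aid",     "improvement": 4, "tool_quality": 3},
]

# Rank by combined score so the easiest, most visible wins surface first.
for stage in sorted(stages, key=lambda s: s["improvement"] * s["tool_quality"], reverse=True):
    score = stage["improvement"] * stage["tool_quality"]
    print(f'{score:>2}  {stage["task"]:<25} via {stage["tool"]}')
```

The highest-scoring tasks become your first chunks; the low scorers can wait until the tools mature or confidence grows.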
Create credible chunks
Integrating generative AI tools successfully will depend on the chunks you create and the order in which you work through them. Start with the easiest wins that have the most visible gains.
This will give you, your colleagues and your clients confidence. It’ll show that generative AI tools are valuable to research and help overcome fears one by one.
Understand the tools you’re trying
Before you try to integrate a generative AI tool into your work, understand how it works. What data sources was it trained on? How does it interact with your current platforms and ways of working? What kind of behavior change will result from it?
Knowing this will create transparency and it'll help overcome the uncertainty of new ways of working. People familiar with generative AI tools are less likely to reject them.
Think operationally and practically
Before you trial any generative AI tool, learn if and how it can work with your digital research tools. This is especially true for data collection tools: some will be compatible, others partially, some not at all. Knowing this will tell you whether you’re trialing a generative AI tool’s full capability or just what’s possible when you merge it with an existing platform.
Experimentation and triangulation
The acid test for any generative AI tool is to measure how it compares to the status quo. But key to this is making sure that you know what you’re comparing.
For synthetic data, you might test how its results stack up against survey data; for writing tools, you might compare the Flesch-Kincaid scores of AI-written drafts against reports researchers have written (a sketch of this follows below). The right experiment design needs the right comparison points, and these are critical to overcoming risk aversion.
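As an illustration, here is a minimal sketch of the readability half of that comparison in Python. It implements the standard Flesch-Kincaid grade-level formula with a naive syllable heuristic; the report texts are placeholders, and in practice you’d load real human-written and AI-written reports (or use an established readability library).

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

# Placeholder texts; swap in real human- and AI-written reports.
human_report = "Respondents valued price above brand. Loyalty was strongest among older buyers."
ai_report = "The findings indicate that price considerations predominated over brand-related factors."

print(f"Human report FK grade: {flesch_kincaid_grade(human_report):.1f}")
print(f"AI report FK grade:    {flesch_kincaid_grade(ai_report):.1f}")
```

Comparable scores across a sample of reports are one piece of evidence that an AI drafting tool isn’t degrading the readability of your outputs; they say nothing about accuracy, which needs its own comparison points.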
Find and use your generative AI advocates
One of the best ways to overcome psychological barriers is to use the messenger effect, whereby we judge information based on its source. This means it’s important to make senior colleagues and clients aware of the advances you’ve made using generative AI tools in your work, then get them to tell others about it.
Equally, as your colleagues start to use generative AI tools, get them to make it known that they’re doing so. This will create social proof: we’re influenced by what others do, and it compels us to act within the norms we see around us.
Let’s not make generative AI the next electric car or virtual reality
People are slow to adopt big behavior changes. Look no further than the electric car: the first was designed in 1888, and it took over a century for electric cars to become mainstream. The first virtual reality headset was designed in 1968, and more than 50 years later we’re still using the technology tentatively.
Let’s not make the same mistakes with generative AI and research. Yes, the technology is important, but understanding how to get colleagues and clients to use it matters more.