How generative AI can affect the insights industry and research results
Editor’s note: David Hunt is the 2024 president-elect at INFORMS, an association for the decision and data sciences. He is also vice president at Oliver Wyman.
Generative artificial intelligence – the use of computers to generate words, images, voice and videos indistinguishable from human-created content – is transforming every industry. Technology advances and new use cases appear daily with no slowdown in sight, as gen AI is estimated to add $30 trillion annually to the world economy by 2030. Much of the impact is positive, including the ability to quickly synthesize massive quantities of data and text, freeing humans to spend their time on more creative tasks where we still excel. But there are also serious risks that must be mitigated.
We are living in the information age, but that information is becoming less and less trustworthy due to the widespread use of gen AI by malicious actors. For information-rich industries like marketing research, this has profound implications for how work is done and for the integrity of results. Of particular concern are deepfakes, defined by the Department of Homeland Security as “an emergent type of threat falling under the greater and more pervasive umbrella of synthetic media … to create believable, realistic videos, pictures, audio and text of events which never happened.” In the wrong hands or through negligence, gen AI and deepfakes can compromise data integrity, erode consumer trust and damage brand reputation.
Data integrity is compromised
Managing biased, outdated and poor-quality data is not a new problem for market researchers. Also not new are efforts by malicious actors to provide misleading information to boost their cause and damage their competitors. These risks can be mitigated through multi-source validation, review of trusted original content and hard work by well-informed staff.
The difference with gen AI and deepfakes, however, is that misleading information can be produced with higher quality, in greater quantity and by more people. In other words, false information is more believable and plentiful, thus making it harder to distinguish fact from fiction. Gen AI can be used to fabricate responses in surveys, create fictitious focus group participants and manipulate interview recordings.
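One practical line of defense against fabricated survey responses is screening open-ended answers for near-duplicate wording, a common fingerprint of low-effort bot or gen AI batches. Below is a minimal Python sketch using only the standard library; the response data and the similarity threshold are invented for illustration.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical open-ended survey responses; in practice these would be
# loaded from a survey platform export.
responses = {
    "r001": "The product is easy to use and the price is fair.",
    "r002": "I find the product easy to use, and the price is fair.",
    "r003": "Shipping took too long and the box arrived damaged.",
}

SIMILARITY_THRESHOLD = 0.85  # illustrative cutoff; tune on real data


def near_duplicates(items, threshold=SIMILARITY_THRESHOLD):
    """Yield pairs of response IDs whose text is suspiciously similar."""
    for (id_a, text_a), (id_b, text_b) in combinations(items.items(), 2):
        ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if ratio >= threshold:
            yield id_a, id_b, round(ratio, 2)


for id_a, id_b, score in near_duplicates(responses):
    print(f"Flag for human review: {id_a} vs {id_b} (similarity {score})")
```

Screening like this will not catch a sophisticated fabricator, but it cheaply surfaces bulk, low-effort fakes for human review.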
Deepfake videos are particularly troubling because of people’s natural inclination to believe what they see. False videos of politicians, business leaders and celebrities spread quickly and become more effective at influencing viewpoints as their quality improves.
All forms of synthetic media (videos, sound, text) can lead to skewed market research results and potentially flawed business strategies designed around compromised data. Furthermore, the speed with which a deepfake can spread across the internet can quickly alter prior views and render market research efforts obsolete.
OpenAI, the creator of ChatGPT, recently released Sora, which generates realistic, high-quality videos from text. An example provided by OpenAI uses the prompt “historical footage of California during the gold rush” to create an impressive view from what appears to be a drone flying over a California town by a small river, with people on horseback. Clearly there were no drones or video footage in 1848; the ability to generate high-quality alternative versions of historical events is now in everyone’s hands. To OpenAI’s credit, the videos carry a watermark (the OpenAI logo), and the company provides some guardrails, such as preventing the depiction of real people (like celebrities). However, others will follow with similar technology, and some malicious actors will likely find ways around the guardrails.
Is AI eroding consumer trust in market research?
Media attention on the biases and privacy concerns around gen AI is eroding consumer trust in market research. As more misuse stories arise (both real and fake), consumers grow more skeptical of the authenticity of research findings and the ethical standards of the organizations conducting studies. The story of Cambridge Analytica, which collected thousands of data points on individuals and used them in attempts to manipulate voting behavior, is just one example of the kind of misuse that depresses participation in research activities. Individuals who fear their input can be manipulated or misused either refuse to participate or are reluctant to provide genuine opinions. The loss of trust not only hampers researchers’ ability to collect data but can also bias outcomes and damage the reputation of the market research industry.
The impact of gen AI on brand reputation
Malicious actors intent on damaging the reputation of a brand, company or individual have an increasingly sophisticated set of tools at their disposal. A deepfake video of a CEO making highly political statements or racially insensitive remarks can trigger swift backlash and potential boycotts if the situation is not quickly identified and contained. Some deepfakes are too extreme to be believable, but others are highly sophisticated.
A company can also damage its own credibility and trustworthiness in the public eye if caught using gen AI to misrepresent its brand. Sports Illustrated, for example, used gen AI to write sports articles and created images of fake reporters. None of this is illegal, but by failing to disclose the practice, Sports Illustrated damaged its brand.
Another concern with gen AI is the use, knowing or unknowing, of proprietary content. A market research firm that uses gen AI output drawing on proprietary information without proper attribution can face litigation. The New York Times sued OpenAI and Microsoft for copyright infringement, alleging that millions of Times articles were used without permission to train ChatGPT, and other copyright suits are underway. Market research firms need to be vigilant that any gen AI content they use does not reproduce copyright-protected material.
Mitigating the impact of fake news and fake content
In a consumer trust survey by KPMG, the top concern about gen AI, cited by 67% of respondents, was fake news and fake content, exceeding concerns about privacy, bias and job loss. Mitigating the harmful impacts of fake synthetic media and gen AI-produced content on market research is a high-priority task that requires coordinated action by tech companies, regulators and end users.
Tech companies committed to responsible AI are deploying enhanced data verification, embedding metadata that allows content generated with their products to be identified. They have also established “red teams” that scan outputs for misuses of their products. Twenty tech companies (including Microsoft, Google and Meta) recently pledged to help prevent AI election interference through eight steps they will implement in 2024. The steps include developing new tools to distinguish AI-generated images from authentic content and greater transparency with the public about notable developments.
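As a simplified illustration of the metadata approach (production schemes, such as the cryptographically signed content credentials several of these companies use, are far more robust), the Python sketch below embeds and reads back a provenance tag in a PNG file using the Pillow library. The tag names are invented for this example.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a placeholder image standing in for generated content.
image = Image.new("RGB", (64, 64), color="gray")

# Embed provenance metadata; these key names are illustrative only.
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")
image.save("generated.png", pnginfo=metadata)

# A downstream consumer can read the tags back when vetting content.
with Image.open("generated.png") as received:
    info = received.info
    if info.get("ai_generated") == "true":
        print(f"AI-generated content; generator: {info.get('generator')}")
```

Plain metadata like this is trivially stripped, which is why production systems pair it with signed manifests and watermarks.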
Regulatory bodies struggle to keep pace with the fast-moving world of big tech and face the added problem that regulatory views differ around the world. There has been some progress: the European Union adopted GDPR to protect data privacy, and Canada has adopted rules governing the types of decisions AI is allowed to make. However, there has been little progress on intentionally misleading synthetic media beyond holding hearings and pressuring tech providers to self-regulate.
The best and last line of defense is an educated market research community. A basic understanding of the current technology can help researchers avoid flawed results built on fake data and, on the flip side, keep them from overreacting and discarding useful information for fear that everything is fake. Firms should invest in advanced detection technologies to better establish when content is AI generated. Finally, training in recognizing and properly using gen AI-produced synthetic media should be mandatory for all employees.
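To make the education point concrete, here is a deliberately simple stylometric check. Real detectors use trained models, and no single statistic is reliable on its own; the cutoff below is invented for illustration, not a validated threshold.

```python
import re
from statistics import mean, pstdev


def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths (in words).

    Human writing tends to mix short and long sentences; very uniform
    lengths can be one weak signal of machine-generated text.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return pstdev(lengths) / mean(lengths)


sample = (
    "The launch went well. Customers praised the packaging, the pricing "
    "and the onboarding flow, though several flagged slow support replies. "
    "Overall sentiment was positive."
)
score = burstiness(sample)
# 0.25 is an illustrative cutoff only.
verdict = "low variety, review further" if score < 0.25 else "typical variety"
print(f"burstiness={score:.2f} -> {verdict}")
```

A toy check like this is no substitute for trained detectors, but building and questioning such heuristics is exactly the kind of literacy the market research community needs.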