Editor's note: Tom Ewing is content director at BrainJuicer Labs, London.
This article is based on a keynote speech that was originally given at the Australian Market and Social Research Society (AMSRS) annual conference in Sydney last year. When the AMSRS kindly invited me to give a keynote, they had one condition: it had to be upbeat. Keynotes at market research events often have a rather depressing air – constantly stressing the research industry’s need to change, to become more like consultants or technicians or entrepreneurs or else face extinction.
Market research is far from perfect, of course. But those gloomy prescriptions have one thing in common: They assume that research can’t change or is slow to do so. Except it can. In the 15 years I’ve been in the industry, research has changed enormously and that change is ongoing. My AMSRS keynote was a celebration of that change and of an industry with a marvelous capacity to change and adapt.
Even optimistic overviews of market research tend to talk in terms of its future – the promised land that awaits the industry if we just embrace big data/become trusted advisors/get into wearables (insert your own urgent action here). What I wanted to describe wasn’t the future of research but its present – the day-to-day reality of forward-thinking research companies. As Ian Dury put it in his song “Reasons To Be Cheerful”: “Yes, yes my dear/perhaps next year/or maybe even now.” Why wait?
I identified six main changes I saw happening in research – compared to 1999, when I started my career. A couple of the changes have been wider trends that research has found itself forced to react to but most of them are the research industry seizing its opportunities. Research today is regularly using new technology or scientific understanding to improve its practices and invigorate its assumptions.
The six changes cover all elements of research: its approach to measurement, its assumptions about behavior, its desired outcomes, its style of deliverables, its treatment of participants and the client goals it helps to reach.
Measurement: from soft metrics to hard behavior
For a long time, the research industry’s main model has been proprietary metrics – black-box scores created by smooshing together bits of attitudinal data into a single score, then building a norm out of it. Proprietary metrics offer an illusion of the concrete – you can see how well what you’re testing compares to decades of history.
These metrics were a necessity in a world where we didn’t have access to data about outcomes and real behavior. But increasingly brands do have behavioral data: from transactions to searches to locations, much of the information discussed under the umbrella of big data is behavioral. This doesn’t end the usefulness of proprietary metrics based on attitude but it puts them into particular niches – explaining the “why” behind the “what,” for instance. It’s getting easier to compare predictive metrics to outcomes and it’s getting easier to use behavioral data as the basis for decision-making and experimentation.
Attitudinal metrics can still be successful – look at Net Promoter Score (NPS), an extremely simple attitudinal metric whose take-up has been so dramatic that you could almost forget it doesn’t measure actual recommendations and conversions. NPS has the virtue of simplicity but it’s also essentially an open-source metric: Even if you’re not using the official NPS metric, you can customize and insert a measure like it very simply indeed, then analyze it against whichever outcomes are most useful. This agility isn’t shared by most proprietary research models, to put it mildly. It’s likely in the future that simple, agile metrics like this will be more important to research than new black boxes.
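To make the point about how open and agile a measure like NPS is, here is a minimal sketch of the calculation. The 0–10 scale and the promoter (9–10) and detractor (0–6) cut-offs follow the published Net Promoter methodology; the sample ratings are invented for illustration:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Invented sample of "how likely are you to recommend us?" ratings:
# 5 promoters and 2 detractors out of 10 gives a score of 30.
ratings = [10, 9, 9, 8, 8, 7, 6, 5, 9, 10]
print(nps(ratings))  # -> 30.0
```

Because the arithmetic is this simple, any researcher can drop a comparable measure into a survey and correlate it against whichever business outcomes matter – exactly the agility most black-box models lack.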
Behavior: from claims to context
The rise of behavioral data is a trend research has had to respond to, rather than one of its own making, but it is responding. The “world without questions” is becoming more of a reality. When we have more access to behavioral data, research’s role becomes less about establishing claimed behavior (past, present or future) and more about putting that behavior into context so clients can make informed decisions about it.
This is where the recent industry interest in behavioral economics and decision science fits in – it establishes the crucial psychological context of the decisions people make, without which they are difficult to understand and influence.
Interest in decision science has been criticized or belittled in some quarters – people point out that most of the basic psychological principles behind it have long been known and that the foundational behavioral economics work researchers have drawn on is itself decades old.
These criticisms are misguided. When a psychological principle was discovered is hardly relevant, compared to when it is effectively used. And while we have known for a long time that, for instance, granular claimed-behavior data is highly unreliable, this has mostly been accepted as something we can’t do much about: in aggregate, or over time, the errors are less important. But the rise of behavioral data holds out the possibility, at least, of looking at both individual and networked behavior more closely and it’s grounds for optimism that researchers are taking that opportunity.
What does studying decision context actually involve? Not asking questions. The massive U&A survey, studying claimed behavior in obsessive detail, is a dinosaur method. But contextual research can involve quantitative measurement – often getting at implied context by measuring or manipulating decision times. Implicit association testing – now widely used by researchers – works by comparing the tiny differences in response times to different stimuli. Research Through Gaming’s Betty Adamou collects response-time paradata in her “research games.” And companies are using time limits to prevent considered, overly rational thought in research respondents – BrainJuicer has had success doing this in pack testing, for instance.
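The core of these response-time methods can be shown in a toy example. Real implicit association scoring is considerably more sophisticated (the standard approach uses standardized difference scores across trial blocks); this sketch, with invented latencies, only illustrates the basic comparison being made:

```python
from statistics import mean

# Invented response times in milliseconds for two stimulus pairings.
# In implicit testing, faster responses to a pairing are taken as
# evidence of a stronger mental association with it.
congruent = [420, 455, 430, 410, 445]      # stimulus pairing A
incongruent = [520, 560, 505, 540, 515]    # stimulus pairing B

diff = mean(incongruent) - mean(congruent)
print(f"mean latency difference: {diff:.0f} ms")  # -> 96 ms
```

The differences involved are tiny – fractions of a second – which is why this kind of measurement only became practical once surveys moved to devices that could capture timing paradata automatically.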
But looking at decision context is often a highly qualitative process and may not involve participant research at all. Literature reviews of existing psychology, sociology and economics work are becoming an important part of the research toolkit: much better to spend a few days on desk research than a few weeks simply confirming what is already well-known. “Field trips” to study decision environments are another tool and increasingly research agencies are hiring people from design backgrounds who can analyze user experience.
Even when participants are directly involved, it’s often via mobile ethnography – we get a sense of the decision context by seeing and understanding the world around it and what other people are doing.
Outcomes: from hypotheses to experiments
An increased focus on the context of behavior gives researchers better opportunities to change it. Research has always been about using data to generate hypotheses and recommendations for its clients. Increasingly, though, innovative firms are going a step further: We can test those hypotheses ourselves.
The endpoint of a research study doesn’t have to be a survey or qualitative work – it can be (and often is) a real field experiment in a store or a program of split-testing to measure the impact of online changes.
For example, last year BrainJuicer worked with DrinkAware to help understand and tackle binge-drinking in British pubs. (DrinkAware is a charitable organization funded by the British drinks industry which aims to encourage responsible drinking.) Our specific brief was to increase water consumption – British bars and pubs are required by law to offer free water but it is rarely chosen alongside alcoholic drinks. DrinkAware’s hypothesis was that if people drank more water, they would slow their rate of alcohol consumption. Our job was to get them to do that.
We looked at the way water was served and the typical behaviors around it and came up with two very simple activations. One was making water more visible – bottling tap water at the start of the night and placing it in a prominent basket on the bar. The other was a poster, reading simply “FREE WATER AT THE BAR” and showing a close-up picture of a man drinking from a glass of water. The poster was designed to activate mirror neurons, parts of the brain related to copied and learned behavior. Mirror neurons activate in similar ways whether you are performing an activity or only seeing it and we hypothesized that triggering them with the poster would further increase water consumption.
To measure the effectiveness of the interventions we constructed an experimental framework based on a control period (with no interventions) and then alternating on-weeks and off-weeks where combinations of the two interventions were in place. We measured water consumption levels by a combination of counts of the free bottles, manual counts of water drinkers at fixed times, till receipts (for paid water, soft and alcoholic drinks) and other behavioral metrics.
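The analysis behind a design like this is straightforward: compare the intervention weeks against the control baseline. The weekly figures below are invented (the study’s actual data came from the bottle counts, headcounts and till receipts described above), chosen so the arithmetic reproduces the 320 percent uplift reported:

```python
from statistics import mean

# Invented weekly counts of water-drinking occasions per pub.
control_weeks = [50, 55, 45, 50]        # control period, no interventions
intervention_weeks = [210, 200, 220]    # bottles on the bar + poster

# Relative uplift over the control baseline.
uplift = (mean(intervention_weeks) - mean(control_weeks)) / mean(control_weeks)
print(f"uplift in water occasions: {uplift:.0%}")  # -> 320%
```

Alternating on- and off-weeks, rather than running a single before-and-after comparison, helps separate the effect of the interventions from week-to-week variation in the pubs themselves.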
What we found is that the interventions did indeed increase water consumption dramatically – a 320 percent increase in water consumption occasions. Unfortunately, the original hypothesis was not supported, as this level of increase did nothing to slow the rate of alcohol consumption.
The study is an example, though, of how a test-and-learn mentality is spreading through the research industry. The interventions could simply have been left as recommendations – the fact that we tried them and, by trying them, learned that increased water consumption didn’t solve the underlying problems was far more valuable.
Deliverables: from findings to prototypes
As well as going the extra mile in testing hypotheses, researchers are putting a lot more effort into making deliverables more exciting and more likely to be acted on. Like everyone else, research clients – and their internal clients – make decisions led by the emotionally driven System 1. If you produce deliverables that aren’t just a list of findings but that make people feel something, the chances of them being shared and gaining a life within the client organization are far higher. The end of a project should only be the start of a piece of research’s life. But after bringing them into the world, we researchers used to abandon our “babies” and leave them exposed to the sometimes hostile environment of the client business.
These days we are better than ever at presenting research in ways that give it a chance for survival. Some aspects of this have been much-discussed in research circles already. Infographics, for instance, are now a well-understood tool for research and marketing communications – even if researchers aren’t always especially good at designing them! Dashboards – presenting information in close to real time – have proven to be something of a research speciality, with the likes of Face Research building social media monitoring tools that compete with well-funded specialists like Radian6.
One area that is less discussed in this move from storytelling to story-doing is the world of prototypes. Rather than simply recommending what to do, researchers are often building it. After all, if your job is to test and optimize something, the logical output of that job is an improved version of the something. So you can find research agencies producing edits of ads, model packs, prototype app and Web page layouts and plans of retail concepts. These aren’t meant to replace any of the finished work creative and design agencies do. Instead, these post-research prototypes are mock-ups designed to give research findings a fighting chance of capturing the imagination and emotion of the client.
Participants: from exploitation to collaboration
One of the most talked-about shifts in research practice has been a change in how we think about and talk about respondents. In the bad old days – when I began my career – our relationship with participants was nothing to be proud of. U&A surveys would routinely run to several hours and highly repetitive tasks – from conjoint to attribute grids – were the norm. Once such surveys moved online, people who took the – frankly understandable – decision to speed through them were kicked out. But beyond that, not much consideration was given to how these techniques might affect the data we got from participants.
This has changed. The drive for more humane and entertaining treatment of participants, spearheaded by the Research Liberation Front in the late 2000s and carried over to mainstream research by the likes of Jon Puleston and Annie Pettit, is transforming the look and feel of quantitative research for participants. Whether this will ultimately reverse the fall in response rates is unknown but it’s heartening that the work is being led by researchers who passionately believe in rigor and high standards. It de-fangs the common objection to more enjoyable surveys – what if this affects the results? – by daring to say that if it does, maybe the results were wrong in the first place.
Meanwhile in qualitative research, we’ve seen a similar shift towards more interesting research environments – this time driven by new methodologies. The rise of MROCs and mobile ethnography is often talked about in terms of how well the methods let researchers get close to the authentic voice of the customer. Left out is the way in which accessing this authentic voice is more interesting and rewarding for the people taking part. Just as in the quantitative world, research is putting itself on a more collaborative footing with participants. Since – even with a move towards contextual and behavioral research – there will always be a need for some direct consumer involvement, this is definitely a good thing.
Goals: from information to emotion
Finally, the more we understand about how the mind works, the more the aim of research shifts. Researchers used to be two-way information brokers. They collected data and told clients what their customers thought. But they were also in the business of helping clients communicate that information to those customers, by way of brand- and communications-testing tools that took an explicitly rational view of communications and persuasion.
This information-centric view of communication has regularly been attacked by marketing scientists and psychologists but recently it’s been coming under more sustained and effective fire. From behavior change, where lecturing people is an obvious but poor solution, to advertising, where work by Les Binet and Peter Field (The Long and the Short of It, 2013) demonstrates the efficacy of emotional communications, it’s becoming clear that feelings trump reasons and seduction beats persuasion.
This changes the goal of research – ways of accessing and understanding emotion are becoming more and more important. Some use existing, highly validated systems of human emotion, like Paul Ekman’s seven basic emotions. Others tout the benefits of new technologies, like facial coding. But more than ever, the research industry is finding out at a primal level how people feel about brands, ads and ideas – and then helping clients make things that make people feel good.
Completely unrecognizable
The overall impact of these changes is huge. The work I’m doing now at BrainJuicer, and the work a lot of other agencies are doing, is completely unrecognizable from the work I was doing in 1999. And if I had to sum up the changes in one sentence, it would be that research has got a lot less self-centered over the last 15 or so years.
It used to be that the result was king. Consumers existed to produce data that could be turned into results. You delivered those results to your client and that was generally the end of them. If consumer behavior was messy or difficult to understand, then you smoothed off the edges. It was a system that worked. But it was selfish. Its understanding of behavior stopped at the survey question. Its understanding of business stopped at the debrief. And it seems to me this myopia really has begun to change.
We are more open to observing behavior and rediscovering its psychological roots. We are more open to working with messy, large-scale data sets – often passively collected. We are more open to the idea that results are the beginning of the story, not the end of one. We are more open to making, not just telling. We are more open to the real world, both of our participants’ and our clients’.
The future will still have plenty of challenges. But we’re in a good place to meet them. Research now is more psychologically attuned, more technologically capable, more creative and simply more fun than at any time since I joined the industry. And that’s a reason to be cheerful.