Editor’s note: Olivier Tilleuil is founder and CEO, EyeSee Research, New York. This is an edited version of a post that originally appeared under the title, “Why contradicting behavioral and traditional KPIs can be good for research.”
In statistics, correlation measures the strength and direction of the relationship between two variables. A positive correlation means that when one variable goes up, the other tends to go up as well – a negative correlation signals the opposite.
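To make the definition concrete, here is a minimal Python sketch using NumPy. The numbers are made up purely to illustrate the sign of the relationship; they are not data from any study.

```python
import numpy as np

# Hypothetical scores for five ads (illustrative data only)
attention = np.array([62, 71, 55, 80, 68])              # % of viewers who noticed the ad
emotional_reach = np.array([3.1, 3.6, 2.8, 4.0, 3.4])   # avg. emotional response (1-5)
skip_rate = np.array([0.40, 0.31, 0.47, 0.22, 0.35])    # share of viewers who skipped

# Pearson correlation: positive when the variables move together,
# negative when one goes up as the other goes down.
print(np.corrcoef(attention, emotional_reach)[0, 1])    # close to +1
print(np.corrcoef(attention, skip_rate)[0, 1])          # close to -1
```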
A strong correlation among the behavioral KPIs for a single ad means that if many people noticed your ad, it also likely scored high on attention and emotional reach. On the other hand, your ad might perform well in a survey that measures respondents' conscious attitudes – they really liked it, felt it fit the brand and could recall which brand it belonged to afterward.
But let's say you compare the behavioral and conscious KPIs and find that the correlation is low. The ad did great on the subconscious variables but poorly on the survey, or vice versa.
There are three possible conclusions:
- the survey is not relevant;
- the behavioral research is not relevant; or
- both are valid and, combined, significantly increase predictive power.
Conventional and behavioral methods can have conflicting results
From a statistical point of view, a low correlation between behavioral and conventional methods means that they measure different things – and sometimes (though not always) they can produce conflicting results.
Behavioral KPIs measure how the ad is actually perceived in the appropriate context. They rely on the underlying mechanisms of our perception – the prompts and cues that help us notice things. That's why an ad needs to attract attention first to have a chance of relaying its message to the intended audience.
Surveys, by contrast, let us take our time and evaluate the ad consciously, giving an account of the ad's strong and weak points as well as its potential to be noticed.
Uncorrelated data: a complete picture
What do I mean when I say that both sets of KPIs are relevant? That they are useful for the ultimate goal of all research – predicting effectiveness.
If you are conducting research and all the variables in a group are 100% correlated, you only need one of them to predict all the others – essentially, the additional variables you are testing do not add value.
For understanding and predicting what will happen, uncorrelated variables are therefore a blessing in disguise. The complementary measurements paint a more complete picture of your study and deliver better predictive value (a higher R²) – assuming both variables actually matter.
For example, if your survey results show that likeability for the tested ad is high, the correlated variables (e.g., brand fit, channel fit) may give you an accurate and nuanced opinion about the ad, but each new question will not change the conclusion about which ad is best – they will all score similarly high. If, instead, you introduce an entirely new, uncorrelated dimension – e.g., whether your ad is even noticed in a social media environment – you get much more valuable information that might influence your decision. If your ad is not seen, it will not create an impact; you need both high likeability and high visibility.
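A minimal sketch of that logic on simulated data (the variable names, numbers and model are hypothetical, not EyeSee's data): a second survey KPI that is almost perfectly correlated with likeability adds virtually nothing to R², while an uncorrelated visibility KPI raises it substantially.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

# Simulated KPIs: likeability and brand fit are strongly correlated
# survey measures; visibility is an uncorrelated behavioral measure.
likeability = rng.normal(0, 1, n)
brand_fit = likeability + rng.normal(0, 0.2, n)   # nearly redundant with likeability
visibility = rng.normal(0, 1, n)                  # independent of the survey KPIs

# Assumed toy model: impact requires both liking the ad and seeing it.
impact = likeability + visibility + rng.normal(0, 0.5, n)

def r2(*features):
    X = np.column_stack(features)
    return LinearRegression().fit(X, impact).score(X, impact)

print(f"likeability only:            R2 = {r2(likeability):.2f}")
print(f"+ brand fit (correlated):    R2 = {r2(likeability, brand_fit):.2f}")
print(f"+ visibility (uncorrelated): R2 = {r2(likeability, visibility):.2f}")
```

On data simulated this way, the two correlated predictors explain roughly the same share of variance as likeability alone, while adding visibility roughly doubles it – the statistical version of "you need both high likeability and high visibility."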
Internal stakeholders and reporting
Is this too complex to explain to internal stakeholders? Can we just skip it?
Skipping it would be a serious mistake. Many studies show that combining behavioral and conventional measurements increases predictive power by at least 40%. Not measuring the behavioral side is the same as ignoring essential data – and not collecting the data does not change that fact. Arguably, it is worse, because you do not know how poorly the stimuli might perform on that particular KPI.
Behavioral and conscious variables tend to be correlated within each group, but not necessarily with each other. They measure different things that complement each other and together provide the big picture of your ad or product, enabling you to better predict its effectiveness.