Editor’s note: Robert Passikoff is founder and president of Brand Keys, a New York research firm.
No matter how beautiful the strategy, you should occasionally look at the results.
– Winston Churchill
While the quote above comes from Winston Churchill, the sentiment is something I believe in. Many companies talk about predictive metrics, claiming that everything from MRI-driven neuroscience brain flares to keeping tabs on tweets, scoring social shares and listing likes is predictive. That being the case, it is worth reminding readers of what predictive is supposed to mean. For that I turn to Merriam-Webster’s definition of predict: “to declare or indicate in advance; foretell on the basis of observation, experience or scientific reason.” If your research isn’t doing any of the above, calling it predictive is not only a misuse of language but a misapplication of research.
We rely on engagement metrics to predict what’s going to happen in the real marketplace. Real engagement is best defined as the consequence of any brand marketing or communication effort that improves the brand’s standing in the eyes of its customers vis-à-vis its competitors. This kind of engagement correlates with consumer behavior; axiomatically, if more consumers behave positively toward your brand, your sales and profits should improve.
Our approach was built on a foundation of archetypal psychology, drawing on the work of the father of archetypes, Carl Jung. It is a theoretical framework that measures the underlying emotions that drive attitudinal and behavioral dynamics, specifically as they relate to advertising and marketing. The approach fuses the emotional and rational values (attributes, benefits, values, platform preferences, behavior measures, sales metrics, etc.) that govern brand engagement through a combination of indirect psychological inquiry and higher-order statistical analysis, including factor, regression and causal path analysis. This quantitative technique can be applied to understand the innate, unconscious processes involved in decision-making and to measure the levels of engagement engendered by marketing and communication efforts.
But to be truly predictive you have to measure what consumers think, which is always different from what they say they think. If the latter were predictive, all the cheapest branded products and services would have the largest shares and profits. But they don’t. Compounding that, there are two problems that need to be addressed when it comes to research results actually predicting consumer behavior and market trends.
The first is that while the brand should be the beneficiary of every marketing exercise, predictive success is often attributed to the methods used to deliver brand marketing: platforms (TV, online or sponsorships), context (program, Web page or gaming), message (advertising or communication outreach) and experience (store or event). Engagement with these delivery methods is more easily measured than actual brand engagement but is far less likely to be predictive of positive consumer behavior toward the brand. Remember the brand, the raison d’être for all this work? All too often, success regarding outreach is deemed predictive. More often than not, it isn’t, but I will leave those nuances and that discussion for another time.
There are claims that traditional survey-based brand research and even copy testing work well at predicting the impact of different strategies and campaigns on sales, but that can’t be the case. Face validity proves as much. If the predictions are so good, why are marketers unable to accurately predict brand trajectories, especially when suffering massive losses of customers and sales? Why don’t research results match actual market results? Why does that supposedly predictive research allow brands to fail? And if such research isn’t predictive, why don’t marketers go out and get research that is?
Which brings me to the second, more devastatingly simple problem: making predictions tends to be a far more popular pastime than checking their accuracy. Few researchers or marketers actually put their predictions to the test. They don’t bother to look back and see whether their predictions came true.
Actually checking back isn't as difficult as it seems for all those prognosticators out there. Each year, we examine predictions – made on the record in columns and blogs – six to 12 months later to review for accuracy. There are some simple, basic measures that can be applied to virtually any category or brand. These are readily available on the Internet or in a newspaper. We recommend you consider the following:
- quarterly, annual or year-over-year sales;
- posted profits;
- profit margins;
- share increases;
- customer acquisition/retention numbers;
- price ratios; and
- major category and category-trend shifts.
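For readers who want to see how simple the back-check really is, here is a minimal sketch, using entirely hypothetical figures, of how two of the measures above (year-over-year sales and posted profit margin) might be compared against a prediction that was made on the record:

```python
# Minimal sketch of a prediction back-check.
# All brand names and figures below are hypothetical, for illustration only.

def yoy_growth(current_sales: float, prior_sales: float) -> float:
    """Year-over-year sales growth, expressed as a fraction."""
    return (current_sales - prior_sales) / prior_sales

def profit_margin(profit: float, revenue: float) -> float:
    """Posted profit as a fraction of revenue."""
    return profit / revenue

# Hypothetical brand: a researcher predicted roughly 5% sales growth.
prior_sales, current_sales = 120.0, 126.0   # annual sales, $ millions
posted_profit = 12.6                        # posted profit, $ millions

growth = yoy_growth(current_sales, prior_sales)
margin = profit_margin(posted_profit, current_sales)

predicted_growth = 0.05
print(f"Actual YoY growth: {growth:.1%} vs. predicted {predicted_growth:.1%}")
print(f"Posted profit margin: {margin:.1%}")
```

Nothing here is exotic: the inputs are exactly the kinds of numbers available in an annual report or a newspaper, which is the point of the list above.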
Fair is fair, after all. If you’re willing to put your research findings out there and make predictions, you ought to take an annual look over your shoulder. This is done not to see if you’re being followed but to reaffirm that you really do know where categories and brands are going.
If nothing else, I hope this will inspire marketers and researchers to do a little digging on those predictive metrics out there. It’s important to review what was promised against what actually transpired. There’s a big difference between truly predictive metrics, interesting data and the next shiny thing. The distinction is worth noting because you’re betting your brand’s future on it.