Don’t talk to me about engagement
Editor’s note: Adam S. Cook is senior market research analyst at Pilot Media Inc., Norfolk, Va.
Frequent, readily available data on ad effectiveness is relatively new, which means we are in some uncharted waters. In my role as a research analyst at a media company, I’ve worked closely with many designers, agencies, sales reps, business owners, marketing directors and others over the last few years. The company I work for has collected a lot of valuable information, and I’ve learned a great deal, good and bad, from sharing it with these different points of contact. The one thing I can say is this: advertising is much bigger than one ad effectiveness number, and one number will not be the answer. But I can understand the desire to find it. In this article I want to touch on that desire for one number, some factors to consider when evaluating advertising, some factors we may want to leave out and, lastly, some potential one-number methods.
Are we asking the right questions?
We now have a number of different research tools at our disposal to help measure ad recall, “ad engagement” and response in the newspaper business. We use the services of Sweden-based Research and Analysis of Media (RAM) at our newspaper The Virginian-Pilot. These sources typically measure 15 or more factors deemed important to advertisers. Who came up with these? Who knows? The bigger question may just be, “Are we asking the right questions?” Let’s leave this for another article.
As we push closer to being able to show a potential return on investment (the Holy Grail in the media world) we may actually be losing sight of some basic tenets of advertising. When it comes down to it, perhaps no one has summarized the ad business better than Rosser Reeves. This mid-century advertising icon once said: “You know, only advertising men hold seminars and judge advertising. The public doesn’t hold seminars and judge advertising. The public either acts or it doesn’t act.”
I can dump a heap of analysis and research on a business telling it why its ad is the greatest thing since sliced bread or complete trash, but the advertisers ultimately know what’s working and what isn’t. Where research and analysis can best serve the advertiser is in helping to determine why ads are more or less effective.
For too many years the creative world has been rewarding design and not necessarily results, while the scientific world has been ignoring design’s impact and importance. The reality is, advertising effectiveness is both art and science. Blasphemy, right? We need to get the beret-wearing “artEEsts” and pocket protector-wearing researchers to sit at the same table to work on achieving what’s most important: results! That starts by defining the word “effective.”
There are too many factors bearing on an ad’s true effectiveness to account for them all, so it may never be truly measurable. Weather, product quality, locations and pricing offers are only a few items on an exhaustive list of things that can affect results. Even if we were able to factor these in, there’s one nearly unpredictable factor that cannot be measured: the consumer. How is it that a consumer will make one of the biggest purchases of their life (real estate) based on emotion, or buy a car on a whim, and yet still be willing to drive 30 minutes out of the way and wait an extra 10 minutes in line to save a dollar on a two-liter of Coke? Consumers’ decisions can be crazy and amazingly irrational. Even worse, when asked why or how they decided to buy, their answers sound impressively rational.
Let’s start with what ad effectiveness is not: ad engagement. My stomach churns at the mere mention of this idea. It pains me to discuss, but I must, because it’s an industry obsession, and it has become more of a distraction over the last few years. Don’t get me wrong: I’m not saying engagement isn’t relevant; I’m saying it shouldn’t be the focus. Ad engagement is not the end; it’s a means to an end. If we have advertisers who want to engage consumers, great, let’s engage them until their heads pop. Just don’t come running back to me three weeks from now saying, “The ad isn’t getting the results we’re looking for.” We need to better understand which elements of engagement help advertisers achieve their true advertising objectives. But like I said, we need to look at what’s important to the advertiser. We have come full circle to what I mentioned earlier: We need to help answer the questions of “why.”
Trying to find Bigfoot
Let’s get to the matter at hand. With apologies to J.R.R. Tolkien: “One number to rule them all. One number to find them. One number to bring them all and in the darkness bind them.” And yes, the idea of having one number is as fanciful as Tolkien’s Lord of the Rings. Many have been ambitious in their search for this number, and we should all applaud those efforts. Unfortunately, the search can be likened to trying to find Bigfoot.
But it’s a start, and it gets us moving in the right direction. The concern is that complex and inflexible methods breed opportunity for error. The more variables you include, the further from the truth you can stray; only the pertinent variables should be included. If ad engagement is a means to an end and you have end-metric results available, why would you also fold ad engagement metrics into the equation? That only gives weight to variables that may or may not have had any bearing on the outcome. I opted not to include these variables when devising a formula.
We have to begin by defining ad effectiveness. After meeting with well over a hundred advertisers, it occurred to me that complicated methods weren’t going to cut it and there had to be a simpler way. Let’s demystify, not mystify. After all, complexity doesn’t have a strong track record. (Have you looked at your newspaper’s rate card recently?)
How do advertisers define or measure ad effectiveness? It varies, doesn’t it? Our formula should vary as well. My goal was to find a simple and flexible approach. I also contend that the one number used for indexing or comparing should actually mean something. A number that doesn’t lend itself to meaning doesn’t sit well with advertisers; presenting one invites skepticism and can derail discussions about how to make improvements.
One desired outcome
Ad effectiveness is only as good as your last ad or campaign, or as good as those of your closest competitors. My experience has taught me that 90 to 95 percent of the time, advertisers are looking for one desired outcome: maximum response, whether to their store, to the phones or to the Web site … PERIOD.
Ultimately, our one number will be indexed against, or compared to, the advertiser’s history and/or its competitive set. First, we have to understand what drives maximum response. There are myriad things, but simply put: one, the percentage of people who saw the ad, and two, the percentage who intend to act.
Assuming your measurement tool is similar to ours, these two percentages are what we have measured, and that’s where we need to start. Simply put, the “one number” is the percentage of the potential audience that plans to act, calculated by multiplying the advertiser’s desired outcome percentage by the percentage of those who saw the ad. This is the number, the outcome-based ad effectiveness percentage (OB-AEP), you’ll use for comparing and indexing. The secret is pretty disappointing, isn’t it? See Figure 1 for how the OB-AEP translates into a basis of comparison or an index and Figure 2 for an example.
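To make the arithmetic concrete, here is a minimal sketch in Python. The function names and the 6.4 percent benchmark are mine, invented for illustration; the only assumption is that your research tool reports an ad recall percentage and a desired-outcome percentage.

```python
def ob_aep(recall_pct: float, outcome_pct: float) -> float:
    """Outcome-based ad effectiveness percentage: the share of the
    potential audience that both saw the ad and intends to act."""
    return (recall_pct / 100.0) * outcome_pct

def ob_aep_index(current: float, benchmark: float) -> float:
    """Index an OB-AEP against a benchmark (the advertiser's own
    history or a competitive-set average); 100 means on par."""
    return 100.0 * current / benchmark

# Example: 40% of readers recalled the ad; 20% of those intend to act.
score = ob_aep(40.0, 20.0)       # 8.0 -> 8% of the audience plans to act
print(score)
print(ob_aep_index(score, 6.4))  # 125.0 -> 25% above a 6.4% benchmark
```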
I was intrigued by the idea of showing this outcome-based ad effectiveness percentage on a quadrant chart and discovered that it’s actually more interesting and complicated than you might think. As I started looking at the variations, something about the quadrant analysis didn’t make sense to me.
Here’s why looking at OB-AEP on a quadrant fails us: Quadrant analysis came from the world of academia. The business excellence model and the Kim-Lord grid are two commonly recognized quadrant approaches, and both were designed for strategic decision-making, not analytics. Still, I went down the path of trying to create my own OB-AEP quadrant. I placed ad recall percentage on the y axis and the advertiser’s desired outcome percentage along the x axis (Figure 3). Again, the desired outcome percentage is interchangeable based on the advertiser’s primary objective.
This quadrant should help us define what is or isn’t effective, as labeled in Figure 3. As you will notice, I have question marks in each quadrant; your definition would be as good as mine. We know the upper right is good and the lower left is bad; everything else is somewhere in the middle, right? I started plotting example ads into different quadrants to see if this quadrant analysis had merit.
As I said earlier, advertisers are looking for the one desired outcome, so we need to look at the OB-AEP or overall audience outcome. Using this basic formula we are able to calculate OB-AEPs for the example ads plotted (Figure 4).
You can now see why the quadrant fails us. All four example ads were equally effective, yet they fell into three separate quadrants. The OB-AEP doesn’t fit the quadrant mentality. In fact, its distribution, based on equal effectiveness, looks more like Figure 5. It goes against a lot of things I was taught, but the quadrant method simply cannot demonstrate ad effectiveness visually. I’ll probably want to file a restraining order against my college marketing principles professor after saying this, but for our purposes of demonstrating ad effectiveness, it’s time to ditch the archaic quadrant model.
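The arithmetic behind that failure is easy to reproduce. The numbers below are hypothetical, not the ones plotted in Figure 4, but they make the same point: ads with identical OB-AEPs scatter across the quadrants, because the curves of equal effectiveness are hyperbolas (recall × response = constant), not quadrant boundaries.

```python
# Four hypothetical ads, all equally effective (OB-AEP = 8.0), plotted
# with recall on the y axis and desired-outcome response on the x axis,
# quadrants split at 50 percent on each axis.
ads = {
    "A": (80.0, 10.0),  # high recall, low response -> upper-left
    "B": (10.0, 80.0),  # low recall, high response -> lower-right
    "C": (20.0, 40.0),  # low recall, low response  -> lower-left
    "D": (40.0, 20.0),  # low recall, low response  -> lower-left
}
for name, (recall, response) in ads.items():
    print(name, recall * response / 100.0)  # every ad prints 8.0
```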
Instead, we need the basic chart found in Figure 5. As you can start to see, the distribution pattern is surprisingly complicated for such a simple formula. Hang onto your berets and pocket protectors; it’s more complex than you think. The chart we are looking at isn’t actually two-dimensional; it’s three-dimensional. (I’m thinking I just lost half the readers at this point.)
Okay, for the few who haven’t given up on me, get this: plot the distribution of OB-AEPs and it traces what is called a hyperbolic paraboloid (now I’m probably down to two readers). What is a hyperbolic paraboloid, and how can this be? See Figure 6 for a visual representation or visit my blog (www.fightinanalyst.com) for more details.
The better question is, how else can we explain the odd progression found in Figure 6? We’ve been so used to looking at two-dimensional charts that we forgot we live in a three-dimensional world. The product of these two metrics (recall and the desired outcome) gives us the one number, OB-AEP, or our third axis: z!
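For the mathematically curious, here is a quick sketch of why the surface is a saddle. Write recall as x and the desired outcome as y, both as fractions, so the OB-AEP surface is z = xy; rotating the horizontal axes 45 degrees puts it in textbook form:

```latex
z = xy,\qquad u=\frac{x+y}{\sqrt{2}},\quad v=\frac{x-y}{\sqrt{2}}
\quad\Longrightarrow\quad
z=\frac{u^{2}-v^{2}}{2}
```

That is the standard equation of a hyperbolic paraboloid, and its level curves z = constant are the hyperbolas xy = constant, the same curves of equal effectiveness that broke the quadrant chart.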
What can we learn from this? Increasing an ad’s effectiveness isn’t necessarily a straight line, as you can tell by looking at the resulting hyperbolic paraboloid (Figure 6). Done right, changes made to ads can yield multiplicative, better-than-straight-line returns in effectiveness.
Do one of three things
With this three-dimensional model in mind, advertisers can do one of three things to improve effectiveness (a numeric sketch comparing the three paths follows the list).
Increase ad recall percentage and maintain response percentages:
- straight-line improvement, which limits maximum ad effectiveness potential
- leans toward science
- usually requires additional monetary investment
Increase response percentages and maintain ad recall percentages:
- straight-line improvement, which limits maximum ad effectiveness potential
- leans toward art/design/message
- usually requires an increase in time and resources
Increase recall and response percentages:
- multiplicative improvement, which lifts the straight-line cap on ad effectiveness potential
- incorporates art and science
- can get you the greatest return, but requires money, time and resources
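To see the difference in numbers, here is a small sketch; the percentages are hypothetical, chosen only to illustrate the shape of the math.

```python
# Hypothetical baseline: 40% ad recall, 20% desired-outcome response.
def ob_aep(recall_pct, response_pct):
    """Outcome-based ad effectiveness percentage."""
    return recall_pct * response_pct / 100.0

print(ob_aep(40, 20))  # 8.0  -- baseline OB-AEP
print(ob_aep(50, 20))  # 10.0 -- recall only: +25% over baseline
print(ob_aep(40, 25))  # 10.0 -- response only: +25% over baseline
print(ob_aep(50, 25))  # 12.5 -- both at once: +56% over baseline
```

A 25 percent lift on each axis compounds into a 56 percent lift overall, which is the payoff of working both the art and the science at once.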
See Figure 7 for a graphic example of improving effectiveness. Understanding what helps improve ad recall and desired outcomes/response (ad engagement or the means) can help us help advertisers get better results (the end).
The beauty of this formula and model is not just its simplicity but its flexibility. We can now use the OB-AEP to compare ads against the advertiser’s competitive set, using whatever desired outcome applies (potential store traffic, actual purchase intent, intent to look for more information, intent to visit the Web site, an increase in positive feelings about the company, etc.).
Of the two readers I have left, I’m guessing at least one is wondering: “What about the 5 to 10 percent of advertisers who don’t have just one desired outcome? Is a formula available for them?” Yes, but this added layer of complexity removes our ability to succinctly define the one number. I try to steer clear of this method and advise others against using it. But for those few instances where the advertiser insists on including multiple variables such as brand perception, likeability and traffic-driving directives, we do have an index formula available. (Equal or custom weights can be applied; e.g., 70 percent of my ad was for traffic and 30 percent was for brand.) Visit fightinanalyst.com for formulas and examples of equal- and custom-weight multivariable outcome-based ad effectiveness indices.
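The actual formulas live on the blog; purely as an illustration of the idea, here is one plausible construction: a weighted average of per-outcome OB-AEPs. The function, outcome names and percentages below are all hypothetical.

```python
def weighted_ob_aep(recall_pct, outcomes, weights=None):
    """Weighted-average OB-AEP across multiple desired outcomes.
    `outcomes` maps outcome name -> percentage of ad viewers reporting
    that intent; `weights` maps name -> weight summing to 1 (equal
    weights if omitted). Illustrative only, not the published formula."""
    if weights is None:
        weights = {name: 1.0 / len(outcomes) for name in outcomes}
    return sum(
        weights[name] * (recall_pct / 100.0) * pct
        for name, pct in outcomes.items()
    )

# The 70/30 traffic-vs-brand split from the text, with made-up numbers:
outcomes = {"store_traffic": 20.0, "brand_perception": 30.0}
weights = {"store_traffic": 0.7, "brand_perception": 0.3}
print(weighted_ob_aep(40.0, outcomes, weights))  # 9.2
```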
When and where is it best to use any of these ad effectiveness indices?
- For internal communications, as a warning system or a simple gauge. This can be a quick-and-dirty way to summarize how effective an ad was. Hint: don’t let this number be the focal point of sales presentations. Too much time spent on discussing math and methodologies takes the conversation further away from discussing how we can help improve results or answering the why.
- With advertisers who understand your research tool for measuring ad effectiveness, understand the fundamentals of advertising, are ready for a more sophisticated way of gauging effectiveness or are looking for the same utility we are on an internal basis.
A way to calculate it
Is this the “one number”? No, but it’s a way to calculate one, and hopefully a sound and flexible way. Outside factors and consumer irrationalities are still cumbersome areas that can affect results to varying degrees, at different times, from business to business and from market to market.
Variables that may or may not have affected the end results (size, color, brand awareness, benefits of the product or service, etc.) are not included in the formula, but they do need to be understood to guide our decisions when helping advertisers increase effectiveness.
One last bit of advice. Don’t let ad effectiveness indices become a crutch. Research is not absolute, but it can give us better direction. The public is the true judge.