Two methods, same goal
Editor’s note: Owen Hannay is CEO of Slingshot LLC, a Dallas ad agency.
It has always seemed reasonable that only a few things dramatically affect whether a specific piece of creative has its desired impact on consumers. At its simplest, it comes down to three questions: Did we get their attention and did they look at what we had to say? Did they understand what we were saying? Did what we said have meaning and motivate them to action?
Over the years I, like many of you, have spent a great deal of my clients’ money on a myriad of pre-testing tools for television and print concepts, including focus groups, dial readings and emotion-based measures. Almost all of them (and certainly the most widely used), while useful, share a single, very important trait: forced exposure.
Because of that forced exposure, these tools do a very good job of measuring a consumer’s interest in the content, his or her ability to recall the messaging and the likeability of the creative. But it has always occurred to me that if your audience did not turn around or look up from whatever they were doing (or, in today’s era of multi- or hyper-tasking, often the many things they were doing) while the television was on, then they never got the chance to be engaged with the creative, to appreciate the messaging, recall it or purchase the product or service. In short, forced exposure doesn’t effectively answer question No. 1 above - at its least complex, the “made you look” factor. It has also historically been a very difficult measure to come by.
In this article I will discuss one solution that, while very simple and unrefined at this point, has the potential to set the stage for a new era in creative testing, one that will lead to better creative executions and better results for advertisers. Certainly, it is an opportunity to think differently about how we go about testing creative concepts overall.
New positioning
For one of our clients, Dave & Buster’s, a Dallas-based operator of upscale restaurant/entertainment complexes, we had a history of using focus groups and post-production copy testing as the primary methodology for evaluating creative concepts for television. As we embarked on the development of a spot designed to bring a new brand positioning to life, we were looking for a method to measure the “made you look” factor in addition to our usual focus group methodology.
After considering a number of approaches, we decided upon a two-pronged strategy. The first half of the testing would involve creating animatics (very rough black-and-white mockups with limited motion and a scratch voiceover) of five concepts and showing them to focus groups in three markets to gauge customer interest in the concepts as well as recall of the key messaging. The second, more unusual, approach would involve taking those same animatics and running them online as standard video advertising on a variety of Web sites. The thesis behind this approach is simple: unless you believe that consumer behavior is somehow fundamentally different online than offline, any difference in performance between the animatics would indicate a preference for, and the intrusiveness of, that message and/or creative unit relative to the others tested, and would be useful in predicting the same customer reaction to that creative and messaging in a traditional media environment.
We also knew that by using a broad-reach network we could test not only different creative executions but also specific online content channels. So we approached ValueClick Media, which represents over 13,500 Web sites on which we could purchase ads, grouped into specific areas or “channels” similar to television or radio content (movieinsider.com, for example, falls under Movies & Television, while lyrics.com falls under Music & Radio). We purchased inventory on four channels to see if context played any role in consumers’ preference for one creative approach versus another. We could also query customers who chose to interact with the creative online by serving them a short survey asking what drove them to click on the creative or watch it again, what caught their attention and what they found compelling about the messaging - many of the same measures we were probing in the focus groups.
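To make the mechanics concrete, here is a minimal sketch (in Python) of the kind of tally such a test produces. The creative labels, channel mix, impression counts and action counts are all hypothetical placeholders for illustration, not our campaign data:

```python
from collections import defaultdict

# Hypothetical ad-server log rows: (creative, channel, impressions, actions).
# An "action" here means any interaction with the unit (a click, a replay, etc.).
# All names and counts below are illustrative placeholders, not campaign data.
rows = [
    ("Animatic A", "Movies & Television", 120_000, 540),
    ("Animatic A", "Music & Radio",       115_000, 330),
    ("Animatic B", "Movies & Television", 118_000, 470),
    ("Animatic B", "Music & Radio",       117_000, 310),
]

# Aggregate impressions and actions by creative and, separately, by channel.
by_creative = defaultdict(lambda: [0, 0])
by_channel = defaultdict(lambda: [0, 0])
for creative, channel, imps, acts in rows:
    for key, bucket in ((creative, by_creative), (channel, by_channel)):
        bucket[key][0] += imps
        bucket[key][1] += acts

def action_rate(imps, acts):
    """Actions per impression - a rough 'made you look' proxy."""
    return acts / imps if imps else 0.0

for label, bucket in (("creative", by_creative), ("channel", by_channel)):
    for key, (imps, acts) in sorted(bucket.items()):
        print(f"{label:8s} {key:22s} action rate = {action_rate(imps, acts):.4%}")
```

With the by-creative and by-channel rollups side by side, creative preference and context effects can be read independently - two things forced-exposure methods cannot separate.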
Now, from a pure research perspective there were (and are) several things to consider. The animatics were extremely rough and might or might not communicate effectively online; in fact, one expected outcome was that there would be no difference between the creative executions because they all “looked” the same, since our past experience with online creative indicated that color and look dramatically impact performance. We also had no norms and no basis for comparing these results with anything else.
That said, we recognized that we could not overlook an opportunity to measure something that is very difficult to get at using traditional research methodologies.
We had five concepts that we took to animatic stage: “Blind Date,” “Food and Fun,” “Distraction,” “Grapevine” and “Summer Games.” In addition to bringing all five to focus groups in three cities/areas (Dallas; Austin, Texas; and Orange County, Calif.), we ran each individually online on four separate channels within ValueClick Media’s network, making it possible to measure what difference, if any, the context made in consumers’ response to the creative.
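For clarity, the design was fully crossed: each of the five animatics ran on each of the four channels, yielding 20 test cells. A trivial sketch of those cells follows; since only Movies & Television is confirmed in the results (and sports is mentioned later), the other channel labels here are assumptions for illustration:

```python
from itertools import product

# The five animatics named above.
creatives = ["Blind Date", "Food and Fun", "Distraction", "Grapevine", "Summer Games"]

# Four content channels were purchased; only Movies & Television is confirmed,
# so the remaining labels are hypothetical stand-ins.
channels = ["Movies & Television", "Music & Radio", "Sports", "News"]

# Fully crossed design: every creative runs on every channel, so creative
# effects and channel/context effects can be estimated separately.
for creative, channel in product(creatives, channels):
    print(f"cell: {creative!r} on {channel!r}")
```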
The first result - an indication that, directionally, the approach had merit - was that the two spots respondents in the focus groups preferred, “Blind Date” and “Summer Games,” also scored better in the online portion of the testing, albeit in reverse order: “Summer Games” was No. 1 in the focus groups and “Blind Date” was No. 1 in the quantitative testing. In both methodologies those two did significantly better than the others tested.
The second interesting finding was that, while three of the media channels had action rates that were almost identical, the Movies & Television channel had an action rate that was 50 percent higher.
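A gap that size invites a quick significance check. As a sketch only - with hypothetical counts standing in for our actual impression and action volumes - a standard two-proportion z-test can indicate whether one channel’s higher action rate is likely more than noise:

```python
from math import erf, sqrt

def two_proportion_z(acts_a, imps_a, acts_b, imps_b):
    """Two-sided z-test for a difference between two action rates."""
    p_a, p_b = acts_a / imps_a, acts_b / imps_b
    pooled = (acts_a + acts_b) / (imps_a + imps_b)          # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal approximation
    return z, p_value

# Hypothetical counts: Movies & Television (0.45% action rate) vs. the pooled
# other three channels (0.30%) - a 50 percent gap, as in the finding above.
z, p = two_proportion_z(acts_a=540, imps_a=120_000, acts_b=1_080, imps_b=360_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the gap is not noise
```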
Course was correct
The similarity in results between the traditional focus group methodology and the online research provided an additional data point indicating that the course we were choosing was correct, not just from a messaging-feedback standpoint but also from a general “interest” or “made you look” perspective.
Further, consumers’ stronger response on the Movies & Television channel indicated that the fun nature of Dave & Buster’s might play better in that environment. We considered the possibility that the higher result on that channel was due only to the media itself, but we found no evidence that the sites in the Movies & Television channel had produced higher click-through rates than other channels in campaigns we had previously run. That possibility notwithstanding, the result seems to indicate that placement in or alongside movies and broad-interest television programming might have more impact for this brand than sports or another channel.
Finally, the responses to the online questionnaires reinforced what we had learned in the focus groups: customers found “Summer Games” to be more impactful and engaging for the same reasons they articulated in the focus groups, further confirming that audiences in both settings received the creative in the same way.
Historical highs
In the end, though the research results pointed to a toss-up between the two spots, we went with “Summer Games” because we had used focus groups many times before, while the online quantitative testing was a new methodology.
The finished spot scored well above industry norms and hit historical highs compared to other Dave & Buster’s spots during testing. In addition, sales in the supported periods were up significantly in a category that was flat to down slightly. While not all of this can be attributed to the advertising, Dave & Buster’s CEO Steve King was quoted as saying, “We are thrilled with our results for the first half of the year. Effectively communicating our unique combination of food, drink and games continues to translate into strong sales performance across the country.”
A fuller story
We continue to recommend this combined approach to testing for our television clients, and we have recommended the same basic technique for print and outdoor as well as broadcast. While neither methodology tells the whole story on its own, combining traditional focus group research with online testing tells a fuller one, and it is my belief that understanding and incorporating the “made you look” factor will lead directly to more aggressive creative solutions - ones that not only resonate with audiences but that audiences are genuinely interested in.
I think it is important to note generally that digitally centric partners have advantages that go beyond their ability to create and develop across multiple platforms. While as an industry we have begun to think of online and offline activities as supporting each other, many of the lessons learned on the digital playing field can ultimately inform how we operate on the traditional media side. While traditional advertising is clearly critically important now and into the future, we can and should be blowing up old-fashioned ways of looking at problems when newer solutions or additional perspectives are readily available in the digital world’s data and metrics.
Someday, as television delivery systems gain more sophisticated metrics, it will be possible to measure a consumer’s interaction with all television advertising. At that point this kind of quantitative engagement testing clearly can and will occur. Until then, however, the kind of testing reported on in this article can provide useful data that improves the overall effectiveness of traditional advertising.