High-touch vs. high-tech
Editor's note: Chuck Young is CEO of Ameritest/CY Research, Albuquerque, N.M.
There are three reasons why most advertisers don't bother to pre-test print ads. The first is simple economics. The second is a function of time management in the creative development process. And the third is an attitudinal barrier - agency skepticism of pre-testing research, a skepticism that reflects the anxieties and doubts of creative people, such as the widely held belief that research on creative work homogenizes their product and leads to bland, "vanilla" advertising.
Why are clients more likely to spend the money to test a television commercial than they are a print ad for the same brand? The risk associated with running an ineffective television commercial can be counted in the millions of dollars. To manage that risk by spending a few thousand on testing commercials, therefore, is eminently sensible. In contrast, the risk associated with running an ineffective print ad is an order of magnitude less, involving perhaps a few hundred thousand dollars at most. Yet today most quantitative print tests cost about the same amount of money as a television commercial test. As a result, the premium for the "insurance" provided by ad research is much higher for a print test than it is for a television commercial test. In other words, the cost/value relationship of the research to the decision-making process is out of whack.
The second reason advertisers don't test print ads is time. The lead times for submitting an ad to a national magazine can be quite long, usually several months in advance of the issue date. Planning the production schedule for the print advertisement therefore involves coordinating a great many pre-production steps that must come together within a fairly narrow window before the deadline. Unlike the television development calendar, the print ad development calendar usually has no time built into it for a research step involving several weeks of data collection and analysis.
The Internet beckons advertising researchers with the promise of a cheaper and faster way of putting advertising - as research stimulus - in front of the eyeballs of consumers. This new data-collection channel seems to provide a technical response to the first two reasons clients don't test ads. And while those who want to test television commercials may have to wait a little longer for solutions to the bandwidth issue to be worked out, download times are not a problem for print ads. Moreover, on a more philosophical note, computer screens are, at least in one sense, a natural environment for testing print ads because computer screens are, after all, designed to be read.
There is, however, an obvious difference between an ad seen on a computer screen and the same ad seen in a magazine. Magazines are meant to be handled, held, bent, folded - in a word, touched. What happens to the validity of the research when the tactile experience of reading an ad in the context of a physical magazine is transported to cyberspace?
Validating a measure of breakthrough creative
Creatives are correct when they say to us researchers, "You can always measure something, but are you measuring the right thing?" The issue of validating measures of print effectiveness is a difficult one and is not restricted to Internet measurement.
In developing a print testing methodology, one of our concerns was making sure that our measurement system had face validity in the eyes of the creative "end-users" of our system. We asked ourselves, "Can we take a quantitative measure of the attention-getting performance of print executions and produce results that replicate the judgment of seasoned creative directors as to whether or not the advertising has breakthrough power?" Creatives, after all, tend to define creative excellence in terms of the judgment of their peers, rather than by research numbers. Consequently, we decided to find out how print ads that had won major creative awards would fare in our print-testing system.
For that reason we performed the following validation experiment. We tested 54 print ads in our system: 37 print advertisements that had won major creative awards (e.g., winning first or second place medals in the One Show, The Directors Show, etc., during the preceding 12 months) and another 17 "regular" print advertisements from similar product categories that appeared in a national publication but had not, to our knowledge, won any creative awards.
Each of the ads was tested among a sample of 25 adult consumers in a regular "live" interviewing environment. The interview procedure began with respondents looking at paper booklets containing a set of 10 print executions. The order of the ads was rotated from booklet to booklet so that no execution was consistently advantaged by its position. After looking at the booklet, respondents were asked our standard set of questions.
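As a simple illustration of how such a rotation might work, one could shift each respondent's starting position so that every ad appears early for some respondents and late for others. The sketch below is illustrative only; the names and data are hypothetical, not our production procedure.

```typescript
// Illustrative sketch of booklet rotation: shift the list by the respondent's
// index so each ad appears in each position roughly equally often.
function rotatedOrder<T>(ads: T[], respondentIndex: number): T[] {
  const shift = respondentIndex % ads.length;
  // Rotate the list left by `shift` positions.
  return [...ads.slice(shift), ...ads.slice(0, shift)];
}

// Example: the 4th respondent (index 3) sees ads 4 through 10 first, then 1 through 3.
const ads = Array.from({ length: 10 }, (_, i) => `ad${i + 1}`);
const booklet = rotatedOrder(ads, 3);
```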
The measure of attention-getting power that we use in our firm's print-testing system is parallel to the one we use for testing television commercials - a measure which has been validated against in-market sales for several of our packaged-goods clients. The measure is produced by showing the test ad in the clutter environment of competing ads described above and asking the respondent, "Which of the ads you just saw did you find interesting?" (It is important to note that we are not performing a memory test here; rather, we are simply ascertaining whether or not an ad is interesting to its target consumer.)
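To make the arithmetic concrete: the score can be thought of as the percentage of the sample naming the test ad when asked which ads they found interesting. A minimal sketch follows, with hypothetical types and names.

```typescript
// Illustrative only: the attention score as the share of respondents who named
// the test ad after the clutter exposure.
interface ClutterResponse {
  respondentId: string;
  adsFoundInteresting: Set<string>; // ads the respondent named as interesting
}

function attentionScore(testAdId: string, responses: ClutterResponse[]): number {
  const hits = responses.filter(r => r.adsFoundInteresting.has(testAdId)).length;
  return (100 * hits) / responses.length; // percent of the sample
}
```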
The results of our experiment are shown in Exhibit 1. We found, on average, that award-winning ads had twice the attention-getting power of regular ads.
Said another way, ads that professional creatives themselves judged to be clever, entertaining, fresh, unique, or edgy performed better on our measure of breakthrough than "regular" advertising!
Based on this finding we concluded that we could reassure creatives that this particular research measure of breakthrough does not penalize creativity. We also concluded we could tell our clients, the people actually buying the advertising, that if they want evidence their new print advertisement is likely to do an efficient job of attracting the attention of their consumers they could a) submit the ad for a creative award (a slow process), b) hire an independent panel of experienced creative directors to judge the work (an expensive process), or c) conduct a simple piece of research.
(By the way, I should remind you that so far we are only talking about attention-getting power or breakthrough. Our experiment taught us something quite different about the ability of award shows to predict branding or persuasion - but that's another story.)
Replicating the executional performance measure on the Internet
Our next step was to prove that what we had done on paper could be translated into the world of the Internet. Consequently, we programmed a Web-based interview that would replicate our mall-based interview procedure.
In this experiment conducted with the Consumer Insights division of General Mills, we compared the results of a print study conducted among 150 target audience consumers with a matched sample of another 150 consumers who had been recruited to a site for the Web-based interview.
In the first part of the Web-based interview consumers were given the opportunity to "page" through (forwards and backwards) a series of 10 test and control ads that had been electronically scanned for the interview - a process designed to replicate the experience of the paper version of a clutter book in the mall-based interview.
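A minimal sketch of such a paging control might look like the following; the element and file names are illustrative assumptions, not our actual interview code.

```typescript
// Minimal paging sketch: respondents move forwards and backwards through the
// scanned ads, just as they would leaf through a paper clutter book.
const scannedAds = Array.from({ length: 10 }, (_, i) => `ad${i + 1}.jpg`);
let page = 0;

function showPage(img: HTMLImageElement): void {
  img.src = scannedAds[page];
}

function nextPage(img: HTMLImageElement): void {
  if (page < scannedAds.length - 1) { page += 1; showPage(img); }
}

function previousPage(img: HTMLImageElement): void {
  if (page > 0) { page -= 1; showPage(img); }
}
```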
The results of the experiment are shown in Exhibit 2. On average, the attention scores ("found the ad interesting") are the same regardless of whether the ads were shown to respondents in a paper or in an electronic format. Eight of the 10 ads generated virtually identical measures, while the Internet version generated slightly lower scores for two of the less intrusive ads.
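For readers who want to check such comparisons themselves, a standard two-proportion z-test is one way to judge whether a paper score and an Internet score differ by more than sampling error. This is a sketch of the general technique, not the analysis reported here, and the example numbers are hypothetical.

```typescript
// Standard two-proportion z-test: compare the share of each sample that found
// the ad interesting, pooling the samples to estimate the standard error.
function twoProportionZ(hits1: number, n1: number, hits2: number, n2: number): number {
  const p1 = hits1 / n1;
  const p2 = hits2 / n2;
  const pooled = (hits1 + hits2) / (n1 + n2);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2));
  return (p1 - p2) / se; // |z| > 1.96 suggests a real difference at the 95% level
}

// Hypothetical example: 60 of 150 mall respondents vs. 54 of 150 Web respondents.
const z = twoProportionZ(60, 150, 54, 150); // about 0.71: no significant difference
```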
Eye-tracking without an eye-tracking camera
Just as it is useful from a research standpoint to stop thinking of print ads as tactile objects, so it is important not to think of them as static objects. While it is true that, unlike television commercials, print ads do not move, the mind of the reader does move - through the ad.
Indeed, from the standpoint of diagnostic research, one of the more useful ways to think about the experience of print advertising is the retail metaphor (see Exhibit 3). If you were managing a retail store there would be three variables you would be trying to control to run your business effectively. First, you would try to be as creative as possible to generate traffic or get shoppers through the door. Second, you would lay out the products in the store in such a way as to either a) make it easy for shoppers to find what they came in the store to buy or b) maximize the chances that shoppers will discover something they want that could lead to an impulse purchase. Third, you would focus on making sure that shoppers actually buy something before they leave the store.
If we conceptualize the reader of print advertisements as "shopping information," we will see that a similar process applies here. The first job of a print ad is to stop readers and attract the attention of an audience - the equivalent of generating store traffic. Second, the layout of the ad should be such that readers assemble information and images in the correct order or sequence so as to place a compelling selling proposition in the shopping basket of the mind. Third, at the "virtual checkout counter" of the ad - the brand logo - readers should make a virtual purchase and leave the ad with an intention to buy something.
This dynamic way of thinking about how print advertising works can be a very powerful framework for providing insights during the creative development and optimization of a print ad. But importantly, research is needed to test creative assumptions about how a reader will actually enter and read through the ad. Currently, the gold standard for this kind of research is the eye-tracking camera. Unfortunately, eye-tracking cameras and the Internet cannot easily be combined.
Developing an alternative technique that will work on the Internet actually requires that we look back to a time before eye-tracking cameras. We need to update, for the Internet age, an earlier methodology developed by cognitive psychologists. Remember tachistoscopes? A test using a sequence of controlled, timed exposures to a stimulus ad provides a practical solution to the problem of measuring the path of the mind through the ad. Fortunately for us, every computer connected to the Internet has a highly accurate clock built into its core processor.
We measure the Flow of Attention of a reader through an advertisement with computerized exposures of three durations: 1/2 second, 1 second, and 4 seconds. After each exposure, respondents are asked what they think they saw. Then a response grid is shown on the screen and respondents are asked to click on the cells of the grid to report which part of the ad they were looking at. The results are then shown in a graphical display, as in Exhibit 4.
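In a browser, the controlled exposure can be approximated with an ordinary timer. The sketch below assumes an image element for the ad and a clickable grid whose cells carry row and column attributes; all names are illustrative, not our actual interview code.

```typescript
// Hypothetical tachistoscope-style exposure: show the ad for a controlled
// duration, hide it, then record which grid cell the respondent clicks.
function runTimedExposure(adImage: HTMLImageElement,
                          grid: HTMLElement,
                          durationMs: number): Promise<{ row: number; col: number }> {
  return new Promise(resolve => {
    adImage.style.visibility = "visible";
    window.setTimeout(() => {
      adImage.style.visibility = "hidden"; // the controlled exposure ends here
      grid.style.visibility = "visible";
      grid.addEventListener("click", function handler(e: MouseEvent) {
        // Each cell is assumed to carry data-row and data-col attributes.
        const cell = (e.target as HTMLElement).dataset;
        grid.removeEventListener("click", handler);
        grid.style.visibility = "hidden";
        resolve({ row: Number(cell.row), col: Number(cell.col) });
      });
    }, durationMs);
  });
}

// The three exposures described above: 500 ms, 1,000 ms, and 4,000 ms.
```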
The first brief exposure is usually so short that the average reader has only enough time to see one thing. This helps to identify the "door" to the ad - the entry point of the mind - which is usually the key to attention-getting power. Notice that in our illustration the first thing the average respondent looks at is the face of Jamie Lee Curtis - not, interestingly enough, her legs. The second, a one-second exposure, shows that the reader next looks down at the visual of the L'eggs package. The third, a four-second exposure, shows the reader fish-hooking back to read the headline announcing the new hosiery line from L'eggs. In other words, the reader, shopping the information in this ad, appears to be answering the following sequence of questions: Who is it? What is it? What about it?
The probability distribution produced by the response grid can be displayed as flow graphs similar to those produced by eye-tracking cameras. But this approach actually has an advantage over eye-tracking. We are not measuring where the eyeball is pointing. We are measuring what the mind sees. And that's what we care about when we are analyzing advertising.
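Producing the distribution itself is simple arithmetic: count the clicks landing in each grid cell at a given exposure duration and divide by the number of respondents. A sketch, with illustrative names:

```typescript
// Aggregate grid clicks from all respondents at one exposure duration into a
// probability distribution over the cells of the ad.
function attentionDistribution(clicks: { row: number; col: number }[],
                               rows: number, cols: number): number[][] {
  const counts = Array.from({ length: rows }, () => new Array<number>(cols).fill(0));
  for (const c of clicks) counts[c.row][c.col] += 1;
  // Normalize counts to probabilities (guarding against an empty sample).
  return counts.map(r => r.map(n => (clicks.length ? n / clicks.length : 0)));
}
```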
To gain insight into what the readers were thinking as they looked at the ad, the verbatims collected in the timed-exposure questions (What did you see?) are coded as part of the communications check of the advertisement. Exhibit 5 shows a comparison of the verbatims collected in the mall and those collected over the Internet as an add-on to the attention-getting power experiment reported above for one of the test ads. As you can see, the results obtained over the Internet are quite similar to those obtained in a live, personal interview.
Putting it all together
As a prescription for advertising research practitioners, a complete set of pre-testing measures should combine "report card" measures of overall advertising impact - such as attention-getting power, motivation, communication, and branding - with diagnostic measures for fine-tuning the creative, such as ratings of liking, entertainment value, and relevant news, along with flow measures like the one described above.
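One way to picture the resulting data record for a single tested ad is sketched below; the structure is an illustrative assumption, not an actual schema.

```typescript
// A hypothetical record combining "report card" and diagnostic measures.
interface PreTestResult {
  // "Report card" measures of overall impact
  attentionGettingPower: number;   // % who found the ad interesting
  motivation: number;
  communication: string[];         // coded communication points
  branding: number;
  // Diagnostic measures for fine-tuning the creative
  liking: number;
  entertainmentValue: number;
  relevantNews: number;
  flowOfAttention: number[][][];   // one grid distribution per exposure duration
}
```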
All of these measures should be collected in an interview that is short enough for a respondent to reasonably complete. We have found that it is practical to do so, as shown in the procedure described in Exhibit 6.
More importantly, the measures should make sense for the people who are going to use them. In our case, we have tried to demonstrate face validity for one of the most important target audiences for advertising research: the agency creatives whose work is being measured.
Finally, the current state of print research is neither cost-effective nor time-effective for the vast majority of advertisers who spend large sums of money to touch their customers with magazines and other print media. But the high-tech world of the Internet has the potential to change all that.