Editor’s note: Tim Huberty is president of Huberty Marketing Research, St. Paul, and an adjunct professor in the Graduate School of Business at the University of St. Thomas, Minneapolis.
Oil and water. Stripes and plaids. Research people and creatives. Qualitative and quantitative research. Some things just don’t mix.
For as long as anyone can remember, quantitative and qualitative research have been treated as incompatible. Quantitative research is numbers. Statistics. Objective. Cold. Clinical. Standard deviations and standard errors and all that other scientific-sounding stuff. Qualitative research, on the other hand, is feelings. Touchy feely. Emotions. Getting in touch with the "inner consumer." Stirring up those things which really determine why people buy the things they do.
You can’t mix the two - quantitative and qualitative - or those two types of people. Quantitative people are left-brain people; qualitative people are right-brain people. They just look at the world in completely different ways. Marketing research firms scoff at ad agency "researchers" as being too "loosey goosey." On the other hand, account planners and similar souls at ad agencies often counter that you "miss the forest when you talk to too many trees."
So everybody goes down their own separate path, always suspicious of the other. Once in a while, quantitative people sneak a few open-ended questions into their surveys to gain "insight" into the numbers. But they are quick to convert those diagnostic verbatims into cold, unfeeling two-dimensional numbers before anyone is the wiser. And "facilitators" continue to listen to people "spill their guts," afraid of compromising their integrity by adding up the number of common lamentations.
Beyond feelings
The problem is that qualitative research is like fixing your brakes. "The squeaky wheel gets the grease." Oftentimes - unfortunately - the person who shouts the loudest or makes the funniest quip receives undue credit for those comments. Those are the comments which are remembered longest, given the most weight. The Rule of Thumb in qualitative research has often been "One-third of the people contribute spontaneously, one-third will contribute if you call on them and one-third are ‘throwaways.’" The "throwaways" are seldom given any credibility. And so, qualitative research becomes even more selective, even more qualitative.
Beyond hearing what everybody said (even those who never said anything), a second major problem confounding qualitative research actually parallels a problem faced in quantitative studies. In a survey, when respondents are asked to rank several items, you have no idea how much more important one item is than the next. For example, respondents are asked to rank the reasons they selected a certain pizza delivery service. They are given the options of food quality, price or speed of delivery. Unfortunately, when they rank those three items, one never knows just how important one item is relative to the other two. In other words, food quality could be 99 percent responsible for the selection, and yet by simply ranking the items one-two-three, the gaps between them stay hidden. That problem was solved in survey research by asking people to "weight" their choices. In other words, respondents are given "100 points" and asked to indicate how much they really prefer each of several options by assigning points to it. Thus, you know how much each item was preferred, relative to the others.
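To make the difference concrete, here is a minimal sketch (in Python, with made-up numbers for the pizza example) of how two respondents can produce identical rankings while their 100-point allocations tell very different stories.

```python
# Hypothetical illustration: two respondents rank the same three reasons
# identically, yet their 100-point allocations tell very different stories.

allocations = {
    "Respondent A": {"food quality": 90, "price": 6, "speed of delivery": 4},
    "Respondent B": {"food quality": 36, "price": 33, "speed of delivery": 31},
}

for name, points in allocations.items():
    # Derive the rank order (1 = most important) from the point allocation.
    ranked = sorted(points, key=points.get, reverse=True)
    ranks = {item: i + 1 for i, item in enumerate(ranked)}
    print(name, "ranks:", ranks, "| points:", points)

# Both respondents produce the identical 1-2-3 ranking, but only the
# 100-point weights reveal that food quality dominates for Respondent A
# while Respondent B is nearly indifferent among the three.
```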
Qualitative weighting
Unfortunately, qualitative interviewers and focus group moderators have never been that smart. A few might ask participants for a "show of hands," but observers are under strict orders never to "count the noses." But why not? Why can’t that same technique which has proven so successful in quantitative research be applied to focus group discussions or even one-on-one interviews? Hence, we have the birth of the "Bean Test."
For quite some time, participants have been jotting down their answers before the group discussion begins. This makes people accountable for their answers - and helps make sure that the most articulate (i.e., first to answer) respondent does not influence everybody else. Ironically, however, many participants continue to nod publicly to what they have not written down privately. So those initial thoughts never make it onto the tape - or into the report.
In the Bean Test, respondents are given 100 beans and told to make their choices by allocating them. For example, in a focus group setting, participants have just been shown several different ad concepts. The moderator says,
"I want you to indicate how much you like each of the ads by giving them between 1-100 beans. You allocate your beans by how much you like each ad. If you really like one ad, you can give it all your beans. You don’t have to give beans to every item. On the other hand, if you like several ads somewhat, divide your beans according to how much you like each one. You cannot split your beans and they must add up to 100 total."
Now participants have the opportunity to vote. And the results are oftentimes remarkably different from what the "squeaky wheels" have been pontificating about. For example, a few months ago I was conducting focus groups with consumers, showing some rough ad concepts for potato chips. A few people really liked one ad in particular and made no secret of their enthusiasm. Soon everybody in the room was bleating contentedly. At the same time, the other two ads were receiving a fairly positive reception. The people from the ad agency in the back room initially thought they had a clear-cut winner. And yet, after the groups were over and the participants had gone home, we looked at the "Bean Results" and found that even the squeaky wheels who had been so boisterous about "their" choice had split their beans fairly evenly. In fact, when the beans were added up over several groups, the one ad the squeaky wheels had championed actually ended up in last place.
And so, right now, those qualitative purists who haven’t gone into cardiac arrest are shouting, "You can’t count noses in a focus group! It’s only eight to 10 people. It’s not a quantitative sample. It’s a bastardization of qualitative research." But why can’t you? For one thing, it is still qualitative research. You’re still getting the "touchy feely" feedback which is so critical to this type of research. Plus, by adding up the beans across several groups - or one-on-one interviews - you’re coming suspiciously close to generating a quantitative sample. For example, if you’ve done four focus groups of eight people each (not an uncommon practice), you’ve talked to 32 people. Both ad agencies and research suppliers have acknowledged for years that "30 is the smallest quantitative sample I’m comfortable with."
"Reach and frequency" also count in the Bean Test
But merely counting beans is not enough. You also have to look at the total number of people who give beans to each and every item - or, conversely, who do not give any beans to one or more items. I ran into this situation just a week ago. I was showing focus group participants (nine groups of six people each, or a "quantitative sample" of 54 people) some ideas for a new magazine cover. To the surprise of everyone, once the beans were counted, the old "standard" cover received almost as many beans as two or three of the new ones. And yet, in the groups themselves, the participants had been fairly enthusiastic about the new designs.
When I looked at the participants’ notes, however, I noticed that a few respondents had given all 100 beans to the old cover. Thus, five or six people had given the old design an outsized share of the preference. So it’s crucial that you look at not only the total number of beans, but also the number of bean givers - sort of like what media people have been preaching about "reach and frequency" for years. An interesting sidebar: the high number of beans given by those few people also told us that the old "standard" cover had a lot more emotional pull than we had originally suspected.
Finally, it’s also important to look at the number of items that do not receive any beans at all. Items with no beans represent "rejectees."
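Pulling the last few points together, here is a minimal sketch - Python again, with hypothetical cover names and bean counts - of the three tallies described above: total beans per concept (the "frequency"), the number of people who gave each concept any beans at all (the "reach"), and items that received no beans from anyone (the "rejectees").

```python
from collections import defaultdict

# Hypothetical bean sheets collected across several groups; each dict is one
# participant's 100-bean allocation over the cover concepts shown.
sheets = [
    {"Old standard": 100, "New A": 0,  "New B": 0},
    {"Old standard": 0,   "New A": 60, "New B": 40},
    {"Old standard": 10,  "New A": 50, "New B": 40},
    {"Old standard": 100, "New A": 0,  "New B": 0},
    {"Old standard": 0,   "New A": 70, "New B": 30},
]

total_beans = defaultdict(int)   # "frequency": how many beans in all
bean_givers = defaultdict(int)   # "reach": how many people gave any beans

for sheet in sheets:
    for item, beans in sheet.items():
        total_beans[item] += beans
        if beans > 0:
            bean_givers[item] += 1

rejectees = [item for item in total_beans if total_beans[item] == 0]

for item in total_beans:
    print(f"{item}: {total_beans[item]} beans from {bean_givers[item]} of {len(sheets)} people")
print("Rejectees (no beans from anyone):", rejectees or "none")

# The split matters: a concept can pile up beans from a handful of
# all-or-nothing fans (high frequency, low reach), which is exactly the
# pattern the magazine-cover example uncovered.
```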
Greater client satisfaction
Clients, without exception, love the Bean Test. All of a sudden they are part of the qualitative research process. Most have more quantitative training anyway, so this helps them "get a foot in the qualitative door." More importantly, it removes some of the mystery - the hocus pocus - of relying on the magical, interpretive powers of a moderator who "just knows because I always do this." On the other hand, as you might expect, facilitators are not too happy when their esoteric powers are subject to demythologizing.
And so, just as mother once told you to "eat your veggies," it’s time to "count your beans." The benefits greatly outweigh the risks. And best of all, in qualitative research the opinions and attitudes of every human BEAN get counted.