Editor's note: Tim Glowa is president of North Country Research Inc., Calgary, Alberta.
One of the fundamental purposes of market research is to understand a targeted audience. More specifically, many firms want to understand the basic attitudes of existing or prospective customers. Attitudes are based on the information respondents have, their perceptions, their past experience, their feelings (liking and disliking), and their intended behavior.
Beyond this, most firms really want to understand - and ultimately influence - consumer behavior. Yet rather than examining behavior directly (through techniques like discrete choice modeling or conjoint analysis), many firms examine attitude measures instead. There are several reasons for this:
First, it is easier to measure attitudes than it is to observe, measure, analyze, and interpret actual consumer behavior.
Second, there is a belief, and a well-founded one, that attitudes influence behavior. This is especially true when thinking about brand equity. If a customer likes Westin Hotels more than Hilton Hotels, there is a good chance they will choose the brand they prefer.
Although attitude measures can be used to help understand how a product or competitor is positioned in the minds of consumers (or to uncover latent needs, wants, and desires, and therefore a potential opportunity), they have limitations. It is very difficult to extrapolate consumer behavior from an understanding of consumer attitudes. If the real goal is to understand behavior, attitude measurement may actually hinder that objective, because attitude measurement, taken in isolation, can yield conflicting conclusions depending on how the question is asked and what scale is used. Other techniques, such as discrete choice modeling or conjoint analysis, might be more useful (see Struhl, "Understanding a Better Conjoint than Conjoint," Quirk's Marketing Research Review, June 1994, or Glowa and Lawson, "Discrete Choice Experiments and Traditional Conjoint Analysis," Quirk's Marketing Research Review, May 2000).
It is because of these limitations that I am very skeptical of research that relies entirely on attitude measurement and attempts to link it directly to an understanding of consumer behavior.
For market research to be valuable, the user must have confidence in the validity and reliability of the results. I hope to illustrate, through the use of a real example directly comparing several alternative scales, that this may not always be the case.
This article discusses how several alternative attitudinal scales, tested on over 1,000 respondents and all measuring the same attribute, provided radically different empirical results. My intention is not to criticize existing practices employed by my colleagues in the market research industry, but to illustrate a potential problem - one that could be more widespread than we believe - and to recommend solutions.
Background
A major Canadian airline measures the satisfaction of its passengers on a monthly basis. This information, collected from about 1,000 respondents from across Canada traveling on a range of flights (short-haul versus long-haul), is compared to historical data to measure and track how passenger satisfaction is changing over time, and to identify any shortcomings in the product offered.
From the thousands of passengers traveling with the airline each month, a representative sample is pulled from the database of customer records and used for CATI interviews. Interviews are conducted in both English and French (this is Canada, after all). Each interview covers a range of topics measuring satisfaction with the entire flight process (check-in, boarding, in flight, claiming baggage), and lasts about 15-20 minutes. The results presented below are from the April 1999 tracker and are used with permission.
The results
Respondents were asked to rate a variety of attributes using the standard scale of excellent, very good, good, fair, and poor. In this test, the same respondents were also asked to provide a rating of the same attribute using alternative scales. The attitude that was measured in this case related to the satisfaction with the courtesy and professionalism of the flight attendants.
Each of the scales used in this study, with the corresponding question and a brief discussion, is presented below.
- Standard scale - excellent to poor
For the question "Please rate the courtesy and professionalism of the flight attendants on your most recent journey," 70 percent gave the airline either excellent or very good marks.
The first observation relating to this scale is that it is not balanced; there are three "good" measures compared to only two "not so good" measures. Many researchers believe it is important to have a balanced scale (Dillman, 2000).
There has been much discussion about the merits of this scale, both in this journal and in other market research publications. One of the best criticisms of this scale appeared in Quirk's in 1997 from Joseph Duket (Duket, 1997):
According to Funk & Wagnalls, the term excellent means "being of the very best quality." ... One person or company can not be more excellent than another. If you're doing the best job or providing the best quality, no one can do better. For the word "poor," however, Funk & Wagnalls uses the synonyms inferior and unsatisfactory in its definition. Confusion arises when the word inferior is described as "lower in quality, worth or adequacy; mediocre; ordinary." Mediocre is then defined as "of only average quality." So, taking this exercise in interpretation to the extreme, poor performance could actually mean an average rating.
- Completely satisfied to completely unsatisfied
An alternative six-point balanced scale with only the two end-points defined was also used to measure satisfaction. In response to the question "Using a scale of 1 to 6, where 6 is completely satisfied and 1 is completely unsatisfied, how satisfied were you with the courtesy and professionalism of the flight attendants?" 82 percent selected the top two boxes.
This scale is attractive because of the definitiveness of the two end-points; if a customer is completely satisfied, it is not possible to improve satisfaction.
This approach, suggested by Steven Lewis in a previous article in Quirk's (Lewis, 1997), argues that the intermediate rating terms used to describe satisfaction variables have different meanings in other countries, and proposes a dichotomous adjective scale on which only the definitive end-points are defined (e.g., "totally satisfied" to "totally unsatisfied"). This scale represents a tremendous improvement over the prior scale.
- Asking respondents directly
The third scale tested in this study simply asked the respondents if they were satisfied. In response to the question "Overall, were you satisfied with the courtesy and professionalism of the flight attendants?" 95 percent of the respondents replied "yes."
This approach is certainly the most direct; the researcher gains a very clear understanding of how a customer feels about the service. There are no questions about combining the top two boxes (or should it be the top three?) to represent those who are satisfied. This scale provides a very real measurement of a firm's performance.
- Propositional descriptive
The final scale tested is the propositional descriptive scale. This scale not only provides an assessment of the degree of satisfaction with a given product, but also quantifies what steps are needed to make an improvement. This scale requires respondents to select the appropriate descriptor corresponding to their level of satisfaction (Glowa and Lawson 2000b). In response to the question "Please rate the courtesy and professionalism of the flight attendants towards serving you. Were they...", customers selected from a series of statements. These statements, with the percentage of passengers who selected each, appear in the chart.
Statement | % responding
Made you feel that they were genuinely pleased to serve you and happily provided assistance | 47%
Were pleased to serve you and provided assistance when required | 41%
Made you feel like they were just doing their jobs when serving you | 12%
Although the previously described scales all provide, to some measure, an assessment of how satisfied customers are at a particular point in time, they are difficult to act on strategically. The propositional descriptive scale goes beyond these traditional scales by providing a quantifiable, non-subjective measure of satisfaction that can easily be understood and made actionable, if desired, by senior management.
Unlike a scale that measures flight attendant satisfaction as "good" or "very good," this scale provides not only a very clear measurement of performance, but also a ruler against which future satisfaction can be measured. The vice president of in-flight operations for the airline might struggle with how to improve satisfaction from "good" to "very good" or "excellent." Is it the friendliness of the flight attendants? Are they not smiling? Are there not enough flight attendants? Are they being rude to passengers? Conversely, if the same vice president is given a customer satisfaction report that uses the propositional descriptive scale and wants to improve satisfaction among passengers, he has a greater understanding of the problem, and the ammunition to correct it.
Attitude measurement
This article has identified several different attitude measurement scales and shown how the same respondents reply to each. All measure the courtesy and professionalism of flight attendants, yet the resulting satisfaction level varies widely depending on the scale used.
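To make that spread concrete, the headline figures reported above can be lined up side by side. The short sketch below does nothing more than tabulate them; the grouping of "favorable" responses for each scale (top two boxes for the first two scales, the single top statement for the propositional descriptive scale) is an illustrative choice, not the only defensible one.

```python
# Reported "favorable" shares for the same attribute (flight attendant
# courtesy and professionalism), by scale, from the April 1999 tracker.
# The groupings of what counts as "favorable" are illustrative assumptions.
results = {
    "Excellent-to-poor (top two boxes)": 0.70,
    "Six-point satisfaction (top two boxes)": 0.82,
    "Direct yes/no ('were you satisfied?')": 0.95,
    "Propositional descriptive (top statement only)": 0.47,
}

for scale, share in results.items():
    print(f"{scale:<48} {share:.0%}")

# Same respondents, same attribute, yet "satisfaction" ranges from roughly
# half of passengers to nearly all of them.
spread = max(results.values()) - min(results.values())
print(f"\nSpread across scales: {spread:.0%}")
```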
Attitude measurement is useful for uncovering perceptions and opinions, but not necessarily for providing a strategic link to an actionable understanding of consumer behavior. To act on this information strategically, two things need to happen:
- The researcher needs a clear understanding of what steps are needed to move customers from one rating level to the next.
- The researcher needs to understand whether change will affect consumer behavior (i.e., will it make any difference, and if so, by how much?).
With both of these pieces of information, the firm can, through a cost/benefit analysis, determine whether or not an improved satisfaction rating is worth pursuing. It is easy to improve satisfaction - a company can always provide more training for staff, or have more staff on hand to assist customers - but the difficult question is whether a firm should implement improvements at all. Unless the relationship between attitudes, performance, and customer behavior is correctly identified, the cost of improving an attitude score may exceed the benefit.
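As a rough illustration of that cost/benefit logic, consider the sketch below. Every figure in it is a hypothetical placeholder - none comes from the airline study - but it shows how the decision turns on the assumed behavioral effect, which is precisely the quantity attitude measurement alone does not supply.

```python
# Hypothetical cost/benefit check: is an extra service-training program worth it?
# Every number below is an assumption for illustration only, not from the study.
annual_program_cost = 2_000_000     # added cost of extra flight-attendant training
passengers_per_year = 5_000_000     # passengers exposed to the improved service
margin_per_trip = 40.0              # contribution margin per one-way trip

# Assumed behavioral effect: share of passengers who take one extra trip per
# year because of the better service experience. This is the key unknown that
# a satisfaction score by itself cannot supply.
incremental_repeat_share = 0.005

incremental_contribution = passengers_per_year * incremental_repeat_share * margin_per_trip
print(f"Incremental contribution: ${incremental_contribution:,.0f}")
print(f"Program cost:             ${annual_program_cost:,.0f}")
print("Worth doing" if incremental_contribution > annual_program_cost else "Not worth doing")

# Break-even: how large would the behavioral effect have to be?
break_even_share = annual_program_cost / (passengers_per_year * margin_per_trip)
print(f"Break-even repeat share:  {break_even_share:.2%}")
```

Under these placeholder numbers the training program would not pay for itself unless it shifted at least one percent of passengers; the point is not the figures, but that the decision cannot be made from the satisfaction score alone.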
There are several methods available for quantifying the relationship between satisfaction levels, price, and future behavior. The most common are conjoint analysis and its more robust successor, discrete choice modeling. For a discussion of these techniques, please see Glowa and Lawson, 2000a; Glowa and Lawson, 2000b; Orme, 2000; McCullough, 1999; Orme and King, 1998; Struhl, 1994.
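For readers unfamiliar with how a discrete choice model turns an attribute change into a prediction of behavior, the sketch below shows the basic mechanics of a multinomial logit share simulation. The coefficients, fares, and service ratings are invented for illustration only; in practice they would be estimated from respondents' observed or stated choices.

```python
import math

# Minimal multinomial logit share simulation. Coefficients and alternatives are
# invented for illustration; a real study estimates them from choice data.
def choice_shares(utilities):
    expu = [math.exp(u) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

price_coef, service_coef = -0.02, 1.5   # assumed utility weights

def utility(fare, service_rating):
    # Utility is linear in fare (dollars) and a 0-1 service rating.
    return price_coef * fare + service_coef * service_rating

base     = [utility(300, 0.6), utility(280, 0.5)]  # our airline vs. a competitor
improved = [utility(300, 0.8), utility(280, 0.5)]  # after a service improvement

print("Base shares:    ", [f"{s:.1%}" for s in choice_shares(base)])
print("Improved shares:", [f"{s:.1%}" for s in choice_shares(improved)])
```

In this hypothetical setup, raising the service rating shifts roughly seven share points toward the improved carrier - the kind of behavioral estimate that, combined with the cost figures above, makes the trade-off decision tractable.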
References
Dillman, Don, Mail and Internet Surveys: The Tailored Design Method. Wiley, New York, 2000.
Duket, Joseph, "Comment cards and rating scales: Who are we fooling?" Quirk's Marketing Research Review, May 1997. [QuickLink No. 257]
Glowa, Tim and Sean Lawson, "Discrete choice experiments and traditional conjoint analysis," Quirk's Marketing Research Review, May 2000(a). [QuickLink No. 592]
Glowa, Tim and Sean Lawson, "Satisfaction measurement: is it worth it?" Quirk's Marketing Research Review, October 2000 (b). [QuickLink No. 618]
Lewis, Steven, "The language of international research: 'Very satisfied' and 'totally satisfied' are not the same thing," Quirk's Marketing Research Review, November 1997. [QuickLink No. 295]
McCullough, Dick, "Making choices: using trade-off analysis to shape your new product," Quirk's Marketing Research Review, May 1999. [QuickLink No. 491]
Orme, Brian, "Hierarchical Bayes: Why all the attention?" Quirk's Marketing Research Review, March 2000. [QuickLink No. 574]
Orme, Brian and W. Christopher King, "Conducting full-profile conjoint analysis over the Internet," Quirk's Marketing Research Review, July 1998. [QuickLink No. 359]
Struhl, Steven, "Discrete choice modeling: Understanding a better conjoint than conjoint," Quirk's Marketing Research Review, June 1994. [QuickLink No. 86]