Editor's note: Randy Hanson is director, analytical services, for Maritz Marketing Research, St. Louis.
In a typical customer satisfaction study, respondents evaluate one or more products or services. Satisfaction on an overall basis is rated, followed by ratings on many individual attributes. A key question for marketers is which attributes are most important in determining overall satisfaction. Certainly, not all attributes have equal impact. A method of prioritizing is needed to allocate limited resources more efficiently.
Researchers have suggested many procedures for dealing with this problem. Several are considered by Green and Tull (1975) and Hauser (1991). Work continues in this area; no one true "answer" for all applications has emerged. One broad classification of the many approaches is stated versus derived importance.
Stated importance
These measures ask respondents to explicitly state their perception of the importance of each attribute. Three examples of how these data are collected are shown below:
- 10-point scale. Rate the importance of … from 1 to 10, where 10 means "extremely important" and 1 means "not at all important."
- Ranking. Rank the following 17 attributes in terms of their importance to you. Rank the most important attribute "1," the second most important attribute "2," and so on.
- Constant sum. Allocate 100 points to the attributes listed below to represent how important each is to you. You can use any number between 0 and 100; just make sure the points you give an attribute reflect its importance, and that your answers sum to 100.
A few major corporations have chosen stated importance as their preferred method. Reasons include the face validity of the results and the method's straightforward administration and interpretation. This view is supported by Hauser, who writes "self-stated importances [have] sufficient validity."1
While stated importance is valid in theory, its performance in practice is uneven. The techniques assume respondents can reliably assign a number to their perception of importance. Depending on the particular population sampled, this assumption can represent a substantial leap of faith. In addition, interviews that include stated importance measures are longer, more repetitive, and more tedious than those that do not.
Perhaps a greater criticism results from the usual procedures for customer satisfaction studies. Qualitative research is first conducted to construct a list of attributes, and the attributes retained for the quantitative phase of the study are generally the most important of these. This situation leads to high ratings for most attributes. The result can be few (if any) statistical differences among attributes, so the aim of the study - to prioritize the attributes - is thwarted.
Derived importance
Derived importance relies on the statistical association between attribute ratings (predictors) and an overall rating (criterion). The importance of an attribute is statistically determined from this relationship.
As an example, assume we have two products, Q and R, rated on two attributes, 1 and 2, and that product performance on the attributes can be summarized as:
              Product Q    Product R
Attribute 1   Excellent    Poor
Attribute 2   Poor         Excellent
Say that we also know Product Q has higher overall satisfaction levels than Product R. We can then infer that Attribute 1 (on which Product Q excels) is more important than Attribute 2 (where Product R excels) in determining overall satisfaction levels.
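In practice this inference is made from respondent-level data rather than from a two-product summary. The short Python sketch below makes the idea concrete; the ratings, weights, and sample size are invented purely for illustration, and the derived importance of each attribute is taken here to be its Pearson correlation with the overall rating.

```python
import numpy as np

# Hypothetical respondent-level data, invented purely for illustration:
# each row is one respondent rating Attribute 1, Attribute 2, and overall
# satisfaction.  In this toy example, Attribute 1 mostly drives overall.
rng = np.random.default_rng(0)
attr1 = rng.integers(1, 11, size=200).astype(float)
attr2 = rng.integers(1, 11, size=200).astype(float)
overall = 0.7 * attr1 + 0.2 * attr2 + rng.normal(0, 1, size=200)

# Derived importance: the strength of the statistical association between
# each attribute rating and the overall rating (here, Pearson correlation).
for name, ratings in (("Attribute 1", attr1), ("Attribute 2", attr2)):
    r = np.corrcoef(ratings, overall)[0, 1]
    print(f"{name}: Pearson r with overall = {r:.2f}")
```

Attribute 1, which carries more weight in the simulated overall rating, shows the larger correlation, mirroring the inference drawn from Products Q and R above.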
Green and Tull consider four derived importance measures, listed below (a brief computational sketch follows the list). In the almost impossible case where all attributes are uncorrelated with each other, all four yield identical assessments of relative importance.
- p - bivariate (Pearson) correlation. This measure has the advantages of familiarity and simplicity. Unlike the other three, it is not affected by adding or deleting other attributes in a regression equation. Its principal disadvantage is that joint effects with other attributes go undiscovered.
- b' - standardized regression coefficient, or beta weight. A standard part of most regression output, it can be quite unstable. A raft of somewhat unlikely assumptions must be satisfied for these measures to provide the best estimates. Model misspecification and the influence of other attributes in the regression model are particularly troublesome.
- b'*p - the product of the beta weight and the corresponding Pearson correlation. This calculation is an interesting compromise between the two earlier measures. Even in the presence of correlated predictors it has the property of summing (across all attributes) to R², the percentage of variance explained by the regression model. It shares the disadvantages of both measures from which it is formed.
- p (m-1) - the coefficient of part determination. This measure represents the incremental gain in predictive power when an attribute is the last one added to a regression equation. It is also adversely influenced by the inclusion or exclusion of particular attributes in the model.
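The following sketch shows one plausible way to compute the four measures for each attribute from a respondent-level data matrix. The function and variable names are hypothetical, and the code is a reading of the definitions above rather than Green and Tull's own calculations.

```python
import numpy as np

def derived_importance(X, y):
    """For each attribute (column of X), compute the four measures discussed
    above: bivariate correlation, beta weight, their product, and the
    coefficient of part determination (the gain in R^2 when the attribute is
    the last one added to the regression).  X is an (n respondents x
    k attributes) array; y holds the overall ratings.  Sketch only."""
    k = X.shape[1]
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardized predictors
    ys = (y - y.mean()) / y.std()               # standardized criterion

    def r_squared(Z, t):
        # Regression through the origin is fine here: everything is centered.
        coefs, *_ = np.linalg.lstsq(Z, t, rcond=None)
        resid = t - Z @ coefs
        return 1.0 - resid @ resid / (t @ t), coefs

    full_r2, betas = r_squared(Xs, ys)
    rows = []
    for j in range(k):
        r = np.corrcoef(X[:, j], y)[0, 1]                 # bivariate correlation
        reduced_r2, _ = r_squared(np.delete(Xs, j, axis=1), ys)
        rows.append({"r": r,                              # p
                     "beta": betas[j],                    # b'
                     "beta_x_r": betas[j] * r,            # b'*p
                     "part": full_r2 - reduced_r2})       # part determination
    return full_r2, rows
```

Note that the b'*p values sum to R² whether or not the predictors are correlated, and that with uncorrelated predictors each beta weight simply equals the corresponding bivariate correlation.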
It is commonplace in customer satisfaction research for the attributes to be correlated - sometimes highly - with each other. This problem, called multicollinearity, makes it difficult to disentangle the separate effects of the individual attributes on overall satisfaction. The latter three measures are all subject to instability when multicollinearity is present. When intercorrelations exceed .5 - a fairly frequent occurrence for customer satisfaction data - the beta weights can shift dramatically, perhaps even changing sign.
The latter three measures can also be affected when particular attributes are added to or deleted from the regression model. Additionally, the omission of important attributes from the study is a concern. Finally, the multiple regression model used for the latter three measures must have the correct functional form.
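A small simulation can illustrate this instability. In the invented example below, two attributes are correlated at roughly .9 and only the first truly drives overall satisfaction; the exact numbers will vary, but the beta weight for the second attribute changes dramatically depending on whether the first attribute is in the model, even though its bivariate correlation with overall satisfaction stays high throughout.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
common = rng.normal(size=n)
attr1 = common + 0.3 * rng.normal(size=n)   # the two attributes share a common
attr2 = common + 0.3 * rng.normal(size=n)   # component, so r is about .9
overall = attr1 + rng.normal(size=n)        # only attribute 1 drives overall

def beta_weights(X, y):
    """Standardized regression coefficients (beta weights)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return b

print("bivariate r with overall:",
      [round(float(np.corrcoef(a, overall)[0, 1]), 2) for a in (attr1, attr2)])
print("betas, both attributes in model:",
      np.round(beta_weights(np.column_stack([attr1, attr2]), overall), 2))
print("beta, attribute 2 alone:",
      np.round(beta_weights(attr2.reshape(-1, 1), overall), 2))
```

The bivariate correlations, by contrast, are unchanged by what else is in the model.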
Green and Tull conclude their discussion by stating:
"Unfortunately, all four measures exhibit limitations in the case of correlated predictors and the question of relative importance remains ambiguous." 2
The potential pitfalls of the latter three measures can make an already murky situation even murkier. In the face of this, it is best to use the first measure, the simple bivariate correlation.
But considering each individual attribute in isolation is also unrealistic. Given the collinear structure of customer satisfaction data, what can be done? Green and Tull offer three alternatives for combating multicollinearity. First, you can ignore it. This may be acceptable for forecasting applications, but not for the task at hand.
Second, you can delete one (or more) of the highly correlated attributes. This solves the statistical problem while creating others. A managerial problem, for example, might be objections such as, "Why is my attribute not included in the analysis? My attribute is not the same as the one included!"
Third, you can transform the original attributes into an uncorrelated set of new variables using one of several techniques (e.g., principal components analysis). This last alternative is the best. The principal components reveal the structure present in the data while allowing analyses such as stepwise multiple regression to be performed without multicollinearity problems.
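A minimal sketch of this third alternative follows, using invented data and variable names: the attributes are standardized, principal components are extracted from their correlation matrix, and overall satisfaction is then regressed on the uncorrelated component scores (a stepwise procedure could be substituted at that last step).

```python
import numpy as np

# Invented data: four attributes that share a strong common component.
rng = np.random.default_rng(2)
base = rng.normal(size=(300, 1))
X = base + 0.4 * rng.normal(size=(300, 4))    # highly correlated attributes
y = X[:, 0] + rng.normal(size=300)            # overall satisfaction

Xs = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize attributes
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Xs, rowvar=False))
order = np.argsort(eigvals)[::-1]             # largest variance first
scores = Xs @ eigvecs[:, order]               # uncorrelated component scores

# Regressing overall satisfaction on the component scores is free of
# multicollinearity; the loadings (eigvecs) show how each original
# attribute contributes to each component.
betas, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
print("component regression coefficients:", np.round(betas, 2))
```

Because the component scores are mutually uncorrelated, each coefficient can be interpreted without the instability described above, and the loadings tie the components back to the original attributes.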
Conclusions and suggestions
The study of attribute importance will continue, and some of the viewpoints discussed here may change. Given the current state of affairs and the reasons cited, use a derived measure of importance over a stated measure. It is intuitively appealing to this researcher to use the actions of respondents, as revealed by their rating patterns, as well as their words, to determine attribute importance.
Among the class of derived measures, use the simple bivariate correlation between an attribute and overall satisfaction. Theoretical considerations and practical experience with many real-world customer satisfaction datasets support these recommendations.
Notes
1 Hauser, John R., "Comparison of Importance Measurement Methodologies and Their Relationship to Customer Satisfaction," M.I.T. Marketing Center Working Paper 91-1, January 1991 (Cambridge, Massachusetts).
2 Green, Paul E. and Tull, Donald S., Research for Marketing Decisions, 3rd edition, Prentice-Hall, Inc., 1975 (Englewood Cliffs, New Jersey), pp. 478-484.