Understanding vulnerability analysis
Editor's note: Randy Hanson is director, analytical services, for Maritz Marketing Research, St. Louis.
Often when conducting a customer satisfaction study, the vital questions are, "Where should I allocate scarce resources to best satisfy and retain my customers?" and, "Should my focus be sales training, customer service, or billing procedures?" A compelling tool frequently used to help answer these questions is a graph that plots each attribute's importance and satisfaction levels in a simple two-dimensional scatterplot. These graphics go by many names, such as importance/performance analysis, quadrant analysis, and the performance improvement planner.
Vulnerability analysis is also useful in prioritizing customer satisfaction attributes for action or consideration. This technique goes beyond the sometimes ambiguous plots of each attribute's importance and performance to give the marketer something truly valuable: an attribute list ordered from highest to lowest priority. In a world of black boxes, vulnerability analysis is almost elegant in its simplicity. Though this discussion focuses on customer satisfaction studies, the approach is useful in many marketing research applications. The more you use vulnerability analysis, the more you will like it!
What is vulnerability analysis?
Vulnerability analysis begins with a crosstabulation. This familiar, simple method of multivariate analysis does not carry the baggage of assumptions needed for other methods: missing data problems, departures from normality and linearity, and the like are of little or no concern. Vulnerability analysis simply shows the relationship between an individual attribute and a dependent measure (e.g., overall satisfaction, repurchase intention, likelihood to recommend to a friend).
The crosstabulations involve each attribute in turn as a "stub" or row variable, and the dependent measure as the "banner" or column variable (see table 1 for an example). The scales used for the dependent measure and the attributes are collapsed into two categories. This categorizing can be achieved in several ways. If your company will accept no less than a perfect "10," the categories will be 1-9 versus 10. If you decide an "8" is acceptable, the breaks will be 1-7 versus 8-10. In a tracking study, where satisfaction levels hopefully increase over time, the categories can be altered to reflect the new higher target, such as switching from breaks of 1-8 versus 9-10 to 1-9 versus 10.
It is probably better to be less subjective and categorize the data based on current satisfaction levels and the breaks that are evident in each variable's frequency distribution. Perhaps the best method is to categorize each attribute using a CHAID or CART search procedure. Basically, if respondents who rate an attribute "10" give significantly higher overall ratings than those who rate it "9," then breaks of 1-9 versus 10 are appropriate. On the other hand, if respondents who rate an attribute "8," "9," or "10" all give similar overall ratings, these scale points should be grouped together, so reasonable breaks are 1-7 versus 8-10. Since there is no benefit in terms of higher overall satisfaction, a target rating of "8" is all that is needed. Each attribute, then, potentially has its own unique target.
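To make the idea concrete, here is a minimal Python sketch of such a data-driven break search. It is a simplified heuristic stand-in for a full CHAID or CART procedure; the DataFrame, the column names, and the min_gap threshold are all assumptions made for illustration.

```python
# A heuristic break search (hypothetical data and threshold), not full CHAID/CART:
# walk down from the top of the scale and split at the first point where the
# average overall rating drops off noticeably.
import pandas as pd

def suggest_break(df, attribute, overall="overall", min_gap=0.5):
    """Return the lowest attribute rating that still 'pays off' in overall satisfaction."""
    means = df.groupby(attribute)[overall].mean().sort_index()
    top = means.index.max()          # best possible rating, e.g., 10
    target = top
    for point in sorted(means.index, reverse=True)[1:]:
        if means.loc[top] - means.loc[point] > min_gap:
            break                    # overall satisfaction falls off below this point
        target = point               # this scale point performs like the top group
    return target                    # e.g., 8 implies breaks of 1-7 versus 8-10
```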
Let's consider each crosstabulated frequency from our example in more detail. Beginning in Cell D, we find that 151 customers are satisfied overall as well as satisfied with the customer service department (high ratings for both). At least for this attribute, these respondents do not seem particularly vulnerable to competition.
Moving to Cell B, we find 40 customers are satisfied overall (9 or 10) but less satisfied with the customer service department (1-7 ratings). The lower marks for customer service have no adverse effect on the overall rating for these people. Even though there is room for improvement, greater focus on the customer service department is wasteful for this group.
Continuing to Cell C, we see that 29 customers rated the customer service department high, but overall satisfaction low. Something is having a negative impact on the overall rating, but it is not likely to be the customer service department. Since this group is already highly satisfied with this attribute, it would again be a waste to invest more resources here.
For varying reasons, increased focus on, and investment in, the customer service department may be inefficient for these first three groups. What about the customers in Cell A of the crosstabulation? Here we find 80 people who are relatively less satisfied both overall and with the customer service department. Dissatisfaction overall may be related to dissatisfaction with this attribute for this customer group. Without an improvement in the customer service department, these 80 customers are vulnerable to a competitor (hence the name vulnerability analysis). Vulnerability is simply the number of customers who are simultaneously less satisfied with an attribute and less satisfied overall. They are therefore vulnerable to a competitor who can outperform you on that attribute.
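For those who want to reproduce the crosstabulation, here is a minimal sketch in Python, assuming the survey responses sit in a pandas DataFrame; the column names and function name are hypothetical, and the breaks (1-7 versus 8-10 for the attribute, 1-8 versus 9-10 overall) mirror the example above.

```python
# Build the 2 x 2 table for one attribute and pull out Cell A (low on both).
import pandas as pd

def vulnerability_count(df, attribute, overall="overall",
                        attr_target=8, overall_target=9):
    rated = df.dropna(subset=[attribute, overall])    # only those able to rate both
    attr_low = rated[attribute] < attr_target         # less satisfied with the attribute
    overall_low = rated[overall] < overall_target     # less satisfied overall
    table = pd.crosstab(attr_low, overall_low)        # the 2 x 2 of table 1
    return table.loc[True, True]                      # Cell A: the vulnerable group

# In the example described above, the four cells are A=80, B=40, C=29, D=151,
# so this function would return 80 for the customer service department.
```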
In a strict sense, vulnerability as defined above should be called relative vulnerability. You can argue that all customers are vulnerable to a competitor who, for example, offers a superior product or service at a lower price. This analysis focuses on the core group of customers who are most vulnerable to competition on this specific attribute given the existing situation in the marketplace.
Similar analyses are conducted for each attribute in the study. The output for vulnerability analysis is a table of all attributes ranked by their corresponding vulnerability. The attribute at the top of this list answers the question, "If satisfaction remains at current levels, on which attribute am I most vulnerable to competition?"
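Continuing the sketch, the ranked table could be assembled along the following lines, reusing the hypothetical vulnerability_count() helper above; attributes and targets stand in for your own attribute list and per-attribute breaks.

```python
# Rank every attribute by its Cell A count; the top row is the biggest exposure.
import pandas as pd

def vulnerability_ranking(df, attributes, targets):
    rows = []
    for attr in attributes:
        cell_a = vulnerability_count(df, attr, attr_target=targets[attr])
        rows.append({"attribute": attr, "vulnerable_customers": cell_a})
    ranked = pd.DataFrame(rows).sort_values("vulnerable_customers", ascending=False)
    return ranked.reset_index(drop=True)
```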
Vulnerability can also be expressed as a percentage, but the correct base in most cases is the number providing a valid overall rating in the study as a whole, not the crosstabulation total. In this way, the percentage reflects the degree to which a problem exists in the total market: other things equal, an attribute that affects 100% of customers is of greater concern than an attribute that affects only 10% of customers. In our example, 400 customers provided valid overall ratings, but only 300 were able to rate both the customer service department and overall satisfaction (some respondents were not able to rate the customer service department, probably because they had no experience with it). The customer service department is a factor, then, for three-fourths of your customers. To include this effect, the vulnerability percentage should be calculated as 80/400 = 20%. If for some reason you would like to factor out this effect (e.g., assuming that all customers will eventually be affected), the calculation is 80/300 = 27%.
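Using the figures quoted above, the two bases work out as follows (a plain arithmetic sketch; only the variable names are invented):

```python
vulnerable     = 80    # Cell A
valid_overall  = 400   # customers with a valid overall rating in the study
rated_both     = 300   # customers able to rate both the attribute and overall

market_vulnerability   = vulnerable / valid_overall   # 80/400 = 20%
affected_vulnerability = vulnerable / rated_both      # 80/300 = 27% (extent factored out)
```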
The components of vulnerability
Vulnerability analysis is a deceptively simple concept. On its face, it is the result of an artless crosstabulation. On closer inspection, vulnerability accounts for three separable and quantifiable components:
1. Extent or proportion affected. This component is the percentage of customers able to rate (i.e., have experience with or interest in) an attribute. It represents the pervasiveness of this attribute among customers. The grand total for each attribute's crosstabulation is limited by the proportion of customers affected. If other things are equal, the higher the percentage of customers affected, the higher the vulnerability. For our example, this component equals 300/400, or 75%.
2. Severity of the problem, or attribute satisfaction level. This percentage determines how many customers make up the total for each row. For example, the crosstabulation total times the less satisfied percentage determines the first row total. If other things are equal, an attribute with lower satisfaction levels produces a larger vulnerability. For our example, this component is 120/300, or 40%.
3. Impact, or strength of association with the dependent measure. The stronger the link with the dependent measure, the more likely customers who are less satisfied with an attribute will also be less satisfied overall. This component can be represented by the chi-square statistic or the correlation coefficient. It can also be calculated as the percentage less satisfied overall among customers less satisfied with an attribute. If other things are equal, the stronger the association with the dependent measure, the larger the vulnerability. For our example, 80/120 = 67%. (Note: if the overall rating and this attribute were not related, i.e., statistically independent, we would expect a Cell A frequency of only about 44, not 80.) A worked sketch of all three components follows this list.
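As that worked sketch, the three components multiply out to the market-level vulnerability percentage. The counts come from the example above (Cells A through D of 80, 40, 29, and 151; 400 valid overall ratings); only the variable names are invented.

```python
n_study      = 400           # valid overall ratings in the study
n_rated_both = 300           # rated both the attribute and overall satisfaction
row_low      = 120           # less satisfied with the attribute (Cells A + B)
col_low      = 109           # less satisfied overall (Cells A + C)
cell_a       = 80            # less satisfied on both (Cell A)

extent   = n_rated_both / n_study    # 0.75 - proportion of customers affected
severity = row_low / n_rated_both    # 0.40 - attribute dissatisfaction level
impact   = cell_a / row_low          # 0.67 - link with the overall rating

vulnerability_pct = extent * severity * impact       # 0.20, i.e., 80/400

# If the attribute and the overall rating were statistically independent,
# the expected Cell A frequency would be only about 44:
expected_cell_a = row_low * col_low / n_rated_both   # 120 * 109 / 300 = 43.6
```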
All three of these components combine to determine vulnerability. Breaking vulnerability into its components leads to two common-sense recommendations:
- Focus on aspects that represent severe problems, have an impact on the dependent measure, and affect a large part of your customer base.
- Pay less attention to those aspects where you are doing well, the link with the dependent measure is weak, and only a small percentage of customers are affected.
The output from vulnerability analysis, in addition to a ranking of the attributes, can also show how vulnerability separates into its three components - extent, severity, and impact. We can determine the major contributor toward a large vulnerability by comparing the relative magnitudes of the components with those for other attributes. Assume we have two attributes that affect all customers (extent = 100%). An attribute with mediocre satisfaction ratings but high impact may have the same vulnerability as an attribute with feeble satisfaction ratings and low impact. Although the vulnerability to competitors is comparable, the prescription for action may be quite different.
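A small hypothetical comparison makes the point concrete; the component values below are invented for illustration.

```python
# Both attributes affect every customer (extent = 1.0), yet they reach the
# same 18% vulnerability by different routes.
attr_x = 1.00 * 0.30 * 0.60   # mediocre satisfaction (moderate severity), high impact
attr_y = 1.00 * 0.60 * 0.30   # feeble satisfaction (high severity), low impact
# attr_y calls for broad performance improvement, while attr_x's rarer
# dissatisfaction does more damage to the overall rating when it occurs.
```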
Critics at this point may say this approach is fine as far as it goes. But since each attribute is considered individually, we have not revealed any of the interplay or relationships that are sure to exist among the attributes. These important associations can be discovered using other multivariate techniques such as variable clustering, or factor analysis followed by a multiple regression on the derived factors. An interesting and often fruitful approach is to conduct a TURF analysis on vulnerability data. This analysis shows the groups of two, three, or more attributes that most efficiently and parsimoniously minimize your vulnerability to competition.
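One way such a TURF analysis might be sketched, under the assumption that you have already recorded, for each customer, the set of attributes on which he or she falls into Cell A (the data structure and function name here are hypothetical):

```python
# Exhaustive search for the set of attributes that reaches the largest
# unduplicated group of vulnerable customers.
from itertools import combinations

def best_turf_set(vulnerable_on, attributes, set_size=2):
    """vulnerable_on: dict mapping customer id -> set of attributes (Cell A flags)."""
    best_combo, best_reach = None, -1
    for combo in combinations(attributes, set_size):
        # A customer is "reached" if improving any attribute in the combo helps them.
        reach = sum(1 for attrs in vulnerable_on.values() if attrs & set(combo))
        if reach > best_reach:
            best_combo, best_reach = combo, reach
    return best_combo, best_reach
```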
Summary
Vulnerability analysis gives the marketer one number or percentage for each attribute to determine where to invest or cut back. The same analysis can be conducted on your competitors to discover their weaknesses. Vulnerability analysis goes beyond standard importance/performance plots by 1) making attributes with widely differing impact (importance) and severity (performance) directly comparable, 2) explicitly accounting for varying base sizes across attributes, 3) eliminating the restrictive assumptions required by other multivariate methods (e.g., regression/correlation, partial least squares, LISREL), and, most importantly, 4) yielding more intuitively appealing results.
As stated earlier, vulnerability analysis seems deceptively simple - which may be enough reason for some practitioners to reject it. This simplicity also represents its greatest strength - it is easy to communicate the results to non-researchers. Like any other technique, it may have shortcomings. Still, vulnerability analysis can be an important piece when you are trying to solve the customer satisfaction puzzle.