Outlasting the fads
Editor's note: Doug Berdie is president of Consumer Review Systems.
In the past year, I’ve reviewed accumulated knowledge obtained over 45+ years as a customer satisfaction researcher/consultant. This review covered reams of data collected from small to Fortune 50 companies across dozens of industries, from business customers as well as customers of government and non-profit services, from customers in the supply chain as well as end-use consumers. Fads related to customer satisfaction have come and gone during that time, and some persist even after being discredited by solid research analysis. The key insights I’ve obtained are summarized below.
Proven techniques exist for assessing how well your customer satisfaction measurement system serves your organization. A good starting place is to ask the following questions – each of which points to uses of time that could be better devoted to actually improving customer service and satisfaction:
- Do your employees spend time devising ways to artificially inflate their customers’ satisfaction ratings?
- Has your organization created integrity-check systems to prevent fraudulent ratings (e.g., verifying e-mail addresses of raters to ensure the ratings come from real customers)?
- Have your statisticians cornered you into using analytics which are so hard to understand that no one pays much attention to your survey results?
- Does your organization have a general sense of what needs attention (e.g., faster payment of invoices by customers) without having the actual details needed for improvement (e.g., ensuring that the purchase-order number appears clearly on all billing statements)?
- Does your organization know what your competitors are doing effectively to address the types of problems voiced by your customers?
The single most important reason an organization should collect customer feedback is to stimulate the company to improve its products/services so they best meet the needs of its customers. Doing so will lead to increased satisfaction and enduring loyalty from customers. I highlighted in a 2016 Quirk’s article (“A better use of your time”) how the Brock White construction supply company did this successfully for over 20 years. Merely tracking numbers (such as the once-touted Net Promoter Score) to report on monthly/quarterly/yearly scorecards does not provide effective insight into the details of organizational performance that need improvement. Further, doing so can deflect attention from more meaningful activities.
Many things affect whether a customer remains loyal over time and whether they pick an organization for specific purchases. The organization’s past performance is only one of those factors (and not necessarily the most important). Stocking the exact product/service desired; how long it takes to get that product or service; price compared to other options; and speed and ease of getting an order placed are just a few obvious factors that, if unacceptable, will prevent orders – regardless of past customer satisfaction.
There is usually a strong relationship between a customer’s overall satisfaction rating and that customer’s “likelihood to buy again” rating. For example, in a typical large-scale, nationwide program, customers averaging 4.5 on overall satisfaction (on a five-point scale) typically also gave a “definitely will buy again” response. But, as noted below, none of these general questions predicts purchase behavior very well.
“Willingness to recommend,” “overall satisfaction,” “likelihood to buy again” and other similar “one-number” indices do not effectively predict sales, repurchase or other profitability results. One reason is the effect of the above-mentioned, specific-purchase-related factors that play major roles in buying decisions. In a three-year customer satisfaction program for an equipment manufacturer, we found that customers giving a 10 or 9 (i.e., top-two box) response to a 10-point overall satisfaction question actually repurchased at lower levels than the average across the entire customer base. This occurred for both post-sales and post-service surveys, and the same pattern appeared in many other surveys. Common-sense overall measures perform no better than the standard ones. I created a “totally satisfied” single number (called TOTSAT) under which customers who were either satisfied or very satisfied with the aspects of their experience they deemed most important were labeled totally satisfied, and anything less was labeled not totally satisfied. The correlation of this measure to subsequent sales (as reported in the 2016 Quirk’s article cited above) was so close to 0.00 it amazed me. But it was instructive! In fact, numerous studies have shown, for example, that stores achieving stronger financial results tend to be those that earn weaker customer satisfaction ratings. So it’s foolish to assume that raising scores will, necessarily, improve financial results.
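To make the TOTSAT idea concrete, here is a minimal sketch of how such an index might be computed and checked against later sales. The data, column names and the 4-or-5 cutoff are hypothetical, and the pandas approach is an illustration of the concept, not the tooling used in the original program.

```python
import pandas as pd

# Hypothetical data: one row per customer per rated attribute.
# Attribute names, ratings and sales figures are illustrative only.
ratings = pd.DataFrame({
    "customer_id":    [1, 1, 2, 2, 3, 3],
    "attribute":      ["ordering", "delivery"] * 3,
    "most_important": [True, False, True, True, False, True],
    "rating":         [5, 3, 4, 4, 5, 2],  # 1-5; 4 = satisfied, 5 = very satisfied
})

# TOTSAT: a customer is "totally satisfied" only if every attribute they
# deemed most important was rated satisfied (4) or very satisfied (5).
totsat = (
    ratings[ratings["most_important"]]
    .assign(sat=lambda d: d["rating"] >= 4)
    .groupby("customer_id")["sat"]
    .all()
    .astype(int)
    .rename("totsat")
)

# Hypothetical subsequent-period sales for the same customers.
sales = pd.Series({1: 1200.0, 2: 450.0, 3: 980.0}, name="sales")

# Pearson correlation of the one-number index with later sales. In the
# program described above, the real-world figure came out essentially 0.00.
print(totsat.corr(sales))
```

The same frame makes it easy to see why a single number hides so much: the attribute-level ratings that feed TOTSAT are also the level at which improvement actions can actually be identified.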
Actual recommendation behavior predicts future purchases better than stated willingness to recommend. This is not surprising because what people have actually done is generally a good predictor of future behavior. Hence, if a “recommend” question is to be asked, it should be asked in the form of, “Over the past [some given period of time], how many times have you actually recommended [name of organization] to friends or colleagues?” A related research finding is that people often have a fuzzy, inaccurate notion of what really influences their behavior, so merely asking why they bought or did something does not always yield accurate data.
Customers who are very satisfied often do differ from those who are very dissatisfied – i.e., the extremes do make a difference in future purchases and loyalty. For example, our finding in one study was quite typical: those who were very satisfied showed purchase increases of 39% to 100% in the following three months, versus decreases of 11% to 54% among those who were very dissatisfied. A difference of approximately the same size existed in each of the four consecutive quarters in which it was examined. In another typical study, quarterly sales (compared to the previous quarter) for dealers with low end-user satisfaction scores were 99% of the previous quarter, compared to 139% for dealers with the highest satisfaction among end users.
Dealers who participate in manufacturer-sponsored customer feedback programs tend to see better sales increases and satisfaction scores than those who do not participate. Just “doing surveys” is not likely the key driver of this difference; my experience has been that it is the top dealers who believe in participating in many manufacturer-sponsored programs, and their sales increases are likely a result of that overall enthusiasm. Even though they started at about the same level, customers of dealers enrolled in a nationwide manufacturer’s program showed a 71% “VS+S” (i.e., top-two box) score compared to 29% for customers of dealers not enrolled. Again, it’s tempting to conclude that enrollment by itself caused this difference, but that would be speculative given the overall differences in attitudes/behaviors of the two sets of dealers.
Many (if not most) customer satisfaction measurement systems no longer follow the basic rules needed to obtain reliable, valid data. When these programs first started, great attention was paid to: measuring a representative group of customers; using extreme care to word questions in an unbiased manner; obtaining high response rates; and using careful analytical techniques. At one point, contacting masses of customers who’d sent in warranty cards became the vogue (and was inexpensive); then came flooding surveys over the internet to most (if not all) customers who could be reached, followed by recruiting customer panels. Not much attention was paid to demonstrating that those being surveyed were truly representative of a defined customer base. Compounding this shortcoming, minimal attention was paid to obtaining a representative response rate and almost no effort was made to find out why people were not responding. Given this poor statistical practice, citing data precision numbers (such as saying, “Overall satisfaction is 4.1, plus/minus .2”) is not only incorrect but also misleads decision-makers.
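For illustration, here is a minimal sketch of the textbook calculation behind a claim like “plus/minus .2.” The numbers are invented; the point is what the formula assumes rather than what it prints.

```python
import math

def nominal_margin_of_error(std_dev: float, n: int, z: float = 1.96) -> float:
    """Textbook 95% margin of error for a mean, the source of claims
    like 'plus/minus .2.' It is valid only when the n respondents are a
    simple random sample of the customer base and nonresponse is ignorable.
    """
    return z * std_dev / math.sqrt(n)

# Invented numbers: a 4.1 average on a five-point scale, standard
# deviation 0.9, 80 responses.
print(round(nominal_margin_of_error(0.9, 80), 2))  # about 0.2

# The formula quantifies sampling error only. It says nothing about the
# bias introduced when the customers who respond differ from those who
# never answer, so quoting it for a self-selected survey is misleading.
```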
Once organizations realize that simple “overall satisfaction” numbers (in whatever guise) are not particularly useful, it frees them up to put customer feedback to effective use. One simple, proven way to improve satisfaction and organizational effectiveness is to contact dissatisfied customers and ask them to detail the specifics of their dissatisfaction and to suggest ways the system could be changed to serve them better in the future. Another effective method is to abandon the one-question survey approach and return to a survey with a limited number of more detailed satisfaction questions (e.g., satisfaction with the ease of placing an order). Again, when dissatisfied customers are identified by this method, they can be recontacted to get details and suggested improvements. And, as described in the Quirk’s article mentioned above, assembling “customer councils” for in-person discussions is an extremely effective way to maximize organizational performance and, by extension, customer satisfaction.
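As a simple illustration of the recontact approach, the sketch below flags respondents who rated any of a handful of detailed questions in the bottom two boxes so they can be called back for specifics and suggested improvements. The survey questions, scale and column names are hypothetical.

```python
import pandas as pd

# Hypothetical responses to a short survey with a few detailed questions,
# each on a 1-5 scale. Question names and data are illustrative only.
responses = pd.DataFrame({
    "customer_id":      [101, 102, 103, 104],
    "ease_of_ordering": [5, 2, 4, 3],
    "delivery_speed":   [4, 3, 5, 2],
    "invoice_accuracy": [5, 4, 4, 1],
})
question_cols = ["ease_of_ordering", "delivery_speed", "invoice_accuracy"]

# Flag anyone who rated any aspect in the bottom two boxes (1 or 2) as a
# candidate for a follow-up call to gather specifics and suggested fixes.
recontact = responses[(responses[question_cols] <= 2).any(axis=1)]
print(recontact["customer_id"].tolist())  # -> [102, 104]
```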
Key for success
Staying in touch with customers so that ways to serve them better can be uncovered meets everyone’s needs – and is a key to organizational success. However, merely implementing overly simplistic customer satisfaction surveys, such as those with one question, has a poor history of meeting that objective. The more sincere methods used by successful organizations do, indeed, find ways to best meet customer needs, and those methods should be copied by other organizations.