A better use of your time
Editor's note: Doug Berdie is president of Consumer Review Systems, a Minneapolis research firm.
Measuring customer satisfaction has been an organizational imperative since the 1980s. Over that same period, the acceptance of a false assertion by management executives and research practitioners has limited the value of surveying customers. The false assertion is: If customer satisfaction scores increase, then revenue and profits will necessarily increase.
This assertion is false because it concerns scores, not actual business activity. Strict adherence to this false assertion has blinded many from seeing, and focusing on, two true assertions:
If activities that better meet customer needs are implemented, then a positive impact on revenue and profits will usually result.
If customer needs are better met, then customer satisfaction scores will usually increase.
Let’s examine the consequences that flow from accepting the false assertion and not focusing on the two true assertions.
Truth: Increased customer satisfaction scores do not necessarily result in increased business financial results
Like many myths, the false assertion has been repeated so often that most people unquestioningly accept it as fact. The purported chain of events that underlies the assertion (satisfaction scores » repurchase » loyalty » positive financial results) seemed to make sense – despite a lack of data to support it. Now we are able to examine the extent to which satisfaction scores actually predict financial results. And the truth is that customer satisfaction scores do not reliably predict bottom-line financial results. They are nothing more than scores, and many things that influence scores are unrelated to the actual experience customers receive.
A solid business-to-business example comes from Brock White Company LLC, a North American construction products/materials distributor, which recently made available 20 years of customer satisfaction data coupled with the most recent 12 years of revenue data. The detailed analyses allowed for company-wide, branch-level and individual customer examinations.
These analyses revealed:
- At a corporate level, 20 analyses testing satisfaction scores as a leading, concurrent or lagging indicator of revenue yielded correlations ranging from -.065 to .067 (with 11 of the correlations negative) – so close to zero that little doubt can exist as to the lack of association. (A sketch of this type of lagged-correlation check appears after this list.)
- At a branch level, the more than 500 correlations, taken as a whole, show no relationship between satisfaction scores and revenue or year-to-year revenue change.
- At an individual customer level, the revenue generated by customers who were surveyed over the measurement history does not correlate with their customer satisfaction scores.
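For readers who want to run this kind of check on their own data, the leading/concurrent/lagging test can be sketched in a few lines of Python. This is a minimal sketch, assuming annual data and hypothetical column names and values; it is not the study's actual analysis code.

```python
# Minimal sketch: test satisfaction scores as a leading, concurrent
# or lagging indicator of revenue. The annual series and column
# names below are illustrative assumptions, not the study's data.
import pandas as pd

df = pd.DataFrame({
    "year":         range(2004, 2016),
    "satisfaction": [8.1, 8.3, 8.0, 8.4, 8.2, 8.5, 8.3, 8.6, 8.4, 8.5, 8.7, 8.6],
    "revenue_mm":   [102, 99, 110, 104, 95, 98, 107, 111, 103, 108, 105, 112],
})

# lag > 0 pairs this year's satisfaction with revenue `lag` years later
# (satisfaction as a leading indicator); lag < 0 tests it as a lagging
# indicator; lag == 0 is the concurrent correlation.
for lag in range(-3, 4):
    r = df["satisfaction"].corr(df["revenue_mm"].shift(-lag))
    print(f"lag {lag:+d}: r = {r:+.3f}")
```

Correlations that hover near zero at every lag, as they did in the study, mean the satisfaction series carries essentially no predictive information about the revenue series.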
In fact, of the hundreds of analyses conducted, about half produced negative correlations – astonishing, given the “common sense” belief that satisfaction scores would lead to positive correlations with revenue.
Table 1 shows the results of the 515 correlations conducted to see if branch-level satisfaction scores correlate with branch-level revenue or branch-level year-to-year revenue changes. The right column (summarizing all correlations) approximates a normal curve, which is what one would expect in a sampling distribution where no correlation between satisfaction scores and revenue exists. The second through fifth columns show that, no matter how the data were examined, the results also approximate a normal curve distribution.
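That bell-shaped spread of correlations is exactly what chance alone produces. A small simulation makes the point; the series lengths below are illustrative assumptions, not the study's actual branch counts.

```python
# Simulate the null case: when satisfaction and revenue are unrelated,
# the sampling distribution of their correlation centers on zero and
# is roughly bell-shaped. Sample sizes here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_trials = 12, 515
corrs = [np.corrcoef(rng.normal(size=n_years),
                     rng.normal(size=n_years))[0, 1]
         for _ in range(n_trials)]

# Crude text histogram of the 515 simulated correlations.
hist, edges = np.histogram(corrs, bins=np.arange(-1, 1.01, 0.25))
for lo, count in zip(edges, hist):
    print(f"{lo:+.2f} to {lo + 0.25:+.2f}: {'#' * (count // 5)}")
```

If real branch-level scores drove revenue, the observed distribution would be shifted toward positive correlations rather than matching this chance pattern.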
These low correlations were not simply an artifact of how an “overall customer satisfaction” measure was worded or calculated, of the number of scale points used or of other technical issues. In addition to looking at data generated from a single, “overall satisfaction” question, the analyses took into account a “TOTSAT” number derived for each individual customer that indicated whether that customer was satisfied with the aspects of the experience deemed most important to that customer. On its face, TOTSAT should produce a more robust measure of overall satisfaction than a single question and, in fact, one reason this analysis was conducted was to prove that it does. But it does not. The TOTSAT scores are as poor a predictor of revenue as are the scores from the single “overall satisfaction” question.
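The TOTSAT derivation itself is not published here, so the sketch below is only one plausible reading of such a composite: the share of the attributes a customer flagged as most important on which that customer gave a satisfied rating. Every name and threshold in it is an assumption.

```python
# Hypothetical "TOTSAT"-style composite: the share of a customer's
# self-declared most-important attributes that received a satisfied
# rating (assumed here to be >= 4 on a 5-point scale). The study's
# actual derivation was not published; this is an illustration only.

def totsat(ratings: dict[str, int], important: set[str],
           satisfied_at: int = 4) -> float | None:
    """Share of a customer's important attributes rated as satisfied."""
    rated = [a for a in important if a in ratings]
    if not rated:
        return None  # customer rated none of their important attributes
    return sum(ratings[a] >= satisfied_at for a in rated) / len(rated)

# Example: a customer rates five attributes and flags two as most important.
ratings = {"delivery_time": 3, "order_accuracy": 5,
           "pricing": 4, "sales_rep": 5, "invoicing": 2}
important = {"delivery_time", "order_accuracy"}
print(totsat(ratings, important))  # 0.5 -> satisfied on half of what matters most
```

However such a composite is built, the finding stands: weighting satisfaction by what each customer says matters most did not make the scores any better at predicting revenue.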
If this were the only study that disproved the supposed predictive relationship between satisfaction scores and revenue, that would be one thing. But in a May 2015 Quirk’s e-newsletter article (“Why conventional customer surveys create a false sense of security”), David Trice indicates, “Most C-suites haven’t seen a correlation between the data from these [customer metrics] and financial lift,” and a review of marketing and research literature shows this lack of a link to be the rule rather than the exception.
Extensive purchase criteria and win-loss research conducted over many years shows that customer satisfaction is only one of many factors influencing actual purchase decisions. Others include: 1) product/service availability; 2) how well the product/service is deemed to meet the immediate need; 3) other options; 4) price; and 5) how well the product/service can be supported by the vendor and the prospective client’s organization. In addition, highly cyclical industries have such severe downturns and upswings that either no one is buying or everyone is – regardless of satisfaction levels. Previous experience (i.e., “customer satisfaction”) with the vendor is only one of many factors. So, would one really expect to make the sale in a situation where customer satisfaction is high but the product the customer needs at a given time is not deemed to meet needs as well as a competitive product that can be delivered more quickly at a lower price?
Negative consequences of failing to acknowledge the truth
The key negative consequence of ignoring the truth is that organizations stress improving scores – at the expense of improving products and services. This is an example of organizations reinforcing the wrong thing and not, therefore, obtaining the desired effects. Companies that continue to believe customer satisfaction scores predict key financial measures put these scores on their balanced scorecards. And, because they want the scores to increase, they charge their organization to “make the scores go up.” Management, then, compensates employees based on raising the scores.
Basic psychology has, or should have, taught us that when one’s compensation is tied to a score, one’s focus is on how to influence that score favorably – by any means possible – and many of the ways to influence a score have nothing to do with improving the customer experience.
Hence, the history of customer experience measurement is rife with creative ways employees and channel partners manipulate customer satisfaction scores. A small sampling includes: cajoling customers into giving high scores (either by providing extras or by badgering customers with explanations of how important a high score is to the employee or channel partner); employees and channel partners intercepting survey invitations and completing the surveys themselves; deleting from sample lists the customers believed likely to give low ratings; doing “pre-surveys” of customers and deleting from the sample list those who feel negatively; and countless other creative ways to raise scores. Some employees go so far as to encourage friends and relatives to complete surveys so the scores can be artificially inflated.
Related to the above negative consequence is the costly cottage industry that has arisen to identify “cheating” and remediate its effects. Such tactics (all of which take time, focus and resources away from actually improving products and services) include: contacting customers to verify they actually completed a survey; implementing complex analytical algorithms to eliminate surveys where response patterns seem “suspicious”; assessing how long online respondents took to complete the survey (on the assumption that people who did not take sufficiently long must not have given well-considered responses); reviewing IP addresses to check for company-owned computers that may have been used to complete the survey; and many other such activities. And, because compensation is tied to scores, entire subterranean cultures have sprung up within some companies to ensure high scores. Employees review each survey result and find reason to contest negative ratings so they can be discarded on the basis of the customer being “confused” when completing the survey or having not spent enough time considering the question to give a “thoughtful” response. Much time is wasted dealing with appeals from employees and channel partners who contest the ratings they received – leaving less time to deal with actual product and service issues the ratings may have revealed.
A second major negative consequence of ignoring the truth is that many organizations have limited the number of questions on their customer satisfaction surveys. An extreme example is asking only one question (an “overall satisfaction” question or some type of “recommend” question). These one-question surveys rest on the mistaken belief that the resulting data predict financial outcomes. The move toward shortened surveys meant that the detail needed to identify where quality-improvement efforts should be focused was lost. So, because satisfaction scores do not, in fact, predict financial outcomes, these short surveys provide virtually no value at all.
Positive consequences of acknowledging the truth
A primary positive consequence of acknowledging the truth is the freedom to stop compensating employees and channel partners on scores and cut out the waste in resources that tags along with that useless activity. Removing the customer satisfaction number (in whatever guise it takes) from the balanced scorecard will communicate to employees and channel partners that instead of compensating based on scores, compensation will be tied to actions that benefit the organization and make it easier for customers to buy. Let’s look at how this works.
Years ago, a client decided to compensate employees and channel partners based on a set of actions the client believed would elevate the quality of the customer experience. These actions included: 1) participating in a site-specific customer survey to identify site strengths and areas for improvement; 2) site managers reviewing the survey results with a territory manager; 3) providing sites with a best practices toolkit for managers so they could see what had previously worked well at other sites to solve similar problems; 4) managers developing a site-specific action plan to address concerns customers had expressed (as well as building upon strengths customers noted); 5) having the action plan reviewed and approved by their territory manager and their regional manager; 6) implementing the action plan; and 7) demonstrating to the territory manager the action plan had been implemented. The territory manager was compensated on the number of retail sites that completed the seven steps.
The above process contained no element rewarding scores. It was based on the assumption that the most a site manager can do to improve business results is to implement actions – grounded in customer-identified issues – that make sense to everyone in the organization. The motivation for influencing scores disappeared and the focus was directed toward defining, implementing and completing actions.
Demonstrating that actions were taken was relatively easy. One site that had been criticized for a lack of parking produced a signed agreement from a neighboring business indicating space had been leased from the neighbor so the client’s employees could park there – opening up more on-site spaces for customers. Another site had been criticized for having a congested driveway, making it difficult to enter and leave the site, which resulted in customers eyeing the congestion level before deciding whether to enter at all. Follow-up discussions with a few customers, coupled with an on-site walk-through by the site manager and the territory manager, revealed POS placards and other physical obstructions that could be relocated to ease traffic flow. Before-and-after photographs demonstrated that these changes improved the traffic layout and reduced the number of vehicles backed up in the driveway.
The chart in Figure 1 shows stunning results from the above-mentioned project. It shows the difference in year-to-year sales (before and after the program’s activities) between those sites that participated fully in the program and those that did not. Participating sites showed a 20-point sales turnaround gain compared to a 12-point gain for nonparticipating sites. Given the controls used, and the fact that many thousands of sites were involved in the analysis, one can conclude that much, if not all, of the eight-point difference is attributable to the actual design and implementation of the plans required to reap the compensation benefit.
Two unanticipated positive outcomes resulted from this practice of rewarding completed customer service improvement plans. First, employee turnover rates at the participating sites actually decreased by three points while increasing by five points for nonparticipants – an overall difference of eight points. And, second, participating sites experienced a 22 percent gain in customer satisfaction mean scores on the following year’s survey – compared to a 1 percent decline among nonparticipants. As there was no compensation tied to scores, there is little reason to believe participating sites used devious tactics to obtain higher scores. Rather, the positive actions that were implemented drove these higher satisfaction levels.
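The before/after comparisons above reduce to simple difference-in-differences arithmetic. A minimal sketch, with the figures reported above plugged in (the function name is ours, not the study's):

```python
# Difference-in-differences: the change for participating sites minus
# the change for nonparticipating sites over the same period.
def diff_in_diff(participant_change: float, nonparticipant_change: float) -> float:
    return participant_change - nonparticipant_change

print(diff_in_diff(20, 12))   #  8: sales-turnaround advantage, in points
print(diff_in_diff(-3, 5))    # -8: turnover fell 3 pts vs. a 5-pt rise
print(diff_in_diff(22, -1))   # 23: spread in satisfaction-score change, in pct pts
```

Comparing participants against nonparticipants over the same period, rather than simply before against after, is what lets one attribute the gap to the program rather than to market-wide trends.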
The successful practice described above is similar to the management by objectives practice that organizations have used in varying degrees over the years. Rewarding actual behavior is key – not scores. It is odd that so few organizations are willing to move in this direction with regard to customer experience measurement.
Another interesting positive consequence of not compensating people based on satisfaction scores is that one can now actually instruct employees and channel partners to ask customers at the end of a transaction if there is anything else that can be done to maximize their satisfaction. Asking such a question is no longer seen as an attempt to influence a score but, rather, as a sincere, immediate, positive effort on the employee’s part to be certain everything is okay.
Return to the more reasonable approach
Since the advent of customer satisfaction surveys, the most common criticism has been that they do not obtain enough detailed information to drive action because the specific actions needed are not clear from the survey results. Once the one-question approach is abandoned, organizations are free to return to the more reasonable approach, in which an overall satisfaction question of some type is asked along with questions about broad, general categories of the customer experience – things such as the purchase process; ordering and delivery processes; product/service performance; invoicing; and post-purchase customer service.
The survey should also ask for more detailed satisfaction ratings about the elements within the broad categories listed above. So, for example, there may be questions that obtain satisfaction ratings about various aspects of the purchase process, such as:
- how well the sales representative understood the customer’s needs;
- how knowledgeable the sales representative was about options to meet those needs;
- the pricing options available and how clearly they were explained; and
- the speed of response to questions the potential customer may have had.
The overall logic of this survey type starts at a very high level (e.g., “overall satisfaction”) and works its way down to a somewhat more specific level (satisfaction with categories of the customer’s experience), moving to a yet more specific level (inquiring about elements within those categories of experience). Rarely do customer satisfaction surveys obtain more detail than this, though some attempt to do so with limited open-ended questioning.
The companies that derive the most benefit from customer satisfaction surveys do so by using the survey results to drive follow-up interaction with their customers to identify:
- exactly what the survey data mean;
- the impacts on customers of uncovered issues;
- customer suggestions for improvements;
- knowledge-sharing as to what industry competitors have done to address expressed concerns; and
- changes coming to customer environments that will influence these issues and ways to get ahead of those changes.
Brock White capitalizes on its customer satisfaction feedback in just this way. Over 20 years ago, Brock White formed its Contractors Advisory Council (CAC), consisting of about 15 contractor customers who meet each year for a four-hour session to discuss survey feedback and other related issues. Contractors are invited to participate for two-year terms on an overlapping basis, with new contractors added as terms expire. This practice ensures new and varying types of input. Because customers arrive for various parts of a two-to-three-day period, the environment is relaxed and open, and opportunities arise to discuss the business relationship outside of the formal four-hour session.
At the four-hour session, Brock White shares its customer satisfaction results and contractors break into small groups to discuss those results. They present the results of their discussion to the reassembled entire CAC for further discussion. The results are impressive in terms of depth of knowledge obtained and clear direction as to what Brock White must do to maximize how well it meets its customers’ needs. Two examples will illustrate this.
One customer survey showed lower-than-desired scores on the general “ordering and delivery” category and on the somewhat more specific elements of that category: “the amount of time it takes to receive orders [delivered by truck]” and “the extent to which deliveries contain the products and materials you ordered.”
The CAC discussion revealed that the reason some contractors were disappointed had nothing to do with lead times or missing delivery dates but with the fact that some deliveries were made to the wrong area within a job site, so that those who were awaiting them did not know the delivery had been made. The consequence to contractors was a work slowdown that added expense to their job. They suggested more clarity in the communication between themselves and Brock White to resolve this issue and indicated exactly how to make those clarifications to ensure truck drivers know precisely where on the job site deliveries are to be made and how to notify them when those deliveries have been made.
There would have been no way to get to a successful resolution of that problem from the survey results alone. Similarly, the discussion about why the ratings of “the extent to which deliveries contain the products and materials you ordered” were suboptimal revealed that 1) in some cases customers had not understood that a portion of the original order had been backordered and would arrive later; and 2) paperwork that accompanied deliveries was not always formatted in a way that made it easy to compare to the original order forms and invoices. CAC members pointed out the exact areas of confusion and the steps needed to clarify them in the future – along with the insightful feedback that when reconciliation was difficult, invoice payment was delayed. Armed with this knowledge, Brock White was easily able to see how to resolve the issue and did so quickly – to everyone’s satisfaction.
Thus, collecting customer satisfaction feedback allows organizations to drive directed follow-up discussions so the exact nature of existing issues is revealed along with specific, customer-endorsed solutions for addressing them. There simply is no way a survey itself can go to that level of detail given the need to cover a wide array of topics within a time frame acceptable to survey respondents.
Free to abandon counterproductive practices
Once organizations realize that customer satisfaction scores do not predict financial outcomes, they are free to abandon certain counterproductive practices tied to that belief. And, by acknowledging the truth of the other two assertions above related to customer experience, they are able to take several immediate steps that are most likely to positively affect business results. They can:
- remove from the balanced scorecard single numbers obtained via customer surveys;
- stop compensating employees on customer survey scores;
- ask enough questions (at varying levels of detail) in customer surveys to provide a basis for digging deeper in follow-up meetings with customers;
- establish customer advisory councils, or maximize the utility of existing ones, to review survey results and help pinpoint important improvements; and
- once the issues and opportunities have been clarified by these follow-up discussions, implement a program whereby employees and channel partners are rewarded for designing and implementing approved quality improvement programs.