Editor’s note: Dennis Murphy is vice president of the technology practice at Directions Research, Cincinnati. Chris Goodwin is a vice president at Directions Research. The next installments will appear in the August and October issues.
This is the first installment of a three-part look at customer satisfaction. Part two, in the August issue, will focus on the expectations we should establish for customer satisfaction. Part three, in October, will examine ways to execute a revamped strategy.
Obituary
February 29, 2011
With little notice and less fanfare, the discipline of Customer Satisfaction was laid to rest today. Known as CSat by friend and foe alike, mourners expressed more relief than regret at CSat’s passing.
Nonetheless, there is some cause for melancholy. CSat’s era began in the mid-twentieth century with high hopes and a simple promise: to assist organizations in better pleasing their customers. Had CSat remained true to this course, perhaps today he would be alive, healthy and celebrated.
CSat entered the world as bright-eyed and aspirational as any discipline yet departed without fanfare. What were his sins? What paths misleading? What theories vapid? What applications misguided? Were there associate villains? Who? Are we dealing with evil or stupidity? (Frankly, does it matter?) A journey through the troubled times and perilous pitfalls of CSat can’t help but be illuminating for whoever walks in his fading footsteps.
Fellow discipline Brand Tracking’s eulogy summed it up all too well.
“Never has a market intelligence discipline been so warmly welcomed. After all, the notion that pleasing customers is a good business practice hardly evoked controversy. None of us ever questioned CSat’s intentions. If only his passion for enabling greater customer satisfaction could have extended to the execution of customer satisfaction work, how different things might have been. But miscues upon misconceptions, failed promises upon false expectations, and delusion upon illusion clouded his last days.”
While the above obituary may be tongue-in-cheek, the premise is not. Customer satisfaction has outlived its usefulness - at least in its current incarnation. That’s a shame, given its profoundly valuable charter “to identify and resolve customer issues.” Born to replace the “squeaky wheel” approach to solving customer problems, CSat promised a more disciplined process for managing customer issues. If that had remained its focus, this article would not have been written. Instead, CSat moved quickly beyond its initial charter into areas and in ways that not only proved ineffective, but also undermined CSat’s ability to deliver on its core promise.
Small leap
The problems began with a small leap: from managing customer issues to enhanced analysis (i.e., understanding what the most common customer problems were). But that leap was made without first thinking through its issues and consequences. Sadly, the evolution of customer satisfaction became a devolution through these stages: insubstantial theory, haphazard execution, measurement confiscation and inappropriate application.
Insubstantial theory: Good answer seeking right question?
The advent of customer satisfaction as a fresh source of analytic information for organizations happened naturally and logically. Who wouldn’t want to explore this new resource? The indications today are, however, that early advocates weren’t pursuing tough questions about the concept itself. Here are some questions they should have been focusing on.
Can customer satisfaction drive market performance? At the very core of CSat, an unresolved foundational issue remains: can an attitude (satisfaction) powerfully and measurably drive behavior (sales)? For starters:
Who? We accord CSat a kind of universal respect that it sometimes deserves and sometimes does not. What good is it doing me if my least-profitable customers are delivering my highest satisfaction scores? Is satisfaction a useful metric in arenas where the customer has little or no choice?
What? From inception, customer satisfaction has suffered not from too few measures but from too many. Unable to determine that any one measure better predicts market performance - or even predicts it at all - we’ve been accosted by a battery of measures: overall satisfaction, recommendation and repurchase, to mention a few. When frustrated by the individual queries, we then create all kinds of composites, like the venerable 3M algorithm that sums all three - yet these still rest on correlation where causation is required.
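For illustration, here is a minimal sketch of that kind of summed composite. The 1-10 scale, the item names and the equal weighting are our assumptions for the example, not 3M’s published method.

```python
# Hypothetical summed composite of three common CSat items.
# Scale (1-10) and equal weighting are illustrative assumptions.

def composite_csat(overall, recommend, repurchase):
    """Sum three 1-10 items into a single 3-30 composite."""
    for score in (overall, recommend, repurchase):
        if not 1 <= score <= 10:
            raise ValueError("items are assumed to be on a 1-10 scale")
    return overall + recommend + repurchase

print(composite_csat(8, 9, 6))  # 23
# Even if this composite correlates with sales, correlation alone
# does not establish that satisfaction caused the sales.
```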
The issues addressed, as well as those implied, have few simple answers. One initial assumption, however, was so fundamental that it never even reached question status - but will now:
Does a rising score alone imply rising performance? A basic tenet of customer satisfaction from inception has been “bigger is better.” More often than not, however, measures are self-referential (our score this year versus our score last year) and the only thing that matters is generating an annual increase. So consider: “Your customers gave you a seven last year and an eight this year. That’s good, right? Not if your primary competitor advanced from a six to a nine!”
A competitive context is requisite for truly assessing customer satisfaction. Unfortunately, establishing one is not a trivial assignment. Being difficult, however, does not make it any less necessary.
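To spell out the arithmetic of the example above (a trivial sketch, with the scores taken from that hypothetical):

```python
# The same one-point gain reads very differently once a competitor
# enters the frame. Scores are from the hypothetical above.
ours = {"last_year": 7, "this_year": 8}
rival = {"last_year": 6, "this_year": 9}

our_gain = ours["this_year"] - ours["last_year"]      # +1: looks good alone
rival_gain = rival["this_year"] - rival["last_year"]  # +3

momentum_gap = our_gain - rival_gain                  # -2: losing ground
level_gap = ours["this_year"] - rival["this_year"]    # -1: and now behind

print(f"self-referential view: {our_gain:+d}")
print(f"competitive view: momentum {momentum_gap:+d}, level {level_gap:+d}")
```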
The scientific process generally poses a question and then gathers data. Customer satisfaction was not born of theory but spawned from a data resource. It’s little wonder, then, that a nomadic discipline ensued.
Haphazard execution: More rigor or rigor mortis?
Because customer satisfaction was so readily accepted - its face validity was practically unquestioned - the usual thorough exploration of methodological issues often happened after the fact, if at all. Here are some of the key questions that were often glossed over:
Questionnaire design. How much does the exact wording of the question matter? What is the proper scale? How should that scale be developed? What questions should be added? Does the questionnaire length matter? Does placement in the questionnaire matter?
Driver analysis. How do we determine what the drivers are? Is it derived, self-explicated and/or correlated with variables internal or external to the CSat study?
Comparative analysis. How do we compare responses from choosers of different brands? What do we do about the different types or segments of choosers of other brands?
We must consider how we design CSat questionnaires. Is CSat a serious discipline with a rigorous methodology or just a popular measure whose effectiveness changes with the whims of question writers and survey designers? We’ve all seen how subtle wording changes can affect the outcome of a question. Questionnaire placement also matters. Cases in point:
We recently conducted a client’s ongoing tracking survey and its annual customer satisfaction study. Using the same sample and executing the survey at almost the same time, we received higher satisfaction scores in the tracking study, where the CSat question was halfway through the survey, than in the satisfaction study, where the main CSat question was the first asked after the screener. Further, by the time we turned it into a net satisfaction score (a topic broached later in this article), the difference was magnified.
In another case, the client changed the polarity of the response scale in an ongoing tracker (for complicated reasons that were not related to the execution of this particular study), and scores dropped by several points.
What do those differences in outcomes mean for the ability of customer satisfaction studies to precisely capture the reality of customer satisfaction or dissatisfaction? How much faith should be put in comparisons with other available customer satisfaction scores that could be gathered in even slightly different ways?
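The magnification noted in the first case is easy to reproduce with a toy calculation. A minimal sketch, assuming a 1-10 scale and one common net definition - percent top-two-box minus percent bottom-six-box (definitions vary by shop):

```python
# Toy illustration: a small shift in the response distribution moves
# a "net" score far more than it moves the mean. The scale and the
# net definition (% scoring 9-10 minus % scoring 1-6) are assumptions.

def mean_and_net(dist):
    """dist maps a 1-10 score to its share of respondents (shares sum to 1)."""
    mean = sum(score * share for score, share in dist.items())
    net = (sum(share for score, share in dist.items() if score >= 9)
           - sum(share for score, share in dist.items() if score <= 6))
    return mean, net * 100  # net in points

mid_survey = {6: 0.10, 7: 0.20, 8: 0.30, 9: 0.30, 10: 0.10}
first_question = {6: 0.10, 7: 0.20, 8: 0.40, 9: 0.20, 10: 0.10}  # 10% slid 9 -> 8

for name, dist in [("mid-survey", mid_survey), ("first question", first_question)]:
    m, net = mean_and_net(dist)
    print(f"{name}: mean {m:.2f}, net {net:+.0f} pts")
# The mean falls by only 0.10, but the net score falls by 10 points.
```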
Another problem is the tendency to wedge too many questions into CSat studies. CSat proponents believe that nothing is too minor to measure or to matter - a belief eagerly embraced by the client side. “Never miss an opportunity to collect more information” is often the battle cry. “You never know what might turn out to be important.”
Thus, we typically subject a once-agreeable respondent to a disagreeably long list of questions in which they likely have little or no interest. Increasingly, the rational choice for even the most helpful respondent is to decline. Only some kind of remuneration saves the day, but even then a completed survey doesn’t promise a thoughtfully completed survey. Remuneration risks changing the nature of a satisfaction survey, and it doesn’t preclude the customer from feeling exhausted and abused.
Our statistical methods are getting better and better but are still limited to the questions we ask and the dependent variables we have available. Do whatever factors increase CSat scores become the criteria for what’s important? What if those factors are unrelated to profits or revenues? Of course, linking survey data to actual customer databases has big implications for confidentiality and how we do surveys. Without the linkage, though, validation is nearly impossible. A good dependent variable - a surrogate for sales - is essential for sound driver analysis.
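As a sketch of what sound driver analysis requires, consider regressing a linked revenue surrogate on attribute-level satisfaction scores. Everything here - variable names, data, effect sizes - is invented for illustration:

```python
import numpy as np

# Minimal driver-analysis sketch: regress a revenue surrogate
# (linked from a customer database) on attribute satisfaction.
rng = np.random.default_rng(0)
n = 500
attrs = rng.integers(1, 11, size=(n, 3)).astype(float)  # support, price, quality (1-10)

# Fabricated "truth": quality drives revenue, support barely, price not at all.
revenue = 5.0 * attrs[:, 2] + 0.5 * attrs[:, 0] + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), attrs])  # add an intercept column
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)
for name, b in zip(["intercept", "support", "price", "quality"], coefs):
    print(f"{name:9s} {b:+.2f}")
# Without a linked dependent variable like this, "driver" weights are
# just correlations among survey items, not links to performance.
```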
Finally, the integration of competitor information raises new thorny issues. How do we validly compare CSat scores across companies - assuming we’ve had the foresight to collect competitive data at all? Can the loyalists of one brand be compared to those of another brand, or is brand choice a de facto segmentation variable rendering line-item comparisons irrelevant? This point can be illustrated by contrasting users of Windows-based PCs with Mac owners. There is no question that the latter is the “ease of use” winner, but there is substantial evidence that ease of use is less important to Windows choosers, thereby reducing the comparability between CSat scores, which are likely driven by different variables.
This de facto segmentation problem derails other analyses. Addressing large negative gaps in satisfaction drivers may be misguided if your consumers have already decided that they “like you in spite of” that shortcoming, or if the shortcoming simply doesn’t matter much to them.
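A minimal sketch of that comparability problem, with scores and importance weights invented to mirror the Windows/Mac example:

```python
# A raw two-point "ease of use" deficit nearly vanishes once each
# brand's choosers weight the attribute their own way. Scores and
# importance weights are invented for illustration.
segments = {
    "Mac choosers":     {"ease_score": 9.0, "importance": 0.40},
    "Windows choosers": {"ease_score": 7.0, "importance": 0.10},
}
for name, seg in segments.items():
    drag = (10 - seg["ease_score"]) * seg["importance"]  # weighted shortfall
    print(f"{name}: ease-of-use drag on overall CSat = {drag:.1f} pts")
# Mac: 0.4 pts, Windows: 0.3 pts. The line-item gap is real, but its
# effect on each group's overall satisfaction is roughly the same.
```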
Measurement confiscation: Customer satisfaction or self-satisfaction?
Over the past decade, the raison d’être for CSat has quietly morphed from correcting customer problems to measuring company performance.
Various reasons can be cited for this transformation - metrics-driven executives, performance-based personnel systems, etc. - and the evidence for this transformation lies in the ubiquitous scorecards and dashboards lining corporate boardrooms. Kudos to the scorecards where forward-looking strategy and research dictated the measures required, versus the plethora of scorecards retrofitting whatever data was available. The latter metrics are often as relevant as a map of France for exploring the moon.
In one instance, a company one of the authors once worked for was required to measure changes in market share as part of the executive scorecard. In the firm’s printer division, the executives, through a series of tortured maneuvers akin to a Rube Goldberg device, actually managed to exclude market leader Hewlett-Packard’s printer business from the relevant set of competitors!
In this revised orientation, surveys are aligned with what the client wants to hear far more than with what the customer might have to say. A hotel chain frequented by one of the authors seems fixated on knowing whether he was greeted by name on arrival, but it has not once asked whether he was happy with its upgrade policy. It seems the desk clerk’s performance evaluation trumps the guest’s satisfaction.
Most researchers are painfully aware that management teams often believe information grows on trees. They are, sadly, oblivious to such concerns as sample availability, projectability, budget constraints, respondent cooperation, respondent endurance, etc. And when we speak of confidence levels and the limitations of non-parametric statistics, they, like Elvis, have already left the building. Consider this all-too-common design decision:
In more than one study we’ve designed, we’ve been asked to expand the scope beyond the top segments to a secondary segment with a minuscule incidence. In these cases, the trade-off is often made to keep the cost the same by drastically reducing sample in the top segments. The end result almost always has been a study where only the broadest comparisons are available - and none of the targeted executives happy, including the ones who requested the change.
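The arithmetic behind that disappointment is simple. A rough sketch, assuming simple random sampling, 95 percent confidence and the worst-case proportion of 50 percent (the sample sizes are illustrative):

```python
import math

# Margin of error for a proportion (95% confidence, simple random
# sample, worst case p = 0.5). Segment sample sizes are illustrative.
def moe(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

before = {"top segment": 400}
after = {"top segment": 150, "low-incidence segment": 100}  # same budget

for label, plan in [("before", before), ("after", after)]:
    for seg, n in plan.items():
        print(f"{label}: {seg} (n={n}) -> +/-{moe(n):.1%}")
# The top segment widens from +/-4.9% to +/-8.0%, and the new segment
# reads at +/-9.8% - only the broadest comparisons survive.
```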
Net satisfaction scores are another prime example of executive edict trumping thoughtful design. The net satisfaction score was devised as a “data reduction” process, and let us be clear: it is more the application of the net sat than the net sat itself that we’re attacking. Senior managers seldom have the time, or the inclination, to focus in depth on anything other than profit-and-loss statements and earnings per share. So when we set targets for this non-parametric statistic and do so without ample sample, we’re treading on dangerous ground.
It is not uncommon to see net satisfaction score goals set that are within the margin of error for the mean score. In other words, goals are set that can be “achieved” or “not achieved” purely through the effects of random sampling. One further comment on net satisfaction: the score extracts one CSat element and attempts to explain success with that variable alone. This is tantamount to predicting house value using only square footage. Yes, it will correlate, but why would anyone in their right mind forfeit precision by dismissing all other available data and settling for a single predictor? You may be required to provide this, but it’s your responsibility to point out that data reduction is also data destruction.
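A short sketch of the sampling-noise problem above. Treat the net score as a difference of two proportions; at a typical sample size its margin of error easily swallows a small annual target (the definition and numbers are illustrative):

```python
import math

# Sampling error of a net score (top-box share minus bottom-box share),
# with the multinomial covariance included. Numbers are illustrative.
def net_score_moe(p_top, p_bottom, n, z=1.96):
    var = (p_top * (1 - p_top) + p_bottom * (1 - p_bottom)
           + 2 * p_top * p_bottom) / n
    return z * math.sqrt(var)

p_top, p_bottom, n = 0.45, 0.15, 300
net = (p_top - p_bottom) * 100
moe = net_score_moe(p_top, p_bottom, n) * 100
print(f"net = {net:+.0f} pts, 95% MoE = +/-{moe:.1f} pts")
# net = +30 pts, MoE = +/-8.1 pts: a 2-point annual goal can be
# "achieved" or "missed" by sampling noise alone.
```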
Inappropriate application: Who gave satisfaction a bad name?
The politics of customer satisfaction have increased the demand for data reduction. Without easy proof of the relationship between satisfaction and revenue, organizations began seeking other applications. The founding principle of “pleasing the customer” has now been replaced with “pleasing the organization.”
Here are a few of the final blows to CSat’s integrity, in increasing order of severity. Once used for identifying and correcting customer issues, CSat has become a marketing and performance tool. Organizations, including the aforementioned hotel chain, no longer ask how they did but rather lobby customers with “Give us a 10!” Ineffectual, inappropriate and insulting!
What was once intended to be useful data, employable to aid the customer and thereby better the organization, now largely bypasses the customer and panders exclusively to the organization.
These first examples may be distasteful but the following are even more distressing:
A pioneer of the discipline has long employed some suspect practices. Seeking entry into an organization one of the authors formerly worked at, the firm acquired a convenience sample of advocates from marketing brethren, which resulted - wonder of wonders - in the niche convenience sample outscoring the competitors’ market sample. Over his protestations regarding the validity of the data, the results were purchased by his colleagues and subsequently used to promote as “best of class” a product with which the market was largely unfamiliar.
A final indignity for CSat originally appeared as recognition of CSat’s value. Not only had CSat earned a seat at the “corporate scorecard table,” CSat was now invited into the boardroom as a component of compensation schemes. The pretense that CSat is about the customer had fallen; it was now about your bonus. In this new clime, the focus shifted from customer satisfaction to employee satisfaction.
As CSat has become more and more of a performance and pay measure, the politics surrounding it have grown as well. The failure of management to grapple with sample integrity leaves truck-size openings for tampering. Here are just a few of the shenanigans we’ve seen:
• Contact information for customer satisfaction interviews is pulled from one or many customer databases. However, sales or other executives have the ability to exclude either “sensitive” customers or those who haven’t bought anything recently. Never mind asking why those customers haven’t bought anything recently.
• Some executives have latitude in “re-coding” accounts so that those accounts fall outside of the group of customers they get measured on, and into someone less fortunate’s bailiwick.
• Coaching customers who will likely be contacted: “Give me a 10!”
• In one case, the bonus pool was dependent upon the organization maintaining a specified market share. Even though revenues decreased, this goal was easily attained by eliminating poor-performing categories from the “served market space” of the organization.
There are a lot of smart rats out there who know which tunnel the cheese is in. It’s not so hard, however, to understand, if not actually empathize with, this kind of behavior. Given the imprecision of metrics such as net satisfaction, the indefensibly small and volatile samples upon which most people are measured, and the money at stake, such behavior is hardly aberrant. The real naiveté lies with senior management. They subscribe to the belief that incorporating customer satisfaction into performance measurement is virtuous. Most evidence points to it being deleterious.
This change in how organizations use CSat has eroded the standing of market intelligence groups in most companies. With so much at stake, disputes routinely break out between the measurers of CSat and the measured. As we have shown, there are many grounds for questioning CSat measures, but these disputes with executives mostly focus on the execution of CSat studies in an attempt to improve their scores; no one ever attacks a CSat study when scores are going up. This dynamic alone has swung the pendulum in the relationship between market intelligence and management from partner to police.
Not always worked well
It’s clear that the evolution of customer satisfaction has not always worked well for the market intelligence community or the businesses we serve. The promise of greater profitability through increased customer satisfaction is alluring, and we’ve all read multiple books that attempt to show these links. But in our experience within some truly world-class organizations, we have not seen this link hold up in practice. In any particular business, there are dozens of factors that can and do interfere with delivering on the CSat promise.
Despite the somber opening of this article, we do not think customer satisfaction studies should be abandoned. Instead, in the coming articles, we will argue that customer satisfaction deserves - or really, demands - a revised strategy and set of expectations. Once that is done, creative changes in how we execute this strategy can lead to real value.