Better experience, better profits
Editor's note: John Goodman is vice chairman, Customer Care Measurement & Consulting, Alexandria, Va. This article is adapted with permission from a chapter in his book “Strategic Customer Service.”
Even at their best, most customer experience (CE) surveys are not especially effective in providing actionable results. They take the customer’s pulse and provide a score, but they often fail to guide the development of a better CE or move the proverbial needle. In short, many companies today appear to be spending much and getting comparatively little in return for their CE survey investments.
That’s a shame. Ineffectual CE surveys are not a fait accompli. Done right, they offer rich, meaningful CE insights and serve as a trustworthy barometer of corporate well-being. Used correctly, such survey results offer a powerful and practical management tool for shaping and nurturing CE investments. Approached with more than good intentions – and without a desire to simply confirm the status quo – they facilitate the uncovering and implementation of the “right” actions: those at the intersection of a better CE and a more profitable company.
The following are 10 categorical best practices that generally apply to all CE survey types. Some of these best practices relate to survey methodology and others pertain to how the survey results are analyzed and packaged.
1. Prepare the internal audience for constructive bad news. One way to lose your audience is to unpleasantly surprise them with data that runs counter to their own experience or that they find threatening. The following are critical to properly setting the audience’s expectations in advance:
- Stress that research often produces counterintuitive surprises and will surface some unhappy customers.
- Show that negative results highlight the causes of price sensitivity which, when identified, can be used to facilitate better margins.
- Couple each negative result with a quantification of the upside revenue opportunity.
- Assure the audience that the findings will focus on process issues rather than affix blame to a particular unit.
2. Use a sample that yields precise and representative data. No survey practice is more tied to the proverbial warning “garbage in, garbage out” than sampling technique. Regardless of the survey type, rigorous sampling – picking enough of the right customers to participate in the survey – is essential to producing trustworthy data. Any sample must fulfill two prerequisites. First, it must be constructed to yield statistically precise results. Second, it must be developed to produce representative data.
The statistical precision of a survey finding has to do with margin of error and statistical confidence, which are mostly a function of customer sample size and the response rate. Think of it this way: All else being equal, the more data points (i.e., survey respondents), the greater the precision of the survey results. Plenty of academic and practical resources are available to guide a determination of responsible, scientific sampling technique. The goal is to strive for a CE survey data set that yields no more than a ±5 to 7 percent margin of error, with a 95 percent confidence level. The best way to achieve this minimum precision level is to forecast and plan for this target.
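As a rough illustration of the arithmetic behind that target, the sketch below uses the standard sample-size formula for a proportion (a textbook formula, not one prescribed in this article) to compute how many completed surveys a given margin of error requires at 95 percent confidence:

```python
import math

def required_sample_size(margin_of_error: float, z: float = 1.96,
                         p: float = 0.5) -> int:
    """Completed surveys needed to estimate a proportion within the given
    margin of error; p = 0.5 is the conservative, worst-case assumption."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

# The target above: a ±5 to 7 percent margin of error at 95 percent
# confidence (z ≈ 1.96).
print(required_sample_size(0.05))  # 385 completed surveys for ±5 percent
print(required_sample_size(0.07))  # 196 completed surveys for ±7 percent
```

Planning backward from a number like this – dividing it by the expected response rate to set the size of the outbound invitation list – is one way to forecast for the target rather than hope to hit it.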
The statistical representativeness of a survey concerns the degree to which the customers responding match the customer population on some key set of characteristics (e.g., the products or services they have purchased, gender, income, etc.). While statistical precision and representativeness can be related, they are not identical. For example, a company could possess a precise data set of 1,000 survey respondents that is not representative of all customers (e.g., includes only men, is skewed toward customers only using one type of product or service, etc.). We typically perform a congruence test on any data set to determine whether the survey respondents are representative of the customer population. For example, if Product A is used as the criterion characteristic, we compare the match between the percentage of customers in the population using Product A and the percentage of customers using Product A in our CE survey data set. This congruence test will be repeated for a variety of indicator characteristics. The representativeness of the data set is based on the relative difference between these two percentages across the chosen set of variables. The closer the average difference is to zero, the greater the probability that the sample is representative of the customer population.
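To make the congruence test concrete, here is a minimal sketch; the indicator characteristics and percentages are hypothetical, and the measure follows the relative-difference logic described above:

```python
# Hypothetical shares of customers with each indicator characteristic,
# in the full customer population vs. among survey respondents.
population  = {"Product A": 0.40, "Product B": 0.35, "Women": 0.52}
respondents = {"Product A": 0.43, "Product B": 0.31, "Women": 0.55}

# Relative difference between the respondent share and the population
# share for each characteristic, then averaged across characteristics.
gaps = [abs(respondents[k] - population[k]) / population[k] for k in population]
average_gap = sum(gaps) / len(gaps)

# The closer the average gap is to zero, the more likely the respondent
# pool mirrors the customer population on these characteristics.
print(f"Average relative gap: {average_gap:.1%}")  # about 8% in this example
```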
While there is no guarantee of obtaining a representative data set by chance, a few methods can increase the probability of securing one. First, before fielding the survey, the CE practitioner should validate that the sample itself is generally representative of the customer population. Second, any gross discrepancies between the sample and the population, as well as any underrepresentation of particular segments (both for the purposes of representativeness and any segment-specific analysis), should be addressed as needed by using a supplemental sample (i.e., by adding more participants from the underrepresented segments).
3. Design a CE survey that identifies the key drivers of satisfaction and, at a granular level, customer problems. The key drivers of CE satisfaction are critical to setting CE priorities. The best way to complement those drivers with a more prescriptive understanding of CE priorities is to describe, at a granular level, the associated types of customer problems. For example, if the CE key driver is “product quality,” what types of customer problems diminish product quality? The CE survey best practice here is to ask the customer to review a list of potential problems spanning the entire customer journey. The problem list should contain 20 to 50 problems. While such a list may look intimidating and negative to executives (especially in marketing), this aided approach usually identifies three times as many problems as simply asking the customer, “Have you had any recent problems?” Further, the list can include critical issues many customers are afraid to mention, such as “being misled by the sales representative.” Once such an issue is on the list, the customer has permission to flag the problem.
4. Use a survey invitation that convinces customers to invest their time to respond. Regardless of the survey methodology, an invitation to participate in a CE survey should provide two specific examples of how the company used survey feedback to improve the CE. For example, a delivery company we once worked with indicated that it was enhancing its claims and invoicing processes based on customer feedback. A quick-serve restaurant we collaborated with announced it had brought back BBQ sauce, as customers requested. In both cases, the customer response was strong and positive, and the examples served as a catalyst for higher survey response rates.
5. Use customer-convenient survey channels. Web-based surveys that customers can take at their convenience are more effective than telephone-based surveys or forcing customers to respond immediately after calling customer service with a question or problem. For B2B relationship surveys, schedule an appointment with the customer, send the questionnaire in advance and enlist an account manager to follow up and encourage non-respondents to participate in the survey. These techniques can yield B2B survey response rates between 50 and 95 percent.
6. Package the survey results for ease of use by executives. CE survey results should be tailored to each audience and describe the top issues in no more than one to two pages. Complicated data tables that require study and analysis (e.g., top 10 complaints by top 15 products, giving the reader 150 data points to analyze) are a barrier to consumption of the results. When using data tables and graphs, proactively conduct the analysis for the reader and list the four key problems that most need attention. For maximum impact, estimate the monthly cost of inaction for each key issue and provide a suggested action plan with process metrics to measure impact.
7. Present data in a positive tone and with creative ideas. While we noted in the first best practice that the CE survey audience should always be prepared for constructive bad news, the survey results should strive for balance and also highlight positive accomplishments. For example, point out where previous initiatives had a positive impact or show how a process metric has improved. Blame should not be assigned to individual units, but dissatisfaction and its accompanying financial opportunity can be associated with particular cross-functional processes. The message to the operating manager is, “You are doing well, but look how much money you are leaving on the table – money you would accrue if you did X.” By nature, processes are cross-functional and therefore less threatening. Also, if you add creative ideas suggested by customers and your customer service representatives, the report is repositioned as an idea source. One company’s customer service department had a section of its satisfaction tracking report titled “The Wacky Ideas Section.” The marketing, brand and product development departments viewed the section as an innovation source.
8. Create an economic imperative that the CFO accepts. The monthly cost of inaction on each CE priority should be quantified using the market-at-risk approach, which enables you to prioritize problems for correction based on the portion of the customer base that may be lost. It considers each problem’s frequency and its damage, as measured by impact on loyalty, increased risk and negative word of mouth. The market-at-risk methodology and the customer value should be validated in advance with the CFO or the resident financial cynic. Remember that CFO buy-in significantly increases the impact of the voice of the customer (VOC) on customer satisfaction improvement.
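As an illustration of the market-at-risk arithmetic – every input below is a hypothetical assumption, not a figure from the article – the cost of inaction on a single problem might be sketched as:

```python
# Hypothetical inputs for one CE problem.
customers_hitting_problem_per_month = 2_000  # frequency of the problem
share_whose_loyalty_is_at_risk = 0.15        # damage: % who may not repurchase
annual_customer_value = 600.0                # revenue per retained customer

customers_at_risk = (customers_hitting_problem_per_month
                     * share_whose_loyalty_is_at_risk)
revenue_at_risk = customers_at_risk * annual_customer_value

print(f"Customers put at risk each month: {customers_at_risk:.0f}")  # 300
# Each month of inaction places $180,000 of annual revenue at risk.
print(f"Annual revenue put at risk per month of inaction: ${revenue_at_risk:,.0f}")
```

A fuller model would also price in negative word of mouth; the point is that each input can be validated line by line with the CFO before the survey results are presented.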
9. Present the data at a granularity level that makes it actionable. Define CE priorities in as detailed a manner as possible. Issues such as billing or sales unresponsiveness are too broad and likely to cause defensiveness. The CE practitioner can greatly enhance the prescriptive value of the survey findings through three survey design practices:
- Offer, as previously described, an aided list of 20 to 50 problem categories and ask respondents to indicate all problems experienced as well as the most important problem.
- Analyze a limited number of open-ended questions – e.g., in an open-ended question that follows the survey’s problem list, ask for a description of the most serious customer problem.
- Analyze a set of special survey questions, included in the survey, to uncover insights about known priorities.
10. Measure, communicate and celebrate progress. Many VOC processes are not systematically monitored to determine whether the plans made in response to the VOC reports ever achieve the promised improvements. Lack of accountability is a serious impediment to achieving action. Creating accountability by measuring progress is one of the most important functions the CE leader can perform.
Another key part of this process is recognizing and celebrating successes and ensuring all the involved actors receive accolades. If anything, spread the glory too wide. Those who were not strong contributors on the current project will work harder next time.
Communicate the process changes made, based on customer and employee feedback, to the entire employee base. Most companies communicate via the website, e-mails and supervisor briefings. Unfortunately, many employees do not read everything and many supervisors filter and truncate communications.
Be sure to include your customers in the communications. They will be excited that you are paying attention to their input. Gary Furtado, president and CEO of Navigant Credit Union in Rhode Island, relates that when he communicated the results of Navigant’s member experience survey and his intended action plan to the membership, he received over 50 e-mails congratulating him on his courage in asking about problems and his follow-through in conveying the results and action plan to all the members.