Editor’s note: Terry Vavra and Doug Pruden are partners at research firm Customer Experience Partners. Vavra is based in Allendale, N.J. Pruden is based in Darien, Conn. This is an edited version of a post that originally appeared here under the title, “7 ways to boost NPS scores – that we hope are never used!”
It’s hard to find a company today that collects customer feedback without including the Net Promoter question, “How likely is it that you would recommend [company] to a friend or colleague?”
It’s a perfectly reasonable question. But too often it prompts a chase for higher scores rather than a process for capturing information that drives real improvements to the customer experience. The NPS question seems to trigger an almost insurmountable primal urge: intentionally or unintentionally, organizations attempt to fix the score rather than fix the store, by applying some of the following tactics.
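For readers less familiar with the mechanics: the Net Promoter Score is computed by bucketing the 0-10 responses into detractors (0-6), passives (7-8), and promoters (9-10), then subtracting the percentage of detractors from the percentage of promoters. A minimal sketch (function name and sample data are our own illustration):

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but toward neither group.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical batch: 4 promoters, 3 passives, 3 detractors
scores = [10, 9, 9, 10, 8, 7, 7, 5, 3, 6]
print(nps(scores))  # 40% promoters - 30% detractors = 10.0
```

Note that the passives drop out of the numerator entirely, which is why several of the tactics below work by nudging responses across the 6/7 and 8/9 boundaries rather than by improving the experience itself.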
Seven malignant practices
1. Begging: A store clerk or online service representative advises the customer that they will soon receive a satisfaction survey and then pressures them to give a 9 or 10, because anything less is “considered by management as failure that could cost them [their job, their bonus, their future promotion].”
2. Scale order: Low-to-high scales are the convention in our society, and Reichheld and Satmetrix originally built Net Promoter on a 0-10 scale (0 = not at all likely, 10 = extremely likely). But for any number of reasons, some organizations reverse the scale to 10-0 (extremely likely to not at all likely) on their NPS question. That seemingly subtle twist has been documented to lift scores!
3. Event timing: While some satisfaction survey processes are conducted on an ongoing basis, others are fielded quarterly or even annually. For those using periodic fielding, timing can become a major influence on NPS scores. Choosing to conduct a survey in the wake of the announcement of new customer benefits or a price decrease (or, conversely, delaying a study following a price increase or PR disaster) can artificially inflate NPS scores.
4. Selected audience: We’ve all heard the excuses:
- “We can’t seek feedback from that group of customers – they’re too important to bother.”
- “We need to find a way to eliminate X group of customers from the survey process – they were involved in the product recall.”
- In B2B environments, the most egregious approach: “Let’s let the sales force comb through the list, just in case there are some sensitive cases.”
Eliminating any specific group of customers from a satisfaction survey obviously will impact the collected NPS scores. But it’s not likely to improve customer retention or improve word of mouth for the brand!
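To see how much screening out a segment can move the number, consider a hypothetical illustration (all figures are our own invention): a 100-response survey that includes 20 responses from customers affected by a product recall. Dropping that segment shifts the score dramatically while telling you nothing new about retention or word of mouth.

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical full sample: 40 promoters, 30 passives, 30 detractors.
full_sample = [9] * 40 + [7] * 30 + [4] * 30

# The 20 recall-affected customers skew heavily toward detractors.
recall_customers = [4] * 18 + [7] * 2

# "Screened" survey: the remaining 80 responses.
screened = [9] * 40 + [7] * 28 + [4] * 12

print(nps(full_sample))  # (40 - 30) / 100 * 100 = 10.0
print(nps(screened))     # (40 - 12) / 80  * 100 = 35.0
```

In this made-up example, quietly excluding one unhappy segment lifts the reported NPS from 10 to 35 without a single customer experience improving.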
5. Visual cues: The whole idea of using 10- and 11-point scales for satisfaction and NPS questions is to allow customers to broadly express their feelings about their customer experience. When brands use different colors and smiley faces along with the scales, they basically collapse an 11-point scale into a 3- or 5-point scale and provide suggestions for how customers should respond.
6. Providing guidance: As researchers we strive to gather unbiased, top-of-mind reactions with our questions. Yet some brands intentionally or unintentionally use question order to telegraph the criteria customers should use when answering overall satisfaction and NPS questions. Other brands add explanatory text to their questionnaires about how scores will be interpreted (“0-6 categorizes you as a detractor”; “9-10 categorizes you as a promoter”). That may not generate more promoter scores, but it likely pushes more customers with less-than-positive experiences into the middle, passive range.
7. Eliminating influencers: B2B marketers face the challenge of how to score multiple responses from within the same customer company, and how to handle NPS responses from decision-makers as opposed to influencers or gatekeepers. There is unlikely to be a single right answer for producing a truly representative NPS score in these situations. That uncertainty tempts some to weight responses by whichever group delivers the most favorable NPS. We do know that is the wrong criterion!
Your next mentoring session
We recognize this list could be read as a user’s guide for fixing scores. Our intentions are far different: we’re reminding ourselves of the many pitfalls we face when trying to honestly profile our customers. But we all need, from time to time, a gentle reminder to focus on setting the right priorities; continuously improving our processes and products; improving our training practices; and, in general, making the right improvements to our customers’ experiences. We hope this list serves that purpose.