Editor’s note: Chris Benham is CMO of SurveyGizmo, a Boulder, Colo.-based software company.
Most market research begins with a manager posing a question: “Do we know…?” The question often launches an all-too-familiar process. A team is gathered, a survey is created, debated, revised, debated some more and finally fielded. Once the data is collected, it’s analyzed, discussed, debated, analyzed again and debated some more. Eventually, a report is created with pie charts and graphs and a meeting is scheduled to present the findings and analysis. Sometimes the original question is even answered. Sometimes.
The reality is survey results often end up in pie charts and dashboards and never actually drive the change the original question was devised to address. How do we know this? Well, because one of our managers recently posed the question, “Do we know if survey feedback actually drives change?” (And, yes, the question precipitated the launching of an all-too-familiar process that involved creating a survey, debating it and so on.)
What we learned
The findings of the “Do surveys drive change?” survey were interesting – including the somewhat amusing fact that more than 15% of respondents claimed to use our fictitious BAPU score to measure overall improvement. What was perhaps most interesting was the apparent disconnect between the people who create, manage and interpret surveys and the senior managers who are responsible for implementing the change they highlight.
When we asked survey creators whether their work drove improvement, 48.9% said, “some improvement,” while only 17% said, “significant improvement.” Conversely, 18% of VP/C-level managers believed their surveys drove “some improvement,” while 51.2% claimed they saw “significant improvement” as a result of their survey data. Who’s right? The people who create the surveys or the people for whom the surveys are created? In the end, the answer is determined by a third group: the respondents to the survey. Did respondents feel their feedback was heard and did they see change as a result of their input?
The biggest challenge
According to most industry analysts, the biggest challenge companies face with customer feedback is driving action and closing the loop with respondents. That’s because most feedback is aggregated, which makes it hard to act on any individual response.
Take, for example, an enterprise’s Net Promoter Score (NPS). The overall score tells a story in broad strokes. If 50% of respondents give a score of 8 or higher, that’s a good score. But what if the 10% who give scores of 3 or lower are the ones who buy the product, rather than just those who use it? Are they likely to renew? That key data, while essential to act upon, can be easily overlooked, putting a major customer relationship in jeopardy.
Creating processes that operationalize this kind of data is key to driving action and allowing respondents to be addressed on an individualized basis. By integrating feedback directly into an existing CRM, customer support ticketing and other customer management systems, companies can take immediate action on the feedback provided. This will allow companies to close the loop with individual customers instead of viewing them as an aggregated number in a pie chart.
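As a minimal sketch of what such an integration might look like: a low score opens a follow-up ticket in a support system instead of disappearing into an aggregate. The `SurveyResponse` shape, the score threshold and the `create_ticket` function are all hypothetical stand-ins for whatever CRM or ticketing API a company actually uses.

```python
# Sketch: route an individual survey response into a ticketing system
# so low scores trigger immediate, individualized follow-up.
# `create_ticket` stands in for a real CRM/ticketing API call.

from dataclasses import dataclass

@dataclass
class SurveyResponse:
    customer_id: str
    nps_score: int   # 0-10 rating
    comment: str     # open-text feedback

def create_ticket(customer_id: str, summary: str) -> dict:
    """Placeholder for a real integration (e.g., an HTTP POST to a
    support-ticketing API). Here it just returns the payload."""
    return {"customer": customer_id, "summary": summary, "status": "open"}

def route_response(resp: SurveyResponse, threshold: int = 6):
    """Open a follow-up ticket for any detractor-level score."""
    if resp.nps_score <= threshold:
        return create_ticket(
            resp.customer_id,
            f"NPS {resp.nps_score}: {resp.comment}",
        )
    return None  # higher scores flow straight through to aggregation
```

The point of the threshold is that the response is intercepted before it becomes a slice of a pie chart; the follow-up happens while the feedback is still attached to a name.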
When we first tried this approach, one of our customers said it was “kind of freaky” that we called to close the loop with him about his negative score. “I didn’t think anybody monitored this stuff,” he said. It proved two things: this approach works, and the industry has broken its contract with survey takers by not getting back to them and closing the loop. Sadly, respondents don’t expect their feedback to be acted on.
Feedback system
So, what can you do? There are ways to have survey responses automatically trigger follow-up on the part of a sales or support team. One car service company uses customer feedback to trigger an e-mail to the location manager to call the customer and see what they liked and didn’t like. Companies that have implemented such an operationalized feedback system – some without changing the way they work or the systems they use – have seen their NPS, customer effort score (CES) and CSAT scores climb rapidly.
Additionally, each individual score is still aggregated into the overall score. So, nothing is lost by intercepting the responses on the way to aggregation. This process has also taught us that negative reviews come from people who care enough to give their feedback. The group most at risk are the passives (those with a score of 7 or 8). They have the lowest emotional involvement with your product or service and are the most likely to switch over price or some other feature. Both promoters and detractors are still engaged; passives are not.
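The interplay between individual interception and the aggregate can be sketched in a few lines, using the standard NPS bands (0–6 detractor, 7–8 passive, 9–10 promoter):

```python
# Sketch: classify each response for individual action while still
# rolling every score into the overall NPS -- nothing is lost by
# intercepting responses on the way to aggregation.

def classify(score: int) -> str:
    """Standard NPS bands: 0-6 detractor, 7-8 passive, 9-10 promoter."""
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "passive"
    return "promoter"

def nps(scores: list) -> float:
    """NPS = % promoters minus % detractors, on a -100..100 scale."""
    labels = [classify(s) for s in scores]
    promoters = labels.count("promoter")
    detractors = labels.count("detractor")
    return 100 * (promoters - detractors) / len(scores)
```

Note that passives drop out of the headline number entirely, which is one more reason the aggregate alone can hide the least-engaged segment the paragraph above describes.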
Intercepting surveys and taking action on them before they get aggregated is the best way to treat each customer as an individual, without forgetting the collective scores. It shows people you care about their opinion and value their input. This is only possible when you operationalize feedback by having responses trigger action within your organization.
How to operationalize feedback
- Make sure you’re collecting open-text responses in your customer feedback.
- Share the open-text responses with people who can act on them (support, sales, finance, etc.). Even if you’re only forwarding the responses in e-mails or spreadsheets, you’re creating the opportunity for the information to be acted on.
- To automate this process, work with IT to identify the applications your company uses to engage with your customers – support ticketing, CRM, etc. Next, determine if you can integrate your survey or feedback responses into them to create individualized alerts sparked by the open-text comments or ratings. Finally, work with the teams that own customer engagements to make sure they’re aware these new alerts will be coming to them so they can take the appropriate action.
- Don’t let feedback disappear into a pie chart or dashboard. Make “kind of freaky” the response you get when you don’t close the loop with your customer, not the other way around.
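The steps above can be sketched as a simple routing layer that forwards each open-text response to the team that can act on it. The team names and the keyword-based routing rule here are hypothetical; a real system would more likely key off ticket categories or CRM fields.

```python
# Sketch: forward each open-text response to the owning team as an
# alert, so feedback triggers action instead of landing in a dashboard.
# Routing keywords and team names are illustrative only.

ROUTING_RULES = {
    "invoice": "finance",
    "billing": "finance",
    "crash": "support",
    "bug": "support",
    "price": "sales",
}

def route_comment(comment: str, default_team: str = "support") -> str:
    """Pick the owning team from keywords in the open-text comment."""
    text = comment.lower()
    for keyword, team in ROUTING_RULES.items():
        if keyword in text:
            return team
    return default_team

def build_alert(customer_id: str, comment: str) -> dict:
    """Package one response as an alert for the owning team."""
    return {
        "team": route_comment(comment),
        "customer": customer_id,
        "comment": comment,
    }
```

Even this crude version honors the contract with the respondent: every comment lands in front of someone whose job is to respond to it.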
Operationalizing feedback to drive action is at the forefront of the evolution of feedback. Historically we gathered feedback to drive insight and then action – by operationalizing feedback, we can drive action first. With any luck, we’ll all replace our current process that fails to answer the original question with one that not only answers the question but also drives action to address it.