Leveraging Kano analysis and creative insights to drive actionable market research results

Editor’s note: Lizzy Munro is a quantitative market researcher at Asana, an enterprise work management platform. 

I love quantitative market research because I get to balance the creativity of finding insights and weaving a story with the rigor of empirical statistical methods. In a recent project using the Kano (pronounced “KAH-no”) methodology, I had to get creative when faced with undifferentiated data.

Starting point – quantitative research

As part of an in-house quantitative market research team, I work with stakeholders from across marketing and research and development teams. They have a tendency to discover a framework, and then suddenly all of our research has to adhere to it (jobs to be done, product-market fit, category entry points). While a preexisting framework or methodology is often a helpful starting point, sticking to it too rigidly can do a disservice to the goals you’re trying to achieve.

So it was with Kano analysis. My team first used this methodology two years ago as a glorified way to prioritize new feature ideas. By asking two questions about a set of features or ideas and then combining the responses in different formulas, Kano allows you to categorize ideas based on how important they are to your audience. Typical categories include must have, performance, attractive and indifferent – although these can vary. 

We used two questions:

  • Given the price provided, how do you feel about a collaborative work management tool that offers this feature? (I like it, I expect it, I am neutral, I can tolerate it, I dislike it)
  • Given the price provided, how do you feel about a collaborative work management tool that does not offer this feature? (I like it not being available, I expect it to not be available, I am neutral to it not being available, I can tolerate it not being available, I dislike it not being available)
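For readers who haven’t used Kano before, combining the two answers is usually done with an evaluation table. Here’s a minimal Python sketch of one common variant of that table (the exact cell assignments vary by practitioner, and this is an illustration rather than the precise mapping we used):

```python
# One common Kano evaluation table; rows are the answer to the "offers this
# feature" question, columns the answer to the "does not offer" question.
# The cell assignments are an illustrative variant, not a universal standard.
ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]

EVAL_TABLE = {
    "like":     ["questionable", "attractive", "attractive", "attractive", "performance"],
    "expect":   ["reverse", "indifferent", "indifferent", "indifferent", "must_have"],
    "neutral":  ["reverse", "indifferent", "indifferent", "indifferent", "must_have"],
    "tolerate": ["reverse", "indifferent", "indifferent", "indifferent", "must_have"],
    "dislike":  ["reverse", "reverse", "reverse", "reverse", "questionable"],
}

def kano_category(functional_answer: str, dysfunctional_answer: str) -> str:
    """Combine one respondent's two Kano answers into a category."""
    return EVAL_TABLE[functional_answer][ANSWERS.index(dysfunctional_answer)]
```

For example, a respondent who would “like” the feature and would “dislike” its absence lands in the performance category, while one who “expects” it and would “dislike” its absence lands in must have.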

The top ideas ended up on our road map and are now in market. 

Creative assessment

Enough time had passed that we had a ton of new ideas and wanted to repeat the exercise. But when I got the data back, my prioritization rules from the previous wave didn’t work. My “must have” percentages were almost identical across all ideas. How could I confidently tell the product teams what was most important to build next?

“Dull” data wasn’t new to me. I look after our quarterly customer satisfaction tracker and every time I’m amazed at how much Asana customers love Asana. I’m used to digging around to find the pockets of concerns and interesting nuggets I can share with the business. 

But this was something else. The prior report had set a precedent that this analysis would inform our road map for the next two years. And if I was going to fight for us to focus on some ideas over others, I needed data I believed in. 

So, I got to work assessing my data with fresh eyes: 

  • I revisited my data cleaning in case poor responses were distorting the picture:
    • Checked again for straightlining
    • Looked for inconsistent answers (for example, those who claimed to be Managers but then at a later question said they didn’t manage any people)
    • Reread hundreds of open-end responses for evidence of fraud  
  • I revised my category definitions: 
    • From using statistical significance to determine the categorization, to comparisons against the mean
    • From using the Kano questions in combination to looking at the statements in isolation (i.e., just “I like it” or just “I expect it”)
  • I tried different category permutations:
    • “Must have” and “performance”
    • “Must have,” “performance” and “attractive”
    • Just “attractive”

Ultimately, I couldn’t solve this alone. 

Creative analysis

I got together with my market research colleagues and we brainstormed ways to pull out the differentiation that we knew was in there. We landed on a scoring model that used the Kano categorization to create a score, and then we used the “must have” (highest-weighted) allocation to plot the ideas on a 2x2.


Image description: How the two Kano survey questions were used to categorize and then calculate a weighted average Kano score and then transformed into a 2x2 with unique labels for each quadrant.  

The score allowed us to give more weight to the extremes – “must have” and “reverse” – considering how polarizing a feature was. It downplayed the importance of the middle of the distribution without distorting the underlying patterns in the data.   

The scoring also took into account a wider range of Kano responses – both wanting a feature because it was delightful and expecting a feature because it was necessary. This meant the recommendations considered the breadth of the need, not just the acuity. This partly addressed some of the concerns that have been raised about Kano – that answer options aren’t mutually exclusive: you could both “like” and “expect” a feature. (See this article by Chris Chapman for a discussion of Kano weaknesses.) 

Using the “must have” percentage as the other axis in the 2x2 introduced a measure of “essentiality.” The goal was to provide high-confidence recommendations to my stakeholders, and being able to say a feature was considered “must have” by our target audience was highly compelling.

The 2x2 was also visually satisfying. There were answers in every quadrant, which validated that there was more to the story than the “dull” data I initially encountered. I labeled the quadrants with what they represented, and this formed the basis of my recommendations to the product teams: “invest” in the ideas in the top right, “deprioritize” the ideas in the bottom left, consider the breadth of appeal of “delighter” ideas and prioritize accordingly. 
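Putting the pieces together, the scoring and quadrant logic can be sketched like this. The weights, cut-offs and quadrant assignments below are my own illustrative assumptions (in practice the cut-offs might be the medians of the idea set), not the exact figures from the project:

```python
# Illustrative weights that amplify the polarizing extremes ("must have"
# and "reverse") and downplay the middle of the distribution.
WEIGHTS = {"must_have": 2.0, "performance": 1.0, "attractive": 1.0,
           "indifferent": 0.0, "questionable": 0.0, "reverse": -2.0}

def kano_score(category_shares: dict[str, float]) -> float:
    """Weighted average score from an idea's {category: share of respondents}."""
    return sum(WEIGHTS[cat] * share for cat, share in category_shares.items())

def quadrant(score: float, must_have_share: float,
             score_cut: float = 0.5, essential_cut: float = 0.25) -> str:
    """Place an idea on the 2x2: Kano score vs. "essentiality".

    Cut-offs and label placement are illustrative assumptions.
    """
    high_score = score >= score_cut
    essential = must_have_share >= essential_cut
    if high_score and essential:
        return "invest"
    if not high_score and not essential:
        return "deprioritize"
    return "delighter - wide appeal" if high_score else "delighter - niche appeal"
```

For instance, an idea whose responses split evenly between must have and attractive would score 1.5 under these weights and, with a 50% must-have share, land in the “invest” quadrant.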

Importantly, the recommendations also passed the test of seeming obvious once you knew. There were consistent themes in the investment areas, and I could explain why a feature was or wasn’t important to a subgroup based on their profile and other survey answers. 


Image description: How the Kano results looked on a 2x2, with labels for “invest,” “delighter – wide appeal,” “delighter – niche appeal” and “deprioritize.” 

I had been concerned about how to explain this pivot and the calculations to my stakeholders. But they were far more interested in the implications for them than the technical process. The “how” was less important – but knowing that this was Kano still reassured those who valued this method. 

Creative application

The report was landing in planning season – a time when reports replicate like rabbits. I worried it would get lost among all the other insights data our product teams had access to. Luckily, I wasn’t the only person thinking this. 

Our product marketing team (PMM) had created a “voice of business” (VOB) perspective on what we needed to do to win our category – essentially a battle plan for industry domination.

Our user experience research team (UXR) was creating a complementary “voice of customer” (VOC) perspective – making sure we didn’t lose sight of the true battle heroes (our users). This contextualized the nine items in the VOB with customer evidence and customer pains. Together the VOB/VOC would keep our product and go-to-market (GTM) strategies aligned on success. Yes, we are an organization that loves acronyms.

With a little creative license, my Kano results could slot right in. I themed all the ideas we had tested to align with the VOB/VOC items. I then used the ideas that performed well according to the 2x2 (“invest” in the top right) to augment each VOB/VOC item with specific product needs we should address. 

This had the delightful upside (I think!?) of distilling all my hard-earned analysis into a catchy Kano jingle. When stakeholders asked, “What’s this VOC recommendation based on?” dropping a casual “Kano” gave it instant, rock-star-level credibility. 

Actionable, story-driven insights 

Working on this project reinforced that what we do isn’t black and white – even if the data is. Frameworks can inspire without dictating. Next time, I might skip Kano altogether in favor of a MaxDiff that directly trades ideas off against each other.  

Remember, teamwork will elevate your output. And adapting your insights to fit in other deliverables can increase the audience you reach.