Improving your odds
Editor's note: Keith Brady is vice president at Socratic Technologies Inc., a San Francisco research firm.
Most new products fail. The exact failure rate is a bit of a mystery – some studies put it at 75 percent, while others put it as high as 95 percent (the variation likely stems from which industries are included and how success is defined). Whatever the true figure, that’s a dismal outlook.
Surely, with all of the available insights data, analytical tools, techniques and methods that have been developed in recent decades, the product failure rate has declined substantially. And surely the emergence and promise of big data has taken most of the uncertainty of market movement and ambiguity of product performance out of the equation.
Nope. Sadly, the high failure rate persists.
So why do products with poor quality and poor design slip through? Why do marketers underestimate demand, incorrectly price, badly position or inadequately communicate the benefits of new products?
It’s not that the tools, methods and data (including big data) don’t work. It’s usually that the tools, methods and data aren’t used at all, are only partially applied or are applied incorrectly.
Before we dig into those reasons, exactly what tools, methods and data are we talking about?
There are five potential categories for exploration and testing of new concepts and products:
Qualitative research. Qualitative research has traditionally been the prologue to concept development and ideation. In-depth interviews and focus groups can explore how consumers define categories of products, which brands most define the category and why, as well as where there are distinct market needs and preferences that could potentially be addressed with a new entrant (or a new category).
Traditionally, qualitative research has been a face-to-face endeavor but recent years have seen a proliferation of digital approaches, from remote videoconference interviews to Web boards, with an array of traditional tools (e.g., collages or other projective techniques) replicated in an online environment.
Quantitative concept and product testing. Whether testing early concepts or refining near-ready products, several tools and techniques are available to better understand market affinity for products/concepts or the part-worth utility of specific features. With many of the tools and techniques below, the resulting valuations (or part-worth utilities) can feed market simulators that estimate revenue or market share for different combinations.
Conjoint analysis determines what combination of attributes, among a limited set, is most influential on a respondent’s product choice. A predefined set of potential concepts or products is shown to respondents. Respondent preferences for certain combinations of attributes are analyzed to discern the implicit valuation of the individual features/elements.
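To make the mechanics concrete, the sketch below (a minimal Python illustration with made-up attributes, profiles and ratings, not any particular vendor’s implementation) shows how part-worth utilities can be recovered from a simple ratings-based conjoint via dummy-coded regression.

```python
# Minimal ratings-based conjoint sketch. Attributes, levels and mean
# ratings are hypothetical; real studies use many more profiles and
# respondent-level estimation.
import numpy as np

# Dummy-coded profiles: [intercept, battery=long (vs. short), price=low (vs. high)]
X = np.array([
    [1, 1, 1],  # long battery, low price
    [1, 1, 0],  # long battery, high price
    [1, 0, 1],  # short battery, low price
    [1, 0, 0],  # short battery, high price
])
ratings = np.array([9.1, 7.2, 6.8, 3.9])  # mean purchase-intent ratings (made up)

# Least squares: the coefficients approximate part-worth utilities
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(f"baseline utility:         {coefs[0]:.2f}")
print(f"part-worth, long battery: {coefs[1]:.2f}")
print(f"part-worth, low price:    {coefs[2]:.2f}")
```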
Discrete choice experiments/models statistically relate choices made by each respondent to the attributes of both the respondent and the product or concept. Resulting models estimate the probability that a person chooses a particular product based on both product features and respondent characteristics (age, gender, etc.). Unlike standard ratings or importance scales, discrete choice forces respondents to make choices between options, while still delivering rankings showing the relative importance of the items being rated.
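For illustration, a fitted multinomial logit model converts each option’s utility into a choice probability via a softmax. The coefficients below are hypothetical, as is the age-by-price interaction standing in for respondent characteristics.

```python
# Hypothetical multinomial logit sketch: coefficients are made up, as if
# already estimated from prior choice data.
import numpy as np

BETA = {"price": -0.08, "brand_a": 0.6, "age_x_price": -0.01}

def utility(price, is_brand_a, age):
    """Linear utility from product features plus a respondent interaction."""
    return (BETA["price"] * price
            + BETA["brand_a"] * is_brand_a
            + BETA["age_x_price"] * (age / 10.0) * price)

def choice_probabilities(options, age):
    """Softmax of utilities: probability each option is chosen."""
    u = np.array([utility(price, brand, age) for price, brand in options])
    e = np.exp(u - u.max())  # subtract max for numerical stability
    return e / e.sum()

# Three options as (price, is_brand_a), evaluated for a 35-year-old respondent
options = [(29.0, 1), (24.0, 0), (34.0, 0)]
print(choice_probabilities(options, age=35))
```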
Maximum difference scaling (MaxDiff) is a specific discrete choice method in which survey respondents are shown successive subsets of the possible products/concepts and asked to indicate, within each, the most preferred and least preferred.
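A simple counting analysis shows how those responses become scores: best picks minus worst picks, normalized by how often each item appeared. The tasks and items below are hypothetical; production studies typically use hierarchical Bayes or logit estimation instead.

```python
# Hypothetical MaxDiff tasks: (set shown, best pick, worst pick)
from collections import Counter

tasks = [
    ({"A", "B", "C", "D"}, "A", "D"),
    ({"A", "C", "E", "F"}, "C", "F"),
    ({"B", "D", "E", "F"}, "E", "B"),
]

best, worst, shown = Counter(), Counter(), Counter()
for items, b, w in tasks:
    best[b] += 1
    worst[w] += 1
    for item in items:
        shown[item] += 1

# Best-minus-worst score, normalized by how often each item was shown
scores = {i: (best[i] - worst[i]) / shown[i] for i in shown}
for item, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {s:+.2f}")
```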
For particularly intricate products and concepts, configurator tools offer a superior method of understanding the value consumers place on specific features of a product or concept. Configurators allow research participants to create their own ideal product or service bundles, choosing among possible combinations of features and service components. In some cases, known sub-component pricing is used to mimic the real-life shopping experience, in which specific features add cost to a base model.
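The bookkeeping behind such a tool can be compact, as in the hypothetical sketch below: a base price plus known sub-component prices, with each respondent’s assembled bundle recorded for later valuation analysis. All feature names and prices are invented.

```python
# Hypothetical configurator pricing: base model plus priced add-ons.
BASE_PRICE = 199.0
FEATURES = {
    "extended_warranty": 29.0,
    "premium_support": 49.0,
    "extra_storage": 19.0,
}

def bundle_price(selected):
    """Total price of a respondent-configured bundle."""
    unknown = set(selected) - FEATURES.keys()
    if unknown:
        raise ValueError(f"unknown features: {unknown}")
    return BASE_PRICE + sum(FEATURES[f] for f in selected)

# One respondent's ideal bundle; logged choices reveal feature valuations
choice = ["extended_warranty", "extra_storage"]
print(f"configured price: ${bundle_price(choice):.2f}")
```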
Highlighter tools provide a feedback system by which consumers can interact directly with specific features and attributes of a product or concept (as well as creative elements like advertisements, direct mail, Web sites, etc.), allowing them to indicate those which elicit particularly positive or negative sentiment. These help to identify optimization opportunities via key drivers and/or detractors.
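At the analysis stage, highlighter output reduces to a tally of positive and negative marks per element; a net-sentiment score, as in the hypothetical sketch below, surfaces likely drivers and detractors.

```python
# Hypothetical highlighter clicks: (element, sentiment) pairs across respondents
from collections import defaultdict

clicks = [
    ("headline", +1), ("headline", +1), ("price_callout", -1),
    ("hero_image", +1), ("price_callout", -1), ("fine_print", -1),
]

tally = defaultdict(lambda: {"pos": 0, "neg": 0})
for element, sentiment in clicks:
    tally[element]["pos" if sentiment > 0 else "neg"] += 1

# Net sentiment per element, most positive (driver) first
for element, t in sorted(tally.items(), key=lambda kv: kv[1]["neg"] - kv[1]["pos"]):
    print(f"{element}: +{t['pos']} / -{t['neg']} (net {t['pos'] - t['neg']:+d})")
```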
Positioning and segmentation. Just as important as understanding which concept and product traits appeal to the market is understanding to whom they appeal.
Identifying key segments to target does not have to be, nor should it be, a separate effort. Data collected through quantitative concept and product testing can be dual-purposed to understand how your products are perceived, determine to whom they should be marketed and see where they stand relative to competitive offerings.
Leveraging factor analysis, cluster analysis and both discriminant and correspondence mapping techniques yields quantifiable, actionable and trackable results.
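As a rough illustration of that workflow, the sketch below reduces stand-in survey ratings to a few dimensions (using PCA as a stand-in for factor analysis) and then clusters respondents into candidate segments. It assumes scikit-learn is available and uses random data in place of real responses.

```python
# Segmentation sketch on random stand-in data; a real study would use
# actual survey ratings and validate the cluster solution.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for survey data: 200 respondents x 12 attitude/usage ratings
ratings = rng.integers(1, 8, size=(200, 12)).astype(float)

# Reduce correlated ratings to a few underlying dimensions
factors = PCA(n_components=3).fit_transform(ratings)

# Group respondents into candidate segments in the reduced space
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)

for seg in range(4):
    print(f"segment {seg}: {np.sum(segments == seg)} respondents")
```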
Pricing and demand tools. Market simulators can be developed from quantitative concept and product testing, and market sizing deduced from positioning and segmentation is especially powerful when paired with pricing models. The Van Westendorp price sensitivity meter, developed in the 1970s, aims to bracket a market price using four price-value relational questions. At what price would you:
- consider this product/service to be so inexpensive that you would question its quality and believability;
- find this product/service to be such a good value that you would definitely buy it;
- think this product/service is starting to get expensive but would still be worth considering; or
- find this product/service to be so expensive that you would not even consider buying it?
Such an approach avoids the notoriously unreliable practice of asking respondents directly how much they would pay for a product. Results of a Van Westendorp exercise can be used as inputs into market demand simulators to estimate revenue and profitability.
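Mechanically, each question yields a cumulative curve across candidate prices, and the crossing points suggest an acceptable range. The sketch below, using a small set of hypothetical responses, approximates the “optimal price point” as the crossing of the “too cheap” and “too expensive” curves; with so few respondents the result is only indicative.

```python
# Hypothetical Van Westendorp responses (one row per respondent):
# [too cheap, good value, getting expensive, too expensive]
import numpy as np

answers = np.array([
    [ 5,  8, 12, 18],
    [10, 16, 26, 36],
    [12, 18, 28, 38],
    [15, 20, 30, 40],
    [18, 24, 34, 44],
    [20, 26, 36, 46],
    [22, 28, 38, 48],
    [25, 32, 40, 55],
], dtype=float)
too_cheap, too_expensive = answers[:, 0], answers[:, 3]

grid = np.linspace(5, 55, 501)  # candidate price points

# Cumulative curves: share calling each grid price too cheap (falling)
# or too expensive (rising)
pct_too_cheap = np.array([(too_cheap >= p).mean() for p in grid])
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in grid])

# The curves' crossing approximates the "optimal price point"
opp = grid[np.argmin(np.abs(pct_too_cheap - pct_too_expensive))]
print(f"approximate optimal price point: ${opp:.2f}")
```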
For retail product demand estimates specifically, virtual retail and walk-by virtual shelf environments provide contextualization and visualization that further simulate real-world purchase decisions and help determine the role of ancillary variables, like in-store product placement among the competitive set.
Line analysis and optimization. Because brand extensions make up the majority of new products introduced to the market, understanding how those extensions draw new customers and/or cannibalize existing products’ market share is essential to developing an optimal product mix.
Total unduplicated reach and frequency (or TURF) is a traditional statistical analysis used in market research to estimate market potential. Non-cannibalized, unduplicated reach analysis is a similar technique that provides TURF-like results but better facilitates the modeling of optimal product mix strategies.
Consumers’ purchase decisions are evaluated for possible cannibalistic substitutability among options. This helps to decide what combination of products to “shelve” together and to measure the impact of trading out one variety for another. Additionally, it indicates which products are highly cannibalistic of each other and therefore not necessary to stock in order to obtain maximum unduplicated reach.
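The classic TURF calculation underlying all of this is straightforward: for each candidate lineup, count the share of respondents who would buy at least one item. The sketch below exhaustively scores two-item lineups on hypothetical purchase-intent data; the non-cannibalized variant described above layers substitution effects on top of this.

```python
# Exhaustive TURF sketch on hypothetical purchase-intent data: each
# respondent is the set of product variants he or she would buy.
from itertools import combinations

respondents = [
    {"cherry", "lime"}, {"cherry"}, {"grape", "lime"},
    {"lime"}, {"grape"}, {"cherry", "grape"}, {"peach"},
]
variants = {"cherry", "lime", "grape", "peach"}

def reach(lineup):
    """Share of respondents who would buy at least one variant in the lineup."""
    return sum(1 for r in respondents if r & set(lineup)) / len(respondents)

# Find the two-item lineup that maximizes unduplicated reach
best = max(combinations(sorted(variants), 2), key=reach)
print(f"best 2-item lineup: {best}, reach = {reach(best):.0%}")
```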
Dive into why
Now that we have an understanding of the potential tools out there, we can dive into why they aren’t being applied at all, or aren’t being applied correctly.
The tools, methods and data aren’t used at all
Most brand and product managers understand there is value in conducting research, so cases in which it is not employed are typically driven by timing, budget restrictions or budget allocations. Yet these same marketers often devote a healthy share of their budgets to advertising across a wide range of unproven or unmeasured media channels and formats.
Still others rely on gut instinct or intuition to guide their launch strategies, turning their backs on research which they don’t believe can be designed to reflect actual market feedback. Their hope is that past performance with other products or brands will offer some indication of future success.
Eliminating a testing or research phase is at best sub-optimal and at worst a recipe for failure. Much in the same way that an investor would want his or her financial portfolio to operate at the efficient frontier (maximizing return for a given level of risk), a marketer wants new product launches to maximize their potential for success given all the investment made to bring the items to market. Relatively speaking, market research is a small price to pay to help optimize that outcome.
The tools, methods and data are only partially applied
This is perhaps the most common of the three scenarios. Incomplete application can refer to the partial use of necessary tools or it can refer to a complete set of necessary research conducted among only a partial representation of the relevant population.
The first case has several potential examples but one of the most common is skipping exploratory research before a quantitative phase is pursued. Consider, for example, using quantitative research to determine market demand for a cutting-edge technology concept not yet introduced to the consumer market. In a situation like this, where consumer interviews on the subject have not previously taken place, there is a real danger that the quantitative questionnaire will not use terms and descriptions familiar to and reflective of the relevant consumer target and that the concept will be improperly communicated. Ideally, a qualitative phase would be built into the research timeline to ensure this risk is mitigated.
The second case often reflects the myopic approach some marketers take with their core customer base. While it certainly makes sense to measure core customer reactions to potential product changes or enhancements and also makes sense to determine potential cannibalization of existing products from the launch of a new concept or product, it does not make sense in either situation to ignore the potential (or non-core) market. Doing so provides only partial insight into the potential opportunity or danger associated with product development or launch.
The tools, methods and data are applied incorrectly
Survey design is critical and not to be overlooked. Response biases in surveys are commonplace, with unconscious misrepresentation resulting from surveys that are too long (apathy bias, most common in surveys greater than 20 minutes in duration), surveys that overtly reveal the host company/brand (sponsorship bias or auspices bias), surveys that rely heavily on brand recollections (recall bias – for example, the Reeves Fallacy) or surveys with poor question or scale design (order bias or habituation).
Proper method application is also important and is often interrelated with survey design. For example, a conjoint analysis can quickly translate into a long survey or large minimum sample as the number/combinations of attributes to be explored increases. In such cases, a configurator approach would likely be more suitable.
Far too often, market attractiveness and demand for new concepts and products are gathered in the absence of any discussion of cost, price or trade-offs. This too can yield biased results, most notably an unrealistic market appetite for certain features and add-ons. After all, a product comprising every potential feature is much more attractive when it is offered at no cost.
A prominent example of data being applied incorrectly is using exclusively qualitative research results (focus group feedback or in-depth interviews of a limited audience) for market projections. While use of qualitative data for future outlook is not unheard of, particularly when dealing with executive opinions, applying the Delphi method (a panel of experts questioned individually about their perceptions of future events) or polling sales forces, there is a substantial risk in applying qualitative results to something as broad as a general consumer target.
Throw us curveballs
In closing, it’s important to recognize that comprehensive application of the tools, methods and data described does not remove all risk from a new product launch. The market will always throw us curveballs in the form of competitive actions, economic conditions and other external factors (weather, disruptive technology, political restrictions, etc.). Applying them throughout the process of screening, ideation, iteration and refinement will, however, mitigate those risks and help move new product failure rates into a much lower, more acceptable range.