What are you getting at?
Editor's note: Mark A. Wheeler is an executive director at TVG Marketing Research and Consulting, West Norriton, Pa. The author wishes to thank Schlesinger Associates for providing fielding services and recruiting free of charge to support the original research described in this article. He also wishes to thank the physician respondents who agreed to participate for no incentive.
Many of our clients, as well as the customers of our clients, are routinely called upon to make decisions under conditions of uncertainty. Oftentimes, these decisions are made quickly and with only a limited amount of relevant information. Decision-making under uncertainty may be particularly relevant to health care (although it applies to clients in any industry). Physicians frequently have to make choices about patient management on the basis of only brief discussions with patients whom they may have barely met. Further, their knowledge and understanding of the prescription drugs in their armamentarium is often incomplete or based upon faulty recollections. Figure 1 describes some of the common judgments that doctors routinely make in the process of seeing and treating their patients.
There is some evidence of a trend toward even faster – and more error-prone – physician decision-making. In a recent journal article (Michtalik et al., 2013), researchers surveyed over 500 internists and hospitalists. Overall, the physicians acknowledged that they have been taking on increasingly heavy patient loads, leaving them less time with each individual patient. They were also quite clear about the implications: many admitted that they now have less-than-optimal time to fully evaluate each patient and that their rate of medical errors has increased. Many claimed that their workloads have occasionally or often left them unable to discuss treatment options or answer patients' questions. Physicians also frequently admitted that they have ordered unnecessary tests or procedures because they didn't have enough time to assess patients in person. As one might expect, their responses pointed to an overall decline in patient satisfaction.
The findings about physicians' increasing workloads are probably unsurprising, although still discouraging. There is, however, a relatively new field that speaks directly to this kind of quick decision-making. Behavioral economics (BE) is, among other things, the study of how we make decisions under conditions of uncertainty. Thought leaders in BE, particularly Nobel Laureate Daniel Kahneman (see his influential book Thinking, Fast and Slow), have documented dozens of heuristics, or shortcuts, that guide our choices, especially when we do not have the ability or time to carefully analyze all of the options. (A list of many, but not nearly all, of these shortcuts is provided in Figure 2.) These problem-solving strategies are reasonable to the extent that they often lead to good, or at least satisfactory, solutions. They are also economical, in the sense that a person can reach a conclusion with only minimal strain. The shortcuts are said to be driven by a mode of thought called System 1, which works very quickly and requires little or no effort. By contrast, we use System 2 when we actively, effortfully and consciously think through a decision. The distinction between Systems 1 and 2 (summarized in Figure 3) is supported by many years of research in cognitive psychology.
As qualitative researchers and consultants, we are in a unique position to help our clients use BE principles. When we understand the ways that System 1 responds to situations of uncertainty, we can help our clients design better marketing research and also help them present their promotional data and arguments to physicians in ways that build upon BE principles. In my own research, I have been able to apply BE principles and have found that physicians respond more positively to messages that were written to take advantage of decision-making shortcuts and heuristics. Although I cannot share the specific examples that I used (for reasons of client confidentiality), I have conducted some original research (i.e., some applied psychology experiments) to demonstrate the applicability of BE principles to marketing and marketing research, specifically concerning physician decision-making and prescribing.
Research respondents were 84 primary care physicians (PCPs). All were internists, family practitioners or general practitioners. They were screened to ensure that they were board-certified, that they see a minimum of 150 patients each month and that they have been in practice for two to 25 years.
Respondents completed a brief Internet survey asking them to first consider hypothetical scenarios and then decide what their attitudes or behaviors would be in those scenarios. Each scenario was presented to each physician in one of two ways. Although the two versions of each scenario were technically the same (i.e., the underlying decision was identical), they were worded differently, and the differences in wording were designed to take advantage of System 1 heuristics or shortcuts. Examples of some of the relevant heuristics, and descriptions of each experiment, are below.
Serves as an anchor
When people are asked to estimate an answer, they are often strongly influenced by any candidate answer given to them, even if that answer is not plausible. What typically happens, and what has been confirmed in dozens of published studies, is that the suggested answer serves as an anchor – people stay close to that number or opinion and make only a small adjustment when coming up with their own answer. Kahneman provided an example in Thinking, Fast and Slow. Visitors to the San Francisco Exploratorium were asked one of the following two questions:
Is the height of the tallest redwood tree more or less than 1,200 feet?
Is the height of the tallest redwood tree more or less than 180 feet?
Although the answers to these two questions (less and more, respectively) were quite easy, the key question came next. The visitors were then asked to estimate the height of the tallest redwood. People who had earlier been given the high anchor (1,200 feet) estimated an average height of 844 feet. Those initially given the low anchor estimated that the height of the tallest redwood was 282 feet. In each case, the (clearly incorrect) anchor played a massive role in people’s estimates.
In the new research with PCPs, each physician read the scenario described in Figure 4. They imagined that a new SSRI (selective serotonin reuptake inhibitor, like Paxil or Prozac) patient wants to know the percentage likelihood of nausea as a side effect of medication. (The true answer is probably somewhere between 15 percent and 25 percent, depending upon the brand and the dose.) Half of the physicians read that a patient’s friend had told the patient that about 3 percent of new SSRI patients experience nausea. The other half of the physicians read a similar story, with an estimate of 90 percent, again from the friend.
As Figure 4 shows, this unreliable anchor was highly influential – PCPs gave an average answer of 29 percent after hearing the higher anchor, compared to only 12 percent following the lower anchor. In fact, of the 42 physicians reading the lower anchor, 29 gave an estimate below 10 percent, compared to only four of the 42 who read the higher anchor. The doctors were cognitively lazy – they used the percentages given to them by the unnamed friend to help produce their answers, even though this is an area squarely within their expertise. To generalize from this example, whenever patients (or colleagues or reps or marketing researchers) suggest potential answers to doctors, those answers have an exaggerated influence on the way doctors will think and the way they will ultimately behave.
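For readers who want a sense of how unlikely such a lopsided split would be by chance, the sketch below runs a simple Fisher's exact test on the counts quoted above. It is purely illustrative and was not part of the original study's analysis; it assumes Python with the scipy library installed.

```python
# Illustrative check (not part of the original study): compare how many
# physicians in each anchor group estimated the nausea rate below 10 percent.
from scipy.stats import fisher_exact

low_anchor = [29, 42 - 29]   # low-anchor group: [below 10%, 10% or higher]
high_anchor = [4, 42 - 4]    # high-anchor group: same split

odds_ratio, p_value = fisher_exact([low_anchor, high_anchor])
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")
# A p-value far below .05 suggests the anchor, rather than chance,
# drove the difference between the two groups of physicians.
```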
Arrive at different decisions
Another cognitive heuristic or shortcut that has been thoroughly documented within BE involves decision-framing, especially the framing of gains and losses. Simply put, when people think about the choices they should make, they frequently arrive at different decisions depending upon whether the focus is upon what they have to gain or what they have to lose.
Kahneman and his colleagues discovered that when we think about what we have to gain, we are risk-averse – overall, we prefer to take a sure, if smaller, gain rather than risk it for the possibility of an even larger gain. The opportunity to gain something is highly pleasurable, and even minor gains evoke pleasant feelings in most people. By contrast, when we think about what we might lose, we become risk-seeking. We hate to lose anything and will gamble on the possibility of a big loss in order to avoid the pain of a sure, smaller loss. In general, the fear of losing something (e.g., money, status, health) is emotionally intense and provokes strong negative feelings. This kind of emotional framing can be a powerful driver of behavior – we act, sometimes dramatically, to ensure that we don't suffer a loss.
Two experiments were conducted to demonstrate that physician decisions can be driven by the way a choice is framed for them. The first (inspired by a classic experiment published by Tversky and Kahneman in 1981) asked doctors to imagine that they could treat a teenage girl with acne with either of two drugs, Medicine A or Medicine B. After reading the general scenario (described in Figure 5), physicians read about the two medicines in either a gain frame or a loss frame and were then asked to choose their preferred medicine.
Their choices revealed a striking consistency with earlier findings from BE. When the doctors thought about gains (i.e., how much of the acne would be cleared), 71 percent selected the safer option, Medicine A. By contrast, when they dwelled on the loss or "pain" of having acne, only 40 percent went with Medicine A, even though the two frames describe identical choices.
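To see why the two frames in an experiment like this describe the same underlying choice, consider the small worked sketch below. The numbers are hypothetical, modeled on the structure of the classic Tversky and Kahneman problem rather than on the actual Figure 5 wording.

```python
# Hypothetical gain/loss framing of one choice (illustrative numbers only,
# not the actual Figure 5 scenario). Suppose a patient has 90 acne lesions.
lesions = 90

# Medicine A: clears 60 lesions for certain.
# Medicine B: two-thirds chance of clearing all 90 lesions, one-third chance of none.
expected_cleared_a = 60
expected_cleared_b = (2 / 3) * lesions + (1 / 3) * 0

# The gain frame talks about lesions cleared; the loss frame about lesions remaining.
expected_remaining_a = lesions - expected_cleared_a
expected_remaining_b = lesions - expected_cleared_b

print(f"Expected lesions cleared:   A = {expected_cleared_a}, B = {expected_cleared_b:.0f}")
print(f"Expected lesions remaining: A = {expected_remaining_a}, B = {expected_remaining_b:.0f}")
# Both frames yield identical expected outcomes, yet the gain frame nudges
# choosers toward the sure thing (A) and the loss frame toward the gamble (B).
```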
Emotional power
In a demonstration of the aversive emotional power of negative frames, the physicians were also asked to read a brief hypothetical scenario about a new medicine for asthma (Figure 6). The new drug was described as effective but also sometimes accompanied by the serious side effect of exacerbations (e.g., shortness of breath, wheezing, chest tightness). Half of the doctors learned about the clinical data via a positive frame – they read about the large majority of patients who used the medicine without exacerbations. The other doctors read the negative, or loss, frame, which mentioned the small percentage hospitalized with exacerbations.
Once again, results showed that physicians are highly susceptible to framing effects, even in their own areas of expertise. When asked to rate their levels of enthusiasm for using the new medicine, they were significantly more optimistic when the information had been presented in the positive frame. The effect held even though it would have been trivially easy to convert the positive frame into the negative one (e.g., to reason that the 95 percent who were not hospitalized translates to 5 percent who were). Reading and thinking about the small percentage of asthma exacerbations in the loss frame was concerning, and possibly frightening, to the physicians – significantly more so than thinking about the majority of patients who avoided the hospital. The way the data were presented drove the drug's appeal, or lack thereof.
Heavily susceptible
As demonstrated in these three quick experiments, physicians are susceptible to the kinds of mental shortcuts that characterize System 1 thinking. These effects were readily apparent in domains where the physicians have plenty of experience. While none of the decisions that they made were technically wrong or irrational, they were heavily susceptible to variations in the wording of the questions.
As researchers and consultants, we can incorporate BE into our armamentarium of skills in a number of ways. To begin with, there are many situations in which we should actively ensure that System 1 biases are not unduly influencing our findings. The first, and perhaps most obvious, place is in our conversations with research respondents. The questions we pose in our interviews are prime areas where we may unintentionally anchor our respondents toward implausible or exaggerated answers. Those of us working in the pharmaceutical space are highly familiar with the kinds of side-effect and adverse-event data that routinely accompany new medicines. While participating in clinical trials, patients are often asked to answer multiple questions about side effects such as, "Did the drug give you nausea?" Simply considering such a question typically stimulates a non-trivial percentage of patients to report that they have the symptom. Lest you believe that the symptom may be a real effect of the investigational drug, it is helpful to know that patients in the placebo groups frequently report these symptoms at a comparable rate, even when the symptom (say, dizziness) is not a highly plausible result of taking the medicine.
Through experience, physicians have learned to shrug off these kinds of results from clinical trials, but they are still vulnerable to anchoring and suggestibility in marketing research interviews. Of course, we all know not to ask leading questions, but research in BE and cognitive psychology tells us that even the gentlest hint of a nudge, or anchor, can affect the answers we receive. Consider even a fairly mundane question such as, "How much do you like this product?" or "How well did that work for you?" Even the word "like" can positively anchor the responses here – a safer probe might be "To what extent do you like, or dislike, this product?" Learnings from BE make clear that it is worth taking a second, and third, look at our discussion guides to eliminate words and phrases that anchor our respondents toward an attitude or answer.
An even more compelling – and perhaps more controversial – application of BE comes when we intentionally frame scenarios for our customers in ways that take advantage of System 1 principles. Put differently, we want our end customers to think and act favorably toward our product and, by implication, to think and act relatively less favorably toward competitors. BE shows us how to get there.
In recent months, I have conducted research with clients to determine the strongest promotional messages for a new drug. During the consulting and planning phases of the research, we worked together to identify some promotional claims that could demonstrate the power of framing. (Part of my motivation here was to help the client and another part was to demonstrate the utility of BE in marketing research to the client – and also to boost the perception of my own value.) We came up with some product attributes that could be described in terms of comparisons, gains or losses. Two examples are shown in Figure 7. (For reasons of confidentiality, the product’s name and treatment area have been removed from the examples.)
Throughout the research, physicians rated and discussed the merits of a number of different supporting claims. Two of the efficacy claims (which were viewed neither concurrently nor consecutively) are shown in the top half of the figure. The second statement described a small but significant gain in efficacy attributable to the new drug (as a reminder, people are happy to accept even small gains and are averse to risking or losing them). The first statement also accurately described the data but without the reference to any gain in efficacy. Physicians overwhelmingly preferred the second statement as more impressive even though, side by side, it is difficult to rationally argue that there is any meaningful difference between the two. In fact, there isn't any underlying difference, since the sentences described the identical data comparison. It was the framing that drove their preferences.
Similarly, in the lower comparison, the taste of the new drug is described in two different ways. All physicians in the research were told that each patient in the clinical trials had rated the taste as excellent, good, poor or bad. The first statement presents the results in the positive frame, the second in the negative frame. While it should have been easy for physicians to translate each statement into the other through simple subtraction, the positively-framed sentence was considerably more impactful. Even the simple reference to "poor" or "bad" findings led doctors to form a negative or cautious opinion about the medicine's taste and appeal.
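As a small illustration of how such frames can be generated mechanically from the same data, the sketch below turns one hypothetical set of taste ratings into a positive-frame and a negative-frame claim. The counts and wording are invented for illustration and are not the client's actual data.

```python
# Build a positive-frame and a negative-frame claim from the same hypothetical
# taste ratings (counts and wording are invented for illustration).
ratings = {"excellent": 41, "good": 37, "poor": 14, "bad": 8}
total = sum(ratings.values())

pct_favorable = 100 * (ratings["excellent"] + ratings["good"]) / total
pct_unfavorable = 100 - pct_favorable  # the complementary, negative frame

positive_frame = f"{pct_favorable:.0f}% of patients rated the taste as excellent or good."
negative_frame = f"{pct_unfavorable:.0f}% of patients rated the taste as poor or bad."

print(positive_frame)
print(negative_frame)
# Both sentences summarize the identical distribution; only the frame differs.
```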
We can also help our clients apply anchoring to promotion. One promising avenue in the pharmaceutical arena comes via the rep detail. Although the content of a drug rep’s sales message is tightly regulated by the FDA, it is still possible for reps to use BE to boost the appeal of their product. A couple of examples are in Figure 8. In the top half of the figure is a script that can set an impressive anchor for a product (i.e., the doctor’s perceived percentage of satisfied patients will be anchored up in the direction of 95 percent). It is similarly possible, albeit less friendly, to establish a negative anchor for the competition (as in the bottom half of the figure).
Powerful and flexible tool
Ultimately, BE can be an enormously powerful and flexible tool in our toolboxes; the examples described in this article barely scratch the surface. Researchers and consultants should dive into the BE literature, especially Thinking, Fast and Slow, to discover additional mental heuristics and shortcuts that drive decision-making and behaviors. By aligning our research and outputs with the thought processes of our end consumers, we can deliver additional value to our clients.
References
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Michtalik, H.J., Yeh, H-C., Pronovost, P.J., and Brotman, D.J. (2013). JAMA Internal Medicine, 173(5), 375-377.
Tversky, A., and Kahneman, D. (1981). Science, 211, 453-458.