Editor's note: James A. Rohde is consultant and founder of James A. Rohde Consulting, a Pittsburgh research firm. He can be reached at 412-589-9572 or at james.rohde@jamesarohde.com. This article appeared in the May 21, 2012, edition of Quirk's e-newsletter.
I think most of us can agree that our primary purpose, as researchers, is to uncover insights about a given topic, so it's understandable that we spend most of our time refining our methodological capabilities and diving deeper and deeper into advanced analytics. But more often than not, we are not the ones deciding whether our findings are implemented into strategy. So how much attention should we devote to cultivating our presentation skills? How much responsibility do we have to convince decision makers that our recommendations are relevant?
While researchers tend to gauge the quality of a project by the methodology, decision makers tend to make their judgments based on what they can accomplish with the information. (Let's also not forget that it's usually the decision makers who set the budgets for our research.) Therefore, once the findings of the research are clearly communicated, the next step is ensuring the research's relevance by articulating its implications for decision makers.
If we truly want to be an essential part of the decision-making process and shape strategy, the implications of our findings need to take precedence over the findings alone. Regardless of the level or purity of the insights we uncover, our ability to articulate our work and its value is the only way we can keep research relevant.
So, if we expect to be consulted on big decisions, we have to make sure our research is:
- heard;
- understood; and
- placed in context.
Be heard (aka engage the audience)
The only way a decision maker is going to use research in building a strategy is if they actually hear the results. I'm not talking about getting them to read the deck or even getting them in the room. I am talking about keeping everybody off of their BlackBerrys while you're speaking and presenting a deck that doesn't make the attendees feel like they're dissecting Descartes.
Quick tip: Crosstabs make most non-researchers need a nap. At best they are somewhat dull; at worst they are confusing and become the focus of the entire presentation.
PowerPoint can turn a solid presentation into a bit of a dangerous balancing act, but it's generally the preferred method of delivery. So when creating a slide, keep two things in mind: 1) as researchers, we find things more interesting than non-researchers do and 2) we are typically poor entertainers.
This brings us to two guidelines for any given slide.
Clarity rules. Each slide should have a point. If you have to explain why the slide is important, it belongs in the appendix.
Everything has a purpose. Everything in the slide should support the main point. If it doesn't, it doesn't belong on the slide.
Be understood (aka communicate implications; support with research results)
Always try to remember why we went through all the trouble of executing the research. Somebody had a problem or question and needed insight to inform a decision. All the questions and attributes were only included to address the decision at hand. If you find that half of the questions did not inform the issue then it is fair to say that the results to those questions can live happily in the appendix. Once the research has been completed, focus on conveying what the findings mean to those using the results and support those implications only with the relevant portions of the research.
Don't leave it to the audience to connect the dots - do it for them.
If we're testing two products and 70 percent say they would buy Product A while 73 percent say the same of Product B, the issue at hand is determining which product deserves wider distribution. In a room of non-researchers, we could present the research and let the chips fall where they may, but are non-researchers going to weigh that three-point gap against the lowercase n under your bar chart? We would hope so but why take the chance? If you have a sample of 100, explain that the two products are statistically equivalent and don't let people walk away with the wrong idea.
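The arithmetic behind that claim is worth spelling out. A minimal sketch, assuming each product was rated by an independent sample of 100 and using a standard pooled two-proportion z-test (the article doesn't name a specific test; this is one conventional choice):

```python
import math

def two_proportion_z(p1, p2, n1, n2):
    """Pooled two-proportion z-statistic for comparing two sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 70% would buy Product A, 73% would buy Product B, n = 100 each
z = two_proportion_z(0.70, 0.73, 100, 100)
# |z| is about 0.47, far below the 1.96 cutoff for 95% confidence:
# the three-point gap between A and B is indistinguishable from noise.
```

At these sample sizes the gap would need to be roughly four times larger before it cleared the conventional significance threshold, which is exactly why the presenter, not the audience, should draw that conclusion.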
Presenting the implications (e.g., there is no expected difference in demand between launching Product A vs. B) is a great way to ensure that the audience understands what the research is saying and will also raise questions, which is good for engagement. However, you do not want to spend your entire time addressing questions on the first implication, nor do you want those reading your deck to be trying to hunt down how you got to your conclusion. Each implication should be presented on the same slide as the supporting research results. It's frustrating and time-consuming to search for a bar chart that is on slide 10 if you first introduce the concept on the third slide.
Plus, remember that it will not always be possible to present results in person and, even if it were, most reports are passed on to other people and departments. In many cases, it's not even the full reports that are passed around but individual slides pertaining to specific issues. To ensure that the research is used appropriately, every deck and every slide within that deck should be able to stand on its own.
To summarize, plan your presentation so that every slide is an implication; each implication is presented with the supporting research; and every slide can stand alone.
Place it in context (aka where the research ends)
I'm not sure anybody is more fearful of taking a data point out of context than a researcher. Most of us are familiar with that slight edge of panic when somebody asks for just that one figure with "no need for explanation." The thing is, there is always need for explanation, not because no one else understands what that number means but because few understand what that figure does not mean.
There will always be limitations to any single research initiative. Nobody knows those limits better than a researcher and they have to be communicated, especially when speaking to a non-research audience.
A classic example comes from basic product testing. If we were to present a product to a sample and 70 percent indicate that they would purchase that product, what can we say about that product? Well, depending on what else we asked, we might be able to say a lot. But one thing we cannot say is that 70 percent of customers would buy the product.
Let's try a less-obvious example from the same product testing: Comparing Product A to B, which one had a higher intent to buy? This is a pretty straightforward request that, at a minimum, places the data points in relative terms and avoids absolutes. However, without the full explanation we can get into trouble. Does our sample adequately reflect the total customer base or was this part of a study that only looked at one segment? The proper explanation could have a drastic impact on how the A-vs.-B comparison is understood and how the research is used.
Presenting implications
All of this is really just another reason for researchers to remember to focus on presenting implications instead of just presenting research. By interpreting our own work, we help reduce the risk of data being used improperly. After all, if decisions are made based on what somebody thinks the research says and they're wrong, in the end it is only the research that will take the fall.