Conversations with corporate researchers
John Lo
Director, Marketing Intelligence, University of British Columbia
Who has influenced your career the most?
Years ago, one of my former VPs said to me, in response to a situation in which I was noticeably frustrated, “John, you’ll catch more flies with honey.” In the moment, I took it as him merely offering an alternative perspective. It wasn’t until later that I realized the wisdom in those words. This was just one example of his mentorship, which helped me practice empathetic listening and appreciate the importance of building relationships to get things done. I still subscribe to this motto.
How did your role as an instructor of marketing research affect the way you look at mentorship within the MR industry today?
Team-based projects that investigate actual business needs and challenges are a common component of marketing research pedagogy and curriculum. I’ve supervised such projects both as an instructor and as a client. Through those experiences I’ve learned that it’s important to recognize where individual team members are, both in terms of their skill levels and their general aspirations. Not every student wants to pursue a career in marketing research and not every team member wants to stay a researcher. I think good mentorship is coaching mentees for the roles they want, not the ones they are in.
As the director of marketing intelligence at the University of British Columbia, what tips would you have for a university that is just beginning to incorporate student or alumni research?
Universities need to take the view that students are future alumni – they are often treated as separate segments or audiences for the university, but in reality they are points on a single academic and life journey. Students’ experiences throughout their undergraduate programs influence their decision to return as graduate students and/or to give back to the university in money, time or mentorship. So if institutions really want to understand alumni engagement, they should start by evaluating the impact of key moments of truth for students. Unfortunately, at most universities, research on students and research on alumni are managed by separate teams in separate departments with little visibility into what the other is doing.
Second, many universities are publicly funded institutions, with corresponding accountabilities. As such, marketing research programs often need to include public consultation components. That is, depending on the topic or issue, traditional statistics-based arguments about what constitutes reliable research may not be sufficient. And in some scenarios, it is as important to get input from parents, faculty and donors as it is to hear from students and alumni.
What do you think is the biggest misunderstanding within the MR industry regarding panel research?
At the University of British Columbia, I helped to establish one of the first higher education insight communities. And over the years, I’ve had the pleasure of speaking with many other universities that are contemplating setting up their own insight community. I would say that the most frequent caution I share is not to look at panel research as a survey project. Rather, research panels are living entities that need to be fed and nurtured. Not only are resources needed to sustain them, but new engagement topics must be sourced constantly – and it is best that these topics come from across the organization. In short, when you start a panel research program, it is a long-term proposition for both the organization and the panel members you recruit.
Panel research is not survey research. Insight communities offer a two-way relationship with panelists, whereas traditional survey research is rather one-directional – typically an “ask and answer” type of interaction. Best practice panel research entails closing the loop for respondents on survey outcomes, creating a sense of connection and ownership over the initiative or strategy being researched. In this way, insight communities have potential to be brand-defining touchpoints for organizations, provided that they are managed as such.
Lastly, there are valid concerns that opt-in panelists are not representative of broader populations because they are a self-selected respondent sample and may be more positively biased in their perceptions and opinions. It is true that panelists are self-selected, but this is the reality of all online research. Response bias in data collected online ought to be corrected as well as possible through statistical weighting procedures. Still, in my experience, and occasionally through deliberate testing, I have found that data from insight communities are fully comparable to data from full-population surveys. In many cases, I have even found that key performance data – like overall ratings and likelihood-to-recommend scores – from panel surveys were statistically identical to those collected under random response conditions. Behavioral and engagement data do tend to skew more favorable.