Editor's note: Kevin Lonnie is president of KL Communications, a Red Bank, N.J., research company. This article appeared in the September 27, 2010, edition of Quirk's e-newsletter.
For all the impact of social media on market research, the tool that has the potential to turn our industry on its ear is crowdsourcing. Whereas sentiment analysis, blog scraping and even collaborative communities exist to report what is happening in the larger blogosphere or simply to react to potential stimuli, crowdsourcing remains the one tool that could reshape the customer feedback process.
The perfect storm
Why do I feel that way? I believe the stage is set for the perfect storm.
For over a half century, researchers have gotten by with drive-by interviews. But this one-way relationship is inconsistent with our interactive millennium. Consumers are looking for meaningful dialogues with their favorite brands/interests, and a one-sided discussion isn't going to cut it.
Therefore, the catalyst for our perfect storm comes from our reactive tool set and a general public that is no longer content sitting on the sidelines. Our final spark is the organic appeal of crowdsourcing: the natural human instinct for like-minded individuals to come together over topics of great personal passion. And this is what has both CEOs and CFOs drooling. If executed correctly, crowdsourcing can be a less expensive and less risky proposition than the traditional alternative.
Provide context
Of course, being a researcher myself, I feel the need to provide context for this new phenomenon so we can see how it could potentially reshape our industry.
Crowdsourcing was coined by Jeff Howe in a June 2006 Wired magazine article, which subsequently became the basis for a New York Times business best-seller in 2008. To some degree, Howe was inspired by James Surowiecki's 2004 best-seller, The Wisdom of Crowds. Surowiecki's title was a long-awaited response to an 1841 book by Charles Mackay called Extraordinary Popular Delusions and the Madness of Crowds. Mackay posited that people acting individually can be pretty smart but that collectively they turn into a mindless herd, a theory that went largely unchallenged for over 150 years until Surowiecki's book.
In fact, Mackay's disregard for collective wisdom still resonates. Think of the bureaucratic red tape generally associated with committees. Operating under Mackay's paradigm, intelligence and insight are best achieved from the individual working alone, preferably the most qualified and most respected on the matter.
In some ways, market research operates under this principle. For quant surveys, we ask individuals to evaluate a hypothesis or react to our interrogations while working alone. As for the sample, we apply a series of screening questions to root out any potential respondent who does not meet our preconceived notion of relevance to our inquiry.
In contrast to traditional research methods, crowdsourcing is far more democratic. It is based on the ideas of open invitation and engagement. All in all, crowdsourcing embodies the principles of direct democracy, while traditional research typically makes pre-screening assumptions about whom we should engage.
Only selecting those we have predetermined to be the target audience is in direct conflict with the Diversity Trumps Ability Theorem, put forth in 2007 by Scott E. Page, a professor at the University of Michigan-Ann Arbor. Page ran numerous computer simulations pitting the best and brightest (in our terms, the early adopters, influentials and brand lovers) against the rank-and-file consumer. Surprisingly, the rank-and-file consistently solved problems at a higher rate than the Mensa group. Page theorized that the elite groups were hindered by their similarity: they tended to possess similar perspectives and problem-solving techniques. As it turns out, difficult problems are often best solved by the average Joe or Jane, who will employ solutions the best minds would never think to apply.
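Page's result is easy to see in code. The following is a minimal Python sketch in the spirit of the Hong-Page simulations, not a reproduction of Page's actual models; the landscape, heuristics and team sizes are all illustrative assumptions. Agents with different step-size heuristics climb a random landscape, and a team of the ten best solo performers is compared against ten randomly drawn, and therefore more diverse, agents.

```python
# Illustrative sketch of a "diversity trumps ability" simulation.
# Not Page's code; all parameters are assumptions for demonstration.
import random
from itertools import permutations

random.seed(42)
N = 200                                     # points on a circular landscape
landscape = [random.random() for _ in range(N)]

# Each "agent" is a heuristic: an ordered triple of step sizes it tries.
heuristics = list(permutations(range(1, 13), 3))

def climb(heuristic, start):
    """Greedy search: keep applying steps while any step improves the value."""
    pos, improved = start, True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % N
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
    return pos

def solo_ability(heuristic):
    """Average value an agent reaches, over all starting points."""
    return sum(landscape[climb(heuristic, s)] for s in range(N)) / N

def team_search(team, start):
    """Agents take turns improving the shared best point until stuck."""
    pos, improved = start, True
    while improved:
        improved = False
        for h in team:
            nxt = climb(h, pos)
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
    return pos

def team_score(team):
    return sum(landscape[team_search(team, s)] for s in range(N)) / N

sample = random.sample(heuristics, 300)              # candidate pool
ranked = sorted(sample, key=solo_ability, reverse=True)
best_team = ranked[:10]                              # ten highest-ability agents
random_team = random.sample(sample, 10)              # ten random (diverse) agents

print("best-agent team:", round(team_score(best_team), 4))
print("random team    :", round(team_score(random_team), 4))
```

In the full Hong-Page simulations the randomly drawn team typically reaches higher ground; any single run of a toy version like this one can vary.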
The bigger, the better
The Diversity Theorem represents the core of crowdsourcing. Crowdsourcing starts big; "the bigger, the better" is the hallmark of crowdsourcing success stories. By drawing in as many contributors as possible, crowdsourcing brings the necessary diversity into the co-creation process.
I feel that crowdsourcing and market research share the same intention: to allow the best ideas to flow upward. However, crowdsourcing does not rely on a presumption of qualification. The absence of screening allows an abundance of possible ideas, and the crowd itself then identifies the tributaries that hold the most potential. As that process unfolds, as was the case with the Netflix Prize, the field of candidates dwindles to just a few.
Within traditional market research sampling frames, we search for the elite consumer within the category and, by so doing, never speak to the outliers who might have made the most significant contributions.
I feel the Diversity Theorem and the ability to crowdsource could be used to achieve the next generation of MROCs (market research online communities). If there is one complaint about MROCs (in the spirit of full disclosure, the author's company specializes in MROCs), it is that the community reacts rather than creates. When we look at successful crowdsourcing ideation, the crowd has a clearly defined understanding of the challenge. The problem itself may be open-ended (e.g., improve our algorithm by 10 percent; design a T-shirt that will sell out; improve our coffee cup), but the expectation of creative contribution is crystal-clear.
The worst of the crowd
However, for every success story associated with crowdsourcing, there seems to be an equal number of failures (likely more than we've been led to believe, much as your friends are far more likely to tell you when they've won money in Vegas than when they've lost it). These failures seem to have brought out the worst of the crowd, which is why we still need to heed Mackay's warnings concerning the madness of crowds.
When the crowd is undirected, the herd mentality dominates. Because there is no end to the potential folly of crowds, we need a benevolent dictator to keep the crowd on task. Like a cyber-border collie nipping at the heels of the crowd, the benevolent dictator sets and maintains clear expectations. In other words, the community advocate of the MROC world needs to be far more authoritative in a crowdsourced MROC (CROWD-ROC). When the community's role elevates from feedback conduit to proactive idea generator, our benevolent dictator needs to keep everyone honest.
Proven crowdsourcing techniques
The next generation of insights communities (IC2) would incorporate proven crowdsourcing techniques. To be specific:
1. Utilize an open-sourced recruitment process to bring together those one-in-a-thousand Mission: Impossible agents.
2. Provide a customized interactive Web site to serve as our home base.
3. After a warm personalized welcome, the benevolent dictator must clearly lay out the problem/opportunity area in front of the crowd.
4. The benevolent dictator should prod, praise, critique and encourage members to interact with each other.
5. Implement an interactive voting process to allow the crowd to vet possible solutions until they choose our finalists (a simple sketch of this step follows the list).
6. Construct a platform to allow members to suggest iterations and improvements to the finalists.
7. Provide complete transparency to the client. As for the community's duration, I don't believe it should be open-ended; a three-month time frame should typically prove sufficient.
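To make the vetting step in item 5 concrete, here is a hypothetical Python sketch of a round-based up-vote process. The idea names, voter counts and "appeal" weights are simulated stand-ins; in a real CROWD-ROC the votes would come from the community platform.

```python
# Hypothetical sketch of crowd vetting: members up-vote candidate ideas
# in rounds, and the weakest half is cut each round until a few finalists
# remain. Vote data here is simulated, not drawn from a real community.
import random

random.seed(7)
ideas = [f"idea-{i}" for i in range(1, 17)]         # 16 community submissions
appeal = {idea: random.random() for idea in ideas}  # hidden "true" appeal

def run_round(candidates, voters=200):
    """Each voter up-votes one idea, drawn in proportion to its appeal."""
    weights = [appeal[c] for c in candidates]
    tally = {c: 0 for c in candidates}
    for _ in range(voters):
        tally[random.choices(candidates, weights=weights)[0]] += 1
    return tally

round_num = 1
while len(ideas) > 4:                               # stop at a few finalists
    tally = run_round(ideas)
    ideas = sorted(ideas, key=tally.get, reverse=True)[: len(ideas) // 2]
    print(f"round {round_num}: advancing {ideas}")
    round_num += 1
```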
Already we're seeing new Web sites and tools specifically developed to harness the potential of crowdsourcing for market research purposes. One of these is a new site called All Our Ideas. According to the Web site, "All Our Ideas is a research project to develop a new form of social data collection that combines the best features of quantitative and qualitative methods." This new project is a joint endeavor between the sociology department of Princeton University and Google. It relies on open-source software, which users are invited to help "review, remix or redesign." Currently, the site resembles a discrete-choice exercise, but with an important twist: respondents have the option of adding their own statements. The site also features a range of charts and data visualization tools to capture overarching themes. In this regard, the process is dynamic and capable of improving over time.
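For readers curious about the mechanics, the following is a rough Python approximation of the pairwise-choice flow the site describes. It is not the project's actual open-source code, and the naive win-rate score here stands in for whatever estimator the real system uses.

```python
# Rough approximation of a pairwise "wiki survey": respondents pick the
# better of two statements and may add their own, which immediately
# joins the comparison pool. Illustrative only.
import random
from collections import defaultdict

pool = ["statement A", "statement B", "statement C"]
wins = defaultdict(int)
appearances = defaultdict(int)

def next_pair():
    """Show the respondent two randomly chosen statements."""
    return random.sample(pool, 2)

def record_vote(winner, loser):
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

def add_statement(text):
    """The crowdsourcing twist: a respondent's own idea joins the pool."""
    pool.append(text)

def scores():
    """Naive score: share of pairings each statement has won so far."""
    return {s: wins[s] / appearances[s] for s in pool if appearances[s]}

# Simulated session: a vote, a new statement, another vote.
a, b = next_pair()
record_vote(a, b)                 # pretend the respondent picked the first
add_statement("statement D (added by a respondent)")
c, d = next_pair()
record_vote(c, d)
print(scores())
```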
Additionally, major corporations such as Dell, Kraft and Starbucks have developed crowdsourcing outlets for their customers and use this input to guide new product development. Starbucks alone currently lists over 20 ideas it is working on that originated from its My Starbucks Idea Web site.
See the impact
If we step outside of our own cottage industry, we can see the impact of crowdsourcing on formative discussions of all kinds.
For example, in August, Science News chronicled the reaction to a scientific paper claiming a proof that P ≠ NP. P versus NP is considered the greatest open question in computer science (for the record, P stands for polynomial time [problems that can be both solved and checked quickly] and NP stands for nondeterministic polynomial time [problems that may be hard to solve but whose solutions are easy to check]).
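The asymmetry behind the question can be shown in a few lines of code. In this illustrative Python sketch (subset sum is my choice of example problem, not one from the Science News coverage), finding a subset of numbers that hits a target requires brute-force search, while checking a proposed answer takes a single pass.

```python
# The P-vs.-NP asymmetry in miniature: solving subset sum by brute force
# takes exponential time as the list grows, but verifying a proposed
# answer (a "certificate") is quick. Illustrative example only.
from itertools import combinations

nums, target = [3, 34, 4, 12, 5, 2], 9

def solve(nums, target):
    """Brute force: try every subset -- 2^n work as n grows."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def check(certificate, nums, target):
    """Verification: one pass, the 'easy to check' half of NP."""
    return sum(certificate) == target and all(x in nums for x in certificate)

answer = solve(nums, target)                 # slow in general
print(answer, check(answer, nums, target))   # fast regardless
```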
That's when the crowd surge began. According to Science News, "The paper spurred an intense, open, Internet-based effort to understand it." In fact, the debate that followed became "enormously addictive" and quickly debunked the original paper. "Even at a conference you don't get this kind of interaction happening," says Suresh Venkatasubramanian of the University of Utah. "It was like the nerd Super Bowl."
This type of spontaneous, concerted response is the nature of organic crowdsourcing.
Ensuring success
The beauty of crowdsourcing is that it ensures success by having the public develop, and then vote for, what they want to see come to market. How safe a bet is that! And isn't the primary goal of market research to provide our clients with the confidence that they are making the best possible decision?
Crowdsourcing has the potential to hand researchers their golden ticket to a seat at the decision-making table. We may no longer be testing the product; we'll be the ones telling management what the next great idea is.