You say evolution, I say devolution
Editor’s note: Harvey Lauer is president of American Sports Data, Inc., a Cortlandt Manor, N.Y., research firm. While the examples cited in this article involve measuring rates of sports participation, the broader discussion certainly applies to the industry as a whole.
Since its inception during the first half of the 20th century, marketing research data collection has undergone a marked evolution - some say devolution. It has been argued that in each successive phase of this downward spiral, methodological purity has been sacrificed for lower cost and greater convenience.
In the beginning, there was the personal interview - the proto-tool of modern survey research. During an innocent and pristine era of only a few decades past, legions of interviewers were dispatched on foot to conduct often lengthy, sometimes very sensitive door-to-door interviews. Americans were “taken in” by the privilege of being interviewed, and interviewers were welcomed into living rooms across the nation; the annoyance, cynicism and refusal rates would come later. In 2005, the golden age of in-person research is long gone, and while in isolated redoubts of virtue a few grizzled holdouts defend this ancient ritual, the face-to-face interview is nearly extinct.
Telephone research can be traced to the 1930s, but such early usage, according to Gad Nathan of Hebrew University, was to augment other forms of research. The notorious Literary Digest survey of 1936 (which incorrectly predicted a landslide victory for Landon over Roosevelt) is often wrongly attributed to the telephone. Although the sample was selected from a list of telephone owners, the ill-fated study was actually conducted by mail.
With the ubiquity of the telephone came the phone survey - a medium that would dramatically reduce the cost of research. In 1970, U.S. household telephone penetration reached 88 percent - sufficient to appease journeymen, but not purists. By the 1980s, the heir-apparent to personal interviewing - righteously cloaked in the robe of random-digit dialing - had effectively dealt its progenitor the coup de grace. Telephone research had subverted all the defenses of the face-to-face method: it was much cheaper, and remarkably - especially in light of its present troubles and rapid decay - the telephone was achieving higher cooperation rates than its predecessor! The chief disadvantage of the telephone interview was survey length; a live interrogator could remain in someone’s living room for an hour-and-a-half, but most telephone respondents would not tolerate impositions in excess of 30 minutes.
Timelines are murky and overlapping, but at some point in the power struggle, the heir was forced to abdicate. Researchers ingeniously shifted the burden of data collection to the respondent, who was persuaded to fill out questionnaires at the kitchen table and dutifully return them to the company, which - having eliminated the expense of a human interviewer - could now conduct research at a fraction of even the telephone’s cost. Thus was the consumer mail panel born, circa 1946. Exact birthdays aside, it is safe to say that consumer mail panels (very different from “cold” mail surveys) came of age after the heyday of telephone research. It may also be safe to say that in 2005 - despite the encroachment of the Internet and a small annual erosion of response rates - consumer mail panels are only slightly past their prime...but the cascade has begun.
The fourth generation of this pedigree is online research. Cheaper than even the mail panel (entrepreneurs have repealed interview labor, data entry, printing and postage costs!), this state-of-the-art genre is a bonanza so huge that it threatens traditional research standards and propriety. Like all other data collection methodologies, online research has its place; but because of severe structural flaws and an Internet usage rate below 70 percent, this futuristic technique cannot yet lay claim to nationally projectable samples. Nonetheless, we see more cavalier and more frequent references to “national surveys,” where unfastidious researchers - innocently abetted by journalists - omit the caveat of non-projectability, inflicting a gigantic hoax on the American public. Astonishingly, the perpetrators are seldom if ever challenged. Still worse, members of online research panels - lured and continuously motivated by prizes and economic incentives - are highly self-selected: atypical even of the Internet users they purport to represent.
Unduly harsh critics add that this instantaneous, speed-of-light technique encourages sloppiness, methodological looseness and impropriety - backsliding trends in survey research that parallel the rise of Generation Y, and as a more general proposition, the erosion of American cultural and business values.
In an abrupt, manic swing toward the virtues of Internet research, we can also speak of the method’s eventual dominance: it is by far the most agreeable survey-taking experience, and its numerous other advantages practically ensure that, long before it deserves the honor, online panel research will receive the imprimatur of legitimacy. Just as the telephone became acceptable to generalists once household penetration reached 88 percent in 1970, so may the online method one day achieve universal respectability - but not before it reaches the 90 percent penetration level, and certainly not before its infrastructure is overhauled. But as a leading-edge tool for the conduct of “boutique” research not requiring projectability (individual health club surveys, for example), online research has already proved invaluable.
Personal interviewing
The personal interview belongs to a recent but already mythic era of American history. Only four or five decades removed, this near-perfect slice of Americana was, by nature, very friendly to an immature marketing research industry. Parents, bosses and teachers were feared, the work ethic was honored, civility was respected; and by today’s standards, honesty remained a cherished value. The innocent culture that required suitably dressed families to regularly dine together also insisted that retail customers be fawned over; and by complex extension, naïve Americans somehow condoned the mass invasion of their living rooms by armies of hired personal interviewers.
These were the salad days of research, when communities were safe, homes accessible, respondents agreeable. People were flattered by the opportunity to serve as research subjects, and it may have even been possible to select and interview a true random population sample - where all people in the U.S. had an equal chance of being heard. But in 2005, “probability” samples are lost to antiquity and research mythology.
In a classical face-to-face interview, for example, people may have been a little embarrassed about sedentary behavior; chances are the method resulted in some exaggeration of active sports and fitness participation levels. And when under the gun of a live interviewer, people undoubtedly exaggerated the frequency of sports participation - how many days per year they engaged in various sports and activities, and so on.
But all these questions are moot, because the personal interview is now an archaic curiosity - fossilized by prohibitive cost, security fears, gated communities, high-rise buildings, inner-city inaccessibility, working women, harried lifestyles, and more recently, a heightened sensitivity to personal intrusion.
Telephone interviewing
Telephone interviews could be conducted at a fraction of the former cost, and while they eventually became respectable, they had to be brief: a survey of more than 30 minutes ran the risk of being aborted. Another disadvantage of the telephone was that for questions demanding careful thought, contemplation (or even simple calculation!), respondents were impossibly rushed, and could not pause for a moment as they might during a personal interview. A mail or online questionnaire, by contrast, allows unlimited meditation.
Although people are more willing to discuss sensitive matters on the phone than in person, there is still inhibition and reticence. Even on the phone, the influence and pressure of a live interviewer is still palpable - a limitation of both telephone and personal interviewing to which the private, anonymous mail or Internet survey is immune.
In recent years, cell phones, voicemail, multiple land lines, telemarketing “research” scams, no-call lists, working women and general time constraints have all conspired against the telephone method. With the possible exception of extravagantly-funded government research featuring unlimited callbacks, a net completion rate of 30 percent may be the norm. This helps explain why, after years of slower depredation by consumer mail panels, the telephone sector is being smashed to pieces by the juggernaut of online research.
All things considered, telephone surveying is not the methodology of choice for sports participation. Interviews are too brief to cover a wide range of sports, and when under the extreme pressure of a phone interview, people become unnerved and susceptible to all manner of memory distortion.
Consumer mail panel research
The next watershed was the consumer mail panel. In this paradigm shift, literally millions of cooperative respondents (drawn from the ranks of ordinary households) were recruited to answer self-administered mail questionnaires on a variety of subjects generally (but not exclusively) related to consumer products and marketing. As of 2005, the three major consumer mail panel operators in the U.S. - TNS-NFO, NPD and Synovate - have aggregate offline panels totaling nearly two million American households. But this number is withering before the onslaught of Internet methodology.
While panelists are usually given incentives for longer, more tedious or unusual questionnaires, financial gain is not a main motive for membership - a monumental distinction between the mail method and its online successor. Mail panelists are generally “product-oriented,” enjoying free samples of new, not-yet-released offerings, or otherwise being on the leading edge of consumer marketing research. When compared with non-panel households, they are also more educated, literate and upscale. Roughly 5 percent of American households can be recruited for consumer mail panels, but - quite fortuitously - this apparent lack of representation does not disqualify the methodology from producing valid, national projections of many types of consumer behavior. Indeed, the well-known and highly respected Consumer Confidence Survey is based on mail panel methodology. Pre-recruited consumer mail panels generally yield 50-70 percent response rates, compared with only 5-20 percent for “cold” mail surveys of the general population…even when the latter are seeded with generous incentives.
Detractors of the method insist that a swath of panel members cannot represent a true sample of the U.S. population. On its face, this is a convincing argument; upon greater magnification, it becomes spurious. The ideal of survey research is a miniature replica of the larger reality; but academic purists, untutored research salesmen hawking competitive methodologies and other fuzzy thinkers incorrectly believe that to execute a valid survey, the characteristics of a sample must match those of the larger universe. The mail panel member, they argue, is too “queer a duck” for this purpose.
Ideally, it is desirable for a sample to clone the larger universe, but this is not an absolute precondition - if the measured phenomena (in this case, sports participation behavior) are found to be similar or identical in both groups. Very simply, certain characteristics of a sample may be different from those in the universe it tries to mimic; but despite these differences, both populations exhibit similar behaviors. After the usual sample balancing weights are applied, the rates of sports and fitness participation behavior among panel members are similar to those in non-panel households - a concordance first observed in the 1980s between our firm’s mail panel surveys and the celebrated Gallup poll. While panel and non-panel households may differ on a single issue - the willingness to join a mail panel - they can be (and are) highly compatible in other attitudes and behaviors. Sample bias need not translate to results bias; and differences in panel member composition notwithstanding, consumer mail panels are a perfectly viable methodology for sports participation research. In fact, when advantages of all methods are weighed and credited, consumer mail panel research could even emerge as the preferred methodology. But preferences of 2005 have no claim to immortality.
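To make the argument concrete, consider a minimal, purely illustrative simulation - the join rate and participation rate below are invented round numbers, not ASD or Gallup figures. It simply shows that when the measured behavior is the same among panel joiners and non-joiners, a highly self-selected 5 percent slice still reproduces the universe estimate:

```python
# Toy illustration (invented numbers) of why sample bias need not imply
# results bias: panel joiners are a small, self-selected minority, yet if
# their sports participation rate matches that of non-joiners, the panel's
# estimate tracks the full universe.

import random
random.seed(7)

POPULATION = 100_000
JOIN_RATE = 0.05            # roughly 5 percent of households will join a panel
PARTICIPATION_RATE = 0.22   # assumed identical for joiners and non-joiners

universe = [
    {
        "joins_panel": random.random() < JOIN_RATE,
        "plays_sports": random.random() < PARTICIPATION_RATE,
    }
    for _ in range(POPULATION)
]

panel = [household for household in universe if household["joins_panel"]]

def participation_rate(group):
    return sum(h["plays_sports"] for h in group) / len(group)

print(f"universe participation: {participation_rate(universe):.3f}")
print(f"panel participation:    {participation_rate(panel):.3f}")
# The two figures agree closely - despite the panel being a self-selected
# 5 percent slice - because the behavior itself does not differ by group.
```

The sketch also shows where the logic breaks: the moment participation rates differ between joiners and non-joiners, the panel estimate drifts - which is precisely the concern raised below about prize-motivated online panels.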
The Achilles’ heel of mail research is a lack of control over the respondent. A live interviewer - whether on the phone or in the flesh - ensures that the proper household member responds to (and understands) each and every question. In a mail survey of any kind, respondents - whether from disinterest, fatigue, mischief or laziness - may, quite whimsically, skip items in a long battery, ignore parts of a question, or simply answer a survey in a sloppy, haphazard manner. Deliberate sabotage is rare, but unless controls are in place, less-than-scrupulous respondent behavior can distort the results of a mail study.
In mail panel parlance, recent recruits (“Eager Beavers”) tend to be more conscientious than seasoned panelists (“Lazy Dogs”). So in a tracking study, one needs to include, from year to year, identical proportions of fresh, enthusiastic panel members and their more experienced counterparts - especially the bored and disaffected. Through a form of sample balancing targeted to a base year - in this case the use of “panel tenure” (the number of years each respondent has been a panel member) as a weighting variable - the researcher can control for this type of response “decay.”
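A minimal sketch of how such tenure-based balancing might be implemented follows; the tenure bands, base-year proportions and sample wave are hypothetical placeholders, not ASD’s actual weighting scheme:

```python
# Hypothetical sketch: holding panel-tenure proportions constant across waves
# by weighting the current wave back to a base-year tenure mix.

from collections import Counter

# Base-year proportions of respondents by panel tenure (years on the panel).
BASE_YEAR_TENURE_MIX = {"0-1": 0.40, "2-4": 0.35, "5+": 0.25}

def tenure_band(years_on_panel):
    """Collapse raw tenure into the bands used for balancing."""
    if years_on_panel <= 1:
        return "0-1"
    if years_on_panel <= 4:
        return "2-4"
    return "5+"

def tenure_weights(respondents):
    """Return one weight per respondent so that the weighted tenure mix
    matches the base year, controlling for response 'decay'."""
    bands = [tenure_band(r["tenure"]) for r in respondents]
    observed = Counter(bands)
    n = len(respondents)
    return [BASE_YEAR_TENURE_MIX[band] / (observed[band] / n) for band in bands]

# Example: a current wave overrun with seasoned "Lazy Dogs" (5+ years on panel).
wave = [{"tenure": 6}, {"tenure": 7}, {"tenure": 1}, {"tenure": 3}]
print(tenure_weights(wave))  # veterans are weighted down, fresh recruits up
```

In practice this would be folded into the broader demographic balancing, but the principle is the same: the base-year tenure mix acts as the target, and each wave is weighted back to it.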
When people respond to mail surveys in a diligent and meticulous manner (and if controls are in place to guard against those who do not), the net quantity and quality of information provided for sports and fitness participation can equal or surpass that of any other data collection method. The key advantages of self-administered questionnaires of any type, and for any subject, are:
- The ability to collect more information. Self-administration enables the measurement of many sports - far more than would be possible in telephone or even face-to-face interviews. The ASD SUPERSTUDY of Sports Participation monitors 103 sports/activities in a single study.
- A more relaxed, unpressured environment allows the respondent to provide more thoughtful and considered answers; and for numerical recall, this advantage is monumental. When asked by a phone or personal interviewer how many years he or she has participated in a given sport/activity (or the number of times per year), the interviewee is on the spot - he or she must instantly blurt out a number. But in the solitude and serenity of a kitchen, family room or den, people can provide much better (if still imperfect) written answers.
- A private, anonymous setting, shielded from the influence/intimidation of a human interviewer, also evokes far more candid responses - especially when the material is sensitive, potentially embarrassing or threatening. In a live interview, a respondent might not confess true body weight or sedentary behavior; but a self-administered questionnaire could topple these inhibitions.
- The reasonable assurance that panelists are not “in it for the money.”
Online research
Online research is the new frontier of data collection methodology, a fourth milestone following in-person, telephone and mail panel interviewing. Modeled after the consumer mail panel, the phenomenon of online research (or some variation thereof) may be the final paradigm, destined to supersede all traditional forms of direct consumer data collection. Like all great technological revolutions, the new methodology offers untold possibilities - for better or worse.
With over 65 percent of all U.S. households having Internet access, and a much higher penetration rate among young people (who still represent the prime sporting goods market segments), online panel research is becoming an increasingly attractive tool for sports marketing studies. When projectability is not a requirement, its potential increases exponentially. But the self-selected composition of an online panel will always be a lingering question mark; and to the degree that online panelists are thought to be prize-motivated, the question becomes serious, if not insuperable.
Online panel research has inherited some, though not all, of the virtues of the offline method; but the virtual abolition of both questionnaire mailing and printing expense makes this new technique considerably less costly than its pencil-and-paper forerunner. However, online panel operators have not yet passed these theoretical savings on to their clients; in many cases, when completed interviews and data collection volume are equalized, Internet research - ostensibly due to front-end programming costs, but also because of huge incentives - proves more expensive than its older, hardcopy rival.
Like the consumer snail-mail method, online surveys may capture relatively large amounts of information while providing a quiet, anonymous setting. Unlike its glacial predecessor - which relied on the U.S. Postal Service - this emerging research technology can deliver huge study samples almost instantaneously.
Gargantuan online panels (5+ million members!) allow researchers access to low-incidence, hard-to-find populations such as treadmill buyers or big-game hunters of a particular species; but on the other hand, the Internet is still an exclusive club, which continues to bar certain groups. For example, the large offline population of “Bubbas” (the psychographic most fond of hunting) is denied representation in an online study, creating an incomplete and unpromising sports participation research venue for that industry.
Returning to the incomparable advantages of the Internet: graphics, video and audio clips are readily available - features invaluable to concept tests, market studies and product-focused research. Add to this other forms of visual enjoyment, ease of operation and further control over the respondent, and the dominance of online research is almost inevitable.
Most critically, the online method silences a major objection to hardcopy mail research: it can force an answer to each and every question in a survey - potentially the greatest single advantage of Internet research over the mail method. But as Don Dillman of Washington State University has suggested, this “benefit” has a double edge: it could irritate prize-mongering respondents who are racing to the “submit” button, accentuating slovenly response practices.
When panelists remain true to the mission, good research is possible. But the monetary lure reigns supreme, and to the extent that respondents deviate from the straight-and-narrow, economically-based recruitment and frequently-used incentives become major structural flaws of online panel methodology. Technology and economic incentives may be transforming a pool of once-diligent, civic-minded respondents into a horde of game-playing prize mongers who view survey content as a necessary evil - an annoying obstacle to a grand prize, to be dispatched as quickly as possible. In this new paradigm, says Dillman, “the implied social contract between researcher and respondent” has been fundamentally altered.
For any attitudinal, behavioral or public opinion measurement in which online and offline populations differ significantly, there can be no valid claim to national projectability - regardless of any “algorithms” or “weights” which purport to “adjust” the data. “Balancing” an online sample by using Census demographics of the “entire” U.S. is - quite euphemistically - a fandangle. This is because the offline population (the other 30 percent) is a vastly different breed; the attitudes and behaviors of an unwired population cannot be divined from its “wired” counterparts.
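The arithmetic makes the limitation plain. In the sketch below - with invented Census shares and a single age variable standing in for whatever demographic cells a vendor might actually use - the weights are computed entirely from the online respondents themselves, so they can rebalance age but cannot conjure up the attitudes or behaviors of households that were never online in the first place:

```python
# Hypothetical sketch of "balancing" an online sample to Census demographics
# with simple cell weights (not any vendor's actual procedure). Every weight
# is attached to an Internet user; the offline 30 percent appear in no cell,
# so no weighting scheme can speak for them.

CENSUS_SHARE = {           # invented population shares by age band
    "18-34": 0.30,
    "35-54": 0.38,
    "55+":   0.32,
}

def cell_weights(sample_cells):
    """sample_cells: one age-band label per online respondent.
    Returns per-respondent weights forcing the sample to Census shares."""
    n = len(sample_cells)
    counts = {}
    for cell in sample_cells:
        counts[cell] = counts.get(cell, 0) + 1
    return [CENSUS_SHARE[cell] / (counts[cell] / n) for cell in sample_cells]

# An online sample skewed young is weighted down toward Census shares,
# but the result is still a reweighted sample of wired households only.
online_sample = ["18-34", "18-34", "18-34", "35-54", "55+"]
print(cell_weights(online_sample))  # approximately [0.5, 0.5, 0.5, 1.9, 1.6]
```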
Defenders of online research disagree, however. They point to comparisons with other methods, where similarities have been found on a wide range of topics, such as the incidence of specific medical conditions, or the general consumption of mass-market commodities, e.g., toothpaste or cereal. But some of these claims are specious, because Internet access is not a defining characteristic of such general conditions or mass-market behaviors. It comes as no surprise (and proves nothing!) that razor blade purchases may not differ significantly between online and offline populations.
When Internet usage correlates strongly with lifestyle or some core element of attitude or behavior, the two populations diverge rather sharply. ASD experimental research has confirmed, for example, that males 18-34 with online access have much higher sports participation rates than offline members of the same demographic group, even after all other factors (especially income) have been equalized. At first glance, it seems counterintuitive that Internet users are more active than their offline counterparts. But the online population is also younger, more affluent and better-educated than those without access to the Internet; so it is not unexpected that the former are prone to be more active in sports/fitness participation.
In any event, the flimsiest pretext of valid “projectable” online data requires ongoing, cost-prohibitive, parallel tracking research; and even then, such a claim would be highly questionable.
As Internet penetration approaches 90-95 percent, the problem will recede, and when recruitment/incentive strategies are reconsidered it may eventually disappear. One day there will be valid, national online surveys; but that day is far off. Right now, if they must write about such things, journalists should use a much more scrupulous description of interactive research: “a nationally representative survey of a highly self-selected element of the U.S. online population.”