A potential bonanza that’s typically a waste
Editor's note: Jonathan E. Brill, Ph.D., is principal, Next Generation Research, Solon, Ohio.
Applied marketing or social survey questionnaires typically feature several open-ended questions. The open-ended question represents but one of two fundamental forms that survey items may take. It requires respondents to compose their answers individually without suggestion from the researcher or interviewer. The other fundamental form is the closed-ended question, where the respondent is required to select his or her answer from a specified and limited set of response choices. In any survey, an open-ended question can be designed for any one of four purposes:
1. Creating simplicity and economy in the interview design.
2. Providing improved validity and reliability in the data relative to that which could be collected through use of an alternative closed-ended approach. Open-ended questions of this type typically are used to measure awareness, knowledge or salience.
3. Stimulating respondent interest, thereby increasing involvement and propensity to cooperate in the interview among those in the sample.
4. Exploring and/or describing an issue or phenomenon about which little is known. Answers that are difficult, if not impossible, to anticipate or otherwise extraordinary in content are solicited. Typically, the hope is to collect data that not only is largely free of a priori assumptions, but also so rich in detail and semiotic or associative meaning(s) that it has great potential to result in new or expanded insights which otherwise could not have been achieved through survey research.
Today, many applied research surveys make appropriate use of open-ended questions in achieving one or both of the first two purposes. Unfortunately, the third purpose, to generate interest in the survey topic, is often overlooked. But most seriously, the last one - collecting detailed and extraordinary information that will lead to new or expanded insight - is rarely achieved in practice. In fact, nearly all exploratory open-ended survey questions accomplish little other than to waste time and research dollars. Given this, the remainder of this article will focus solely on exploratory open-ended questions, discussing the underlying causes that lead to poor results in their use and offering guidelines for collecting more useful information from them.
Problems, problems and more problems
There are many reasons why data gathered by exploratory open-ended questions frequently is disappointing. Still, these reasons may be broadly grouped into two general categories: (1) overuse by researchers; and (2) poor or improper technique in interviewing administration.
Overuse that's often abuse
Numerous experiences in which client organizations have shared questionnaires and the corresponding results of surveys they have sponsored have led me to conclude that exploratory open-ended questions are greatly overused. This overuse seems likely to be due to the combined operation of at least three factors: (1) failure to appreciate the enormous cost of using open-ended questions relative to using closed-ended questions; (2) poor focus on the research issues involved in the study; and (3) laziness in questionnaire development and data analysis planning.
Many managers and research analysts overuse open-ended questions because of a failure to recognize the truly astronomical cost of the exploratory open-ended question relative to that of a closed-ended questionnaire item. The open-ended question's greater expense is a consequence of the higher interviewing costs, greater number of required analyst hours, and higher data entry costs it entails. With open-ended questions, several minutes are typically needed to administer the question and record the answer the respondent provides; an analyst must develop codes for the data after interviewing has been completed (or largely completed); and these codes must be assigned to respondent answers before data entry may be completed. In contrast, with closed-ended questions, seconds - not minutes - are typically required to administer and record the respondent's answer; there is no need to develop codes after or during data collection; and the data entry task can be effected immediately after or concurrently with data collection. And, in cases where computer-assisted interviewing methods are used, electronic entry of closed-ended data items can be completely automated, rendering the data entry function virtually instantaneous and cost-free.
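The relative cost argument above can be made concrete with a back-of-the-envelope calculation. All figures below (administration times, hourly rates, sample size) are hypothetical assumptions chosen only to illustrate the arithmetic; they are not costs reported in this article.

```python
# Illustrative cost comparison; every figure is an assumption.
def cost_per_item(admin_minutes, interviewer_rate_per_hour,
                  coding_hours, analyst_rate_per_hour, n_interviews):
    """Rough study-wide cost of one questionnaire item:
    interviewing time across all interviews plus post-field coding labor."""
    interviewing = n_interviews * (admin_minutes / 60) * interviewer_rate_per_hour
    coding = coding_hours * analyst_rate_per_hour
    return interviewing + coding

# Open-ended: assume ~3 minutes to administer and record,
# plus 20 analyst hours to develop and assign codes after fielding.
open_cost = cost_per_item(3, 25, 20, 60, n_interviews=400)

# Closed-ended: assume ~15 seconds to administer, no code development.
closed_cost = cost_per_item(0.25, 25, 0, 60, n_interviews=400)

print(f"open-ended:   ${open_cost:,.0f}")
print(f"closed-ended: ${closed_cost:,.0f}")
```

Under these assumed figures the open-ended item costs dozens of times more than the closed-ended one, before counting higher data entry costs; the exact ratio is immaterial, but the order-of-magnitude gap is the point.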
Even greater than the costs of including open-ended questions in interviewer-administered surveys are the costs of including them in surveys conducted by mail or other self-administered methods. While it is true that interviewer costs for administration of open-ended (and closed-ended) questions are absent in mail surveys, several studies have shown that the inclusion of open-ended questions on mail survey questionnaires has a tendency to reduce survey participation rates, often dramatically. From a cost accounting perspective, because the expenditures associated with mail surveys are largely independent of the participation rate, this has the effect of inflating the survey's cost per completed interview. It also reduces the statistical reliability of all the data collected by the study, rendering management decisions based on the data riskier - which is yet another, albeit hidden, cost. Furthermore, as with all self-administered survey methods, item nonresponse rates tend to be considerably higher for open-ended questions than for closed-ended questions, and responses frequently are short, lack specificity, and/or seem incomplete or otherwise unclear. This makes the data less valid, less reliable and less useful than it might have been, which, in turn, calls into question the advisability of consuming valuable questionnaire space with the open-ended question in the first place. Considering the range and severity of problems presented by the inclusion of exploratory open-ended questions in self-administered questionnaires, one must wonder if their use in mail survey research is ever justified.
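The cost-accounting point about mail surveys can be shown in a few lines. The mailing budget, sample size, and response rates below are hypothetical assumptions for illustration only; the mechanism, not the numbers, is what matters.

```python
# Hypothetical illustration: mail-out costs are roughly fixed,
# so a lower response rate directly inflates cost per completed interview.
fixed_mailing_cost = 5000.0   # printing, postage, incentives (assumed)
mailed = 2000                 # questionnaires sent (assumed)

# Assumed response rates without vs. with open-ended questions included.
for response_rate in (0.30, 0.20):
    completes = int(mailed * response_rate)
    print(f"rate {response_rate:.0%}: {completes} completes, "
          f"${fixed_mailing_cost / completes:.2f} per complete")
```

With the same fixed outlay, dropping from a 30 percent to a 20 percent return raises the cost per complete by half, and the smaller sample also widens every confidence interval in the study.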
Poor focus on the survey objectives also frequently results in the inclusion and overuse of exploratory open-ended questions. Typically, such questions take the form of asking respondents to explain why they feel or behave as they do. There are several potential problems with including this type of question. For one, such questions assume that the respondent is aware of the underlying reasons for his or her feelings or behaviors, an assumption of dubious merit. Second, even granting that the respondent believes he or she is aware of these underlying reasons, such questions further assume that the respondent is sufficiently articulate to clearly communicate and fully explain these reasons without preparation and in the short time window imposed. But, perhaps even more importantly, the open-ended information is rarely sought because it is useful for testing some hypothesis relevant to managerial action (i.e., consistent with the focus of the research), but rather because it is "nice to know" or, even worse, "nice to confirm what we think we already know." In the case of the latter motivation, and in view of the enormous cost of open-ended questions relative to closed-ended questions, this confirmation almost certainly could have been acquired more economically through the use of a battery of items measuring the perceived importance of each known reason. In addition, and not at all incidentally, the relative importance of each reason could have been estimated more precisely and with greater validity by use of this kind of battery of questions.
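The importance-battery alternative described above lends itself to simple, precise analysis. Here is a minimal sketch, assuming hypothetical 5-point importance ratings from five respondents for three already-known reasons (the reasons and ratings are invented for illustration):

```python
# Hypothetical closed-ended battery: each known reason rated on a
# 5-point importance scale by each respondent.
ratings = {
    "taste":     [5, 4, 5, 3, 4],
    "price":     [3, 2, 4, 3, 3],
    "nutrition": [4, 5, 5, 4, 5],
}

# Mean rating per reason gives a direct, comparable importance estimate -
# no post-field code development or subjective coding judgments required.
means = {reason: sum(r) / len(r) for reason, r in ratings.items()}

for reason, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{reason:10s} mean importance = {m:.1f}")
```

Because every respondent rates every reason on the same scale, the relative importance estimates are directly comparable and their precision can be quantified, which is exactly what open-ended "why" answers cannot offer.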
Certainly, the failures to realize these economies as well as higher levels of measurement validity and precision may be the consequence of poor knowledge and/or skills on the part of the researcher; but all too often they are due to researcher laziness with respect to the questionnaire development and analytical planning processes. Every survey question should be included for an analytical purpose that is identified in advance. Planning of this type requires substantial effort because it requires the researcher to specify all relevant hypotheses and to ensure that the survey items will allow each and every hypothesis to be tested. Yet, too often, it appears as if open-ended questions are included to avoid this important step. Not only is this bad science, but it is also bad business. It is bad science because open-ended questions are plagued by the need to code them based on subjective judgments regarding meaning. (Computerized code development programs greatly lessen, but do not eliminate, this problem.) And, it is bad business because the use of open-ended data produces greater uncertainty. As discussed, an alternative battery of closed-ended questions generally provides greater statistical reliability, improved measurement validity, and increased control in testing hypotheses, all of which reduce the risks inherent in making decisions based on survey findings. For these reasons, much of the overuse of the exploratory open-ended question is truly abuse.
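To see why coding open-ended answers remains partly subjective even with machine assistance, consider a minimal sketch of keyword-based code assignment. The codebook, keywords, and answers below are hypothetical; real code-development programs are far more sophisticated, but the residual problem is the same: answers that match no rule still require a human judgment about meaning.

```python
# Hypothetical analyst-built codebook mapping codes to trigger keywords.
CODEBOOK = {
    "tastes good":    ["taste", "flavor", "delicious"],
    "low price":      ["cheap", "inexpensive", "bargain"],
    "appealing odor": ["smell", "odor", "aroma"],
}

def assign_codes(answer):
    """Assign every code whose keywords appear in the answer;
    flag unmatched answers for analyst review."""
    answer = answer.lower()
    codes = [code for code, keywords in CODEBOOK.items()
             if any(k in answer for k in keywords)]
    return codes or ["UNCODED - needs analyst review"]

print(assign_codes("My dog loves the flavor and the aroma"))
print(assign_codes("Because of the price"))  # ambiguous: high or low price?
```

Note that "Because of the price" falls through to review: the automated rules cannot tell whether the respondent means the price is low or, as in the pet food example, that a higher price signals superior quality. That judgment call is precisely the subjectivity the text describes.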
Inadequate interviewer training and supervision
Poor technique or improper administration on the part of interviewers and ignorance or neglect by their supervisors are common reasons that results from open-ended questions prove disappointing.
The (fictitious) example of an irreproachably administered open-ended question, which appears in Exhibit 1, provides a basis for explication of this charge.
Most research users and analysts have seen findings typically gathered by questions of this genre: results tend to be summarized with codes labeled simply as "tastes good," "low price," and "appealing odor."
The problem is that one must be concerned that such codes lack validity and statistical reliability because so few interviewers demonstrate the care and tenacity evidenced in the example. Interviewers typically are not trained adequately and, as a consequence, they fail to probe and clarify the responses that interviewees provide (i.e., solicit specific information and/or ascertain intended meaning of given answers). Sadly, some supervisors are ignorant of proper administration technique, while others (who generally are evaluated on production rates rather than the quality of what is produced) neglect to insist upon these procedures. Consequently, when - as in the present example - a respondent answers "Because of the price," most interviewers would not ask for clarification, failing to recognize the ambiguity in the response and/or presuming the respondent had meant that the price is a low one.
However, as the present example demonstrates, the answer given regarding price is ambiguous and, had the interviewer not asked the appropriate clarification follow-up, the counterintuitive finding would not have been possible. The result shown - that the owner is motivated to spend more for a food he or she believes is superior in nutrition and appealing to his or her dog - represents the kind of insight that may form the basis of an innovative and effective marketing communications program, the kind of bonanza that is hoped for from the use of exploratory open-ended questions. But without quality interviewing, such insights rarely are realized. Clearly, without firm knowledge that interviewers are well-trained and tenacious in administering open-ended questions, one must consider how valid, reliable, and helpful the data can be.
Guidelines for using exploratory open-ended questions
The message here is not to condemn the use of exploratory open-ended questions, but rather to encourage their proper use. Their enormously high cost and the research problems inherent in their administration and analysis argue that they should be used sparingly and with considerable thought. In this spirit, issues that should be raised before including exploratory open-ended questions in a survey and some advice follow.
1. How will the data be analyzed and used? If the answer to this is not clear, eliminate the exploratory open-ended question.
2. Can the information sought be obtained by using one or more closed-ended questions? If yes, make the substitution.
3. Would data from an alternative set of approximately 10 closed-ended questions be of greater or equal value to management than the data from the open-ended question being considered? If the answer is yes, lose the open-ended question and choose the closed-ended questions instead. Even if it is a close call, the closed-ended data are preferable in that fewer threats to data validity and reliability are involved than with the open-ended item.
4. Are your interviewers sufficiently well-trained and supervised to deliver high quality, valid open-ended data? If there are doubts about this, it may be advisable to toss the question. If the question is important, do two things: (1) Look for interviewers who have completed a formal training program such as that developed by and offered through the Marketing Research Association; and (2) Verify that the interviewers are well-trained by monitoring interviewers during pretest interviews of your survey questionnaire.
By following these simple guidelines, you will be rewarded with better survey designs, data offering enhanced validity and reliability, highly focused analyses, and more actionable research findings and results.