Editor’s note: Philip Derham is director of Australia-based Derham Marketing Research Pty.
The value of results from self-completion surveys depends in part on the proportion of the sample that responds, as larger samples generally provide more statistically reliable data. Hence, market researchers properly seek to maximize response rates. But findings from some recent surveys suggest that respondents' relevance to the survey questions matters more than getting a larger response from the sample. This article details findings from five recent Australian financial industry surveys which, taken together, suggest respondent relevance may be more important than the overall response rate.
If these findings are replicated in other self-completion surveys in other sectors, researchers can:
- improve their contact approaches, increasing overall response rates by lifting response within specific segments; and
- ensure that more of the relevant people in the sample, rather than just any people, respond.
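As background to why sample size matters statistically, the sketch below shows a standard margin-of-error calculation for a reported proportion. The sample sizes are hypothetical and are not drawn from the surveys discussed here.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95 percent margin of error for a proportion p
    estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical respondent counts; p = 0.5 gives the widest margin.
for n in (250, 1000, 4000):
    print(f"n={n}: +/- {margin_of_error(0.5, n):.1%}")
```

Quadrupling the respondent count roughly halves the margin of error, which is why higher response rates are usually prized. The surveys below, however, suggest that who responds can matter as much as how many.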
Web-based more common
Currently, about two-thirds of the Australian population has home or work e-mail access and, as a consequence, Web-based self-completion surveys have become more common. Response rates to these Web-based self-completion surveys have often been reported as if they were validations of the researchers' findings, and this initially led us to review findings from five similar surveys to see how we could further improve response rates. That review suggested the overall response rate was not the issue.
The five surveys reviewed were all online Web surveys, hosted on our Web site and accessed by clicking on a hyperlink in a personally addressed e-mail. The surveys were undertaken between February 2005 and August 2006. Three surveys were of all the e-mail-accessible customers of one financial institution, the fourth was of all the e-mail-accessible customers of a second financial institution, and the fifth was an Australia-wide sample of people listed in a national e-mail database. People in this database had been filtered to be adults with transactional accounts, loans or cash investments at any financial institution.
The detail known about these surveys' samples was limited to prospective respondents' gender, age and the fact that they were customers of particular financial institutions. The samples were thus broad population samples, not interest- or category-specific ones; they had little commonality, and their interest in a survey could not be assumed merely because they had supplied an e-mail address to a financial institution. In this, the samples were unlike e-mail panels, whose members have often supplied an extensive range of information about themselves when joining. That information allows like-minded or like-interested groups to be created and contacted, so relevance to a survey can be determined more directly before the survey is undertaken and mostly relevant respondents can be contacted.
Prospective participants in each of the two financial institutions' surveys were invited to complete the survey by a personally addressed e-mail from the financial institution's chief executive officer. The financial institutions believed their customers would feel more involved if they were invited to participate by the well-known chief executive officers.
Prospective respondents in the national sample were invited to participate by the research company director, whose identity was probably unknown to most, if not all, of the prospective respondents.
To encourage participation, prospective respondents were offered the opportunity to add their e-mail address on the last screen of the survey and enter a competition. The prizes varied by survey: $A1,000, one of two iPods or one of 50 cinema double passes; one of 20 cinema double passes worth $A29 each; one of 10 $A50 petrol vouchers; or the winner's choice of an iPod, a digital camera or a dual-screen DVD player, each worth about $A500.
Content influenced who, not how many, responded
Each survey was online for two weeks and prospective participants in four of the five surveys were e-mailed once only. The prospective participants in the fifth survey were e-mailed twice, a week apart. As Table 1 shows, whether the sample was that of the national benchmark study or of customers of particular financial institutions, the one-time e-mail response rates were similar - at a one-in-five level.
With the fifth survey, prospective participants were e-mailed twice, and the second e-mail generated a further 6 percent response, giving a final response rate of 24 percent and confirming previous findings that increased contact increases response rates. (Commercial considerations can, of course, limit preferred research practice.)
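A toy calculation makes the arithmetic concrete; the counts below are hypothetical, chosen only to match the roughly one-in-five first wave and the 24 percent final figure reported here.

```python
invited = 5000           # hypothetical sample size
wave1_responses = 900    # 18 percent after the single e-mail (hypothetical)
wave2_responses = 300    # a further 6 percent after the reminder (hypothetical)

rate = (wave1_responses + wave2_responses) / invited
print(f"Final response rate: {rate:.0%}")  # 24%
```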
If our analysis had stopped here, the conclusion would have been to advise clients to increase their budgets to allow a second e-mail, and not to worry whether the invitation was signed by a chief executive officer or an unknown researcher, as the response would be much the same. But analysis of results from the first financial institution's three surveys showed different proportions of customer returns in each sample, and these proportions appeared related to the surveys' different content.
The first survey was a general satisfaction, product awareness and product use survey, designed to provide answers across a wide range of areas. The second concentrated on financial planning use and intentions, and the third sought personal loan potential by identifying recent furniture and electrical appliance purchases. Each survey invitation had an e-mail subject line judged appropriate to its content. These were:
1. [Financial institution name] survey. Your opinions are important. Please respond today!
2. [Financial institution name] survey on financial planning.
3. How do you buy furniture? [A financial institution name] survey.
As Table 2 shows, the proportions of segment respondents within each sample differed, and these proportions appear to depend on the survey content. Yet, noticeably, each survey generated a similar response rate and a similar overall sample size from the same universe. This supports the contention that prospective respondents are not would-be "refuseniks" but people who will respond if and when their interest is piqued by a subject relevant to them. (One way to check such a pattern is a chi-square test of homogeneity, sketched below.)
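This sketch shows how such a check could be run. The respondent counts are hypothetical placeholders, not the study's figures (which appear in Table 2).

```python
from scipy.stats import chi2_contingency

# Hypothetical respondent counts (rows: surveys 1-3;
# columns: three respondent segments). Table 2 holds the real figures.
observed = [
    [120, 180, 100],  # survey 1: general satisfaction
    [ 90, 240,  70],  # survey 2: financial planning
    [150, 110, 140],  # survey 3: furniture purchases
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.1f}, dof={dof}, p={p_value:.4f}")
# A small p-value indicates the segment mix differs across the surveys,
# consistent with content attracting different respondents.
```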
The overall reporting conclusion that could be drawn was that the larger the proportion of relevant respondents in a sample, the more effective the analysis and reporting could be for that specific issue. But there was some concern that a thesis of relevance might need to be narrowly tied to particular behaviors (being involved with financial planning or having bought furniture, for example) and so might not be of more general utility.
Was relevance more than actions?
This indicative series of responses led to a reanalysis of a second financial institution's results, where the survey was also general in scope but the sample comprised distinct segments. That financial institution's sample had been separated into three segments before the survey was undertaken. The three segments were broadly customers who:
1. had large loans or large cash investments (the profitable customers);
2. had minimal involvement (the value-neutral customers); and
3. made frequent cash transactions (the loss creators).
As Table 3 shows, the proportions of responses from the segments noticeably differed. Almost twice as many of those in the "most value" segment (those with large loans or cash investments) responded as did those in the loss-creators segment (who used the financial institution only to get cash). The segments had been constructed by the financial institution, and their only commonality was the type of interaction members had with the institution.
This suggests the externally assigned segments reflect how relevant the financial institution is to the people in each segment, and that this relevance is in turn reflected in survey participation. Thus, the market researcher's task is to make self-completion surveys relevant to prospective respondents, rather than to concentrate on overall response.
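Whether such a gap between two segments is larger than chance could be checked with a standard two-proportion z-test, as in this sketch. The counts are hypothetical; Table 3 holds the actual proportions.

```python
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two response proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via the error function; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 30% of 1,000 "most value" customers responded
# versus 16% of 1,000 loss creators.
z, p = two_proportion_ztest(300, 1000, 160, 1000)
print(f"z={z:.2f}, p={p:.4f}")
```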
Next steps
The findings from these five surveys suggest that properly designed and administered self-completion Web surveys among general populations will generate similar levels of response. What distinguishes the reporting capacity of such surveys is how relevant the issues being researched are to the achieved sample.
These findings are for Australian self-completion Web surveys in the financial sector. Others may wish to review their own studies to see how widespread this pattern is.
There is a further need to determine which elements of the subject line, alone or in combination with the invitation letter, work most effectively to encourage response. But the overall findings suggest the market researcher's challenge is to ensure that surveys among general population samples encourage the maximum number of relevant prospective respondents, rather than just people in general, to participate.