RESPONSE VARIATION IN E-MAIL SURVEYS: AN EXPLORATION

by Kim Bartel Sheehan

Assistant Professor, University of Oregon

and Sally J. McMillan

Assistant Professor, University of Tennessee

Direct correspondence to the first author at:

Kim Sheehan

1275 University of Oregon

Eugene, OR 97403

Phone: 541-346-2088

Fax: 541-346-3462

The authors wish to thank Mariea Hoy and Charles Frazer for their guidance
in conceptualizing and conducting the research projects reported in this study.


Abstract

As e-mail and other related technologies have diffused rapidly into a large and heterogeneous population, researchers have begun to explore the possibility of using e-mail as a tool for survey research. However, studies of the technique have focused primarily on comparing response rates between e-mail and postal mail surveys. Research into e-mail as a survey method needs to develop the kind of richness that is found in the literature on traditional postal mail survey methods. This article takes a first step in developing that literature with an in-depth comparison of three studies that used e-mail surveys for data collection. Details are provided on methods for sampling and survey techniques. Hypothesized relationships between issue salience and response rate, and between pre-survey notification and response time, were generally supported.

Researchers have increasingly investigated computer-mediated communication as a tool for conducting research and collecting consumer data as Internet usage around the world continues to grow. Today, as many as 100 million people worldwide have access to e-mail (DOC, 1998), 80 percent of all users log on to the Internet on a daily basis, and the demographic profile of Internet users in the United States is beginning to mirror that of the general population (Kehoe, Pitkow and Morton, 1997). Data collected via home page surveys on the World Wide Web (such as the Georgia Tech studies) are the most publicized efforts for collecting information via the Internet. However, researchers (e.g., Bachmann, Elfrink and Vazzana, 1996; Weible and Wallace, 1998; Schaefer and Dillman, 1998) have recently begun to analyze the use of electronic mail (e-mail) to disseminate surveys and collect data. Research into the viability of e-mail as a survey method has focused primarily on comparing the response rates and response speeds of e-mail surveys to those of postal mail. Overall, these studies suggest that e-mail has great potential for survey researchers.

Researchers have reported wide variation in response rate and speed of response for e-mail surveys (see Table 1). This is not surprising given the variety of sample populations and research topics in those studies. A researcher planning an e-mail survey today has minimal information on which to base estimates of response rate and therefore will have difficulty determining sample size. The limited published research on e-mail methodology also provides little information to assist researchers with other basic research design issues such as questionnaire development and respondent contacts. Researchers have a wealth of information on response effects for postal mail surveys, but the literature addressing such effects for e-mail surveys is minimal.

The purpose of this study is to provide researchers with information that can assist in the design and implementation of e-mail survey research. The literature on response effects in postal mail surveys provides a framework for discussion of key design issues. Sampling and survey techniques for three studies that used e-mail surveys are described in detail. Finally, we examine the impact of topic salience and pre-notification, two key predictors of response in postal mail surveys, on the response rate and response speed of e-mail surveys.

Review of the Literature

E-Mail Surveys

E-mail has been characterized as a “promising means for conducting future surveys” (Schaefer and Dillman, 1998), and numerous researchers have recognized the benefits that e-mail provides over postal mail. These benefits include cost savings from the elimination or reduction of paper and mailing costs (Parker, 1992) and rapid speed of response (Bachmann, Elfrink and Vazzana, 1996; Mehta and Sivadas, 1995). In fact, a consistent finding of studies that compare the response speeds of surveys delivered via e-mail and postal mail is that e-mail responses are returned much more quickly (Bachmann, Elfrink and Vazzana, 1996; Kiesler and Sproull, 1986; Schaefer and Dillman, 1998; Weible and Wallace, 1998). In these studies, e-mail response speeds ranged from five to ten days, compared with ten to fifteen days for postal mail surveys (see Table 1).

Response rates to e-mail surveys, however, do not consistently show benefits over postal mail, and in some cases fall below what may be seen as acceptable levels of response. Kiesler and Sproull (1986) and Parker (1992) reported e-mail response rates of over 65 percent, with both studies showing e-mail response rates significantly higher than those of the comparable postal mail method. Schaefer and Dillman (1998) and Mehta and Sivadas (1995) found no significant differences in response rates between the two modes. Several other studies (e.g., Schuldt and Totten, 1994; Tse et al., 1995; Weible and Wallace, 1998) found that e-mail response rates were lower than those of postal mail. Response rates for e-mail surveys (see Table 1) vary from a low of 6 percent (Tse et al., 1995) to a high of 75 percent (Kiesler and Sproull, 1986).

These differences in response rates are not surprising given what is known about response effects in postal mail surveys. The studies shown in Table 1 have homogeneous samples, small sample sizes, and diverse survey topics. The sample populations are either employees of a single company (used in two studies) or university professors and deans (used in five studies), with only one study consisting of a sample of Internet users (Mehta and Sivadas, 1995). Survey topics ranged from corporate and Internet communication to business ethics and TQM. Given the lack of consistency in numerous variables in these studies, the range of response rates and speeds is understandable.

What is missing from the current body of research is a comparison of e-mail survey responses that goes beyond a simple comparison to the response rates of postal mail surveys. The body of knowledge about postal mail survey methodology suggests a number of issues that must be considered during the design and implementation of a postal survey and that have the potential to affect response rate and speed. These effects may also be relevant for e-mail surveys.

Postal Mail Surveys

A review of the relevant literature regarding postal mail methodology suggests that numerous design and implementation issues may affect both response rate and speed in this mode. The literature is rich in meta-analyses that provide indications of such effects, and many of these issues will also be relevant for e-mail studies. The literature has reported the following effects in postal mail surveys:

Personalization of cover letter. Personalizing cover letters addressed to specific individuals has been shown to increase response rates in some postal mail surveys (Dillman, 1978; 1991), while other research (Duncan, 1979) found no effect of cover letter personalization on response rate. In e-mail surveys, the issue of personalization is complex. A certain degree of personalization is automatic in e-mail because the individual’s e-mail address appears in a header that is often visible throughout the reading of a message (Schaefer and Dillman, 1998). Beyond this, however, e-mail can be personalized with a greeting or some other type of relevant information that relates specifically to the recipient.

Postage. The consensus among researchers appears to be that including a stamped envelope (versus a business reply envelope) produces higher response rates in postal mail surveys (Armstrong and Lusk, 1987; Fox, Crask and Kim, 1988; Yammarino, Skinner and Childers, 1991). This effect is not directly relevant to e-mail because no postage is needed. However, individuals who pay for e-mail usage, either by the message or by the amount of time spent online, may feel that the researcher should provide some small reimbursement for that cost. These costs may also limit the response potential, as e-mail recipients may simply delete the message in order to avoid such costs. Finding a way to address these issues may challenge researchers.

Incentives. Small cash incentives sent with the mailed survey can increase response rate (Fox, Crask and Kim, 1988; Goyder, 1982; Yu and Cooper, 1983). However, diminishing returns on the size of the incentive are evident, indicating that increasing the size of the incentive does not necessarily increase the response rate. It is not currently possible to provide monetary incentives through e-mail, although it is possible to provide other types of incentives (such as the offer of sharing research results). Researchers should consider ways to develop possible incentives that might be “attached” to e-mail. For example, discount coupons from an online vendor might be promised to individuals who complete the survey.

Sponsorship. Meta-analyses (Fox, Crask and Kim, 1988; Goyder, 1982; Heberlein and Baumgartner, 1978) suggest that sponsorship of a study by a university can result in higher response rates for postal mail surveys than can sponsorship by a corporation. However, Yammarino, Skinner and Childers (1991) did not find support for an effect of university sponsorship on response rate. Sponsorship of e-mail surveys cannot be made as explicit as with postal mail surveys (i.e., the use of a sponsoring organization’s letterhead is not available), but sponsorship can be conveyed implicitly through statements in the survey instrument and through the sender’s e-mail address (i.e., an “.edu” suffix on an address would indicate association with an educational institution).

Questionnaire design. Design issues, such as the length of the questionnaire, can affect response. The longer the questionnaire, the less likely people are to respond (Heberlein and Baumgartner, 1978; Steele, Schwendig and Kilpatrick, 1992; Yammarino, Skinner and Childers, 1991). This effect is highly relevant to e-mail surveys, where survey length may be measured not only in the number of printed pages but also in terms of screen length (the number of screens containing the survey). Because an average printed page can take up two or three computer screens, respondents may be faced with seemingly lengthy surveys of a dozen screens or more.

Anonymity. While some researchers have found that anonymity increases response rates to postal mail surveys (Yammarino, Skinner and Childers, 1991), other studies have indicated that this is not necessarily true (Duncan, 1979; Kanuk and Berenson, 1975). This is a key issue for e-mail surveys because it is difficult to achieve true anonymity in that mode. To do so requires respondents to access anonymous remailers to respond, and this may be beyond the technical competence of some Internet users. However, researchers can assure e-mail survey respondents of confidentiality by informing them that their e-mail addresses will not be recorded with their survey responses and that the survey data will be considered only in the aggregate.

Issue salience. In postal mail surveys, the salience of an issue to the sampled population has been found to have a strong positive correlation with response rate. Salience has been defined as the degree to which a topic deals with an important issue that is also current or timely (Martin, 1995). Heberlein and Baumgartner (1978) found that issue salience had a stronger impact on response rate than did any other issue or research design decision, including advance notice, follow-up contacts, or monetary incentives. Roberson and Sundstrom (1990) and Martin (1995) also found that salience was a key predictor of response rate for postal mail surveys. Understanding the population to be sampled is an important first step in determining issue salience. Researchers who use e-mail surveys may be able to begin to predict response rate on the basis of how salient an issue is to the individuals who will be solicited to participate in the e-mail survey.

Respondent contacts. Fox, Crask and Kim (1988) found that pre-notification by letter led to increases in response rates for postal mail surveys. However, Heberlein and Baumgartner (1978) found little or no effect associated with pre-notification. Several studies of postal mail surveys (Kanuk and Berenson, 1975; Murphy, Daley and Dalenberg, 1991; Taylor and Lynn, 1998) found that response speed was faster for pre-notified respondents than for those who were not pre-notified. Yammarino, Skinner and Childers (1991) suggested that follow-up mailings and repeated contacts seemed to have a greater effect on response rates among those who receive a survey because of an institutional affiliation than among those who receive a general consumer survey. Little consensus was found on the value of multiple pre- and post-survey contacts in postal mail surveys. Researchers using postal mail for delivery of messages must weigh the potential benefit to response rate against the cost of multiple mailings.

Because speed of response has been seen as a key benefit to e-mail surveys, enhancing response speed is important for researchers who wish to maximize the potential of the mode. And because most researchers can send multiple e-mail messages for little or no cost, the impact of multiple contacts on response becomes a highly relevant subject for e-mail surveys.

Hypotheses

This study represents a first step in examining e-mail surveys on the basis of methodological factors that grow from the rich literature on postal mail surveys. Two hypotheses regarding response effects have been developed based on the final two factors reviewed above.

H1: Rate of response to e-mail surveys will increase as issue salience increases.

H2: Speed of response to e-mail surveys will be faster from individuals who received pre-notification of the survey than from those who did not.
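
To make the hypothesized comparisons concrete, the following is a minimal sketch, in Python, of how the two hypotheses could be tested: a chi-square test of independence for response rate across salience levels (H1) and a two-sample t-test on days to respond (H2). All counts and response times in the sketch are hypothetical placeholders, not data from the three studies.

```python
# Minimal sketch of tests for H1 and H2; all numbers are hypothetical
# placeholders, not data from Studies 1-3.
from scipy import stats

# H1: response rate should rise with issue salience.
# Rows: Study 1 (high), Study 2 (moderate), Study 3 (low salience).
# Columns: [responded, did not respond].
observed = [
    [240, 160],
    [190, 210],
    [110, 290],
]
chi2, p_rate, dof, _ = stats.chi2_contingency(observed)
print(f"H1: chi2 = {chi2:.2f}, df = {dof}, p = {p_rate:.4f}")

# H2: pre-notified respondents should reply faster.
# Values are days elapsed between survey delivery and response.
days_prenotified = [1, 2, 2, 3, 4, 5]       # Studies 2 and 3
days_not_prenotified = [3, 4, 5, 6, 7, 9]   # Study 1
t, p_speed = stats.ttest_ind(days_prenotified, days_not_prenotified)
print(f"H2: t = {t:.2f}, p = {p_speed:.4f}")
```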

Methodology

Three separate studies that used e-mail for collection of data were examined for this article. Study 1 was conducted in early 1997; individuals invited to respond to the survey were all developers of health-related Web sites. Study 2 was conducted in the summer of 1996; respondents were faculty and students at a major southeastern research university. Data for Study 3 were collected from November 1997 through January 1998 from Internet users with a personal e-mail account in the United States. Despite the different sampling frames, the studies were similar in several ways (see Table 2). The survey instruments were comparable in terms of survey length and the types of scales used. All studies mentioned a university affiliation and used a follow-up reminder message. Research results were offered in all studies as an incentive, and all studies promised confidentiality of responses. However, the studies differ in two key ways.

First, the studies differ in issue salience. The topic of Study 1 was highly salient to the subject population. Study 1 asked creators of health-related Web sites to provide information about the site they had created (e.g., when it began, the purpose of the site, etc.) as well as more general information about the individual’s perception of the Web. Thus, the individuals had a direct personal interest in most of the questions. The topic of Study 2 was moderately salient to the population. The topic was introduced to respondents as a study of Internet usage habits, and its salience for this group derived from the fact that the individual collecting the information was a student at the university with which the respondents were affiliated. The topic of Study 3 was not salient to the subject population. It was presented as a “doctoral student survey” of Internet usage habits.

Second, the studies differ in terms of pre-notification. Study 1 did not pre-notify respondents; Studies 2 and 3 sent a pre-notification message. This pre-notification e-mail explained the purpose of the research and notified subjects that they would receive a survey within a designated time period. Subjects were told that they could decline to participate by replying to this first message and asking that the survey not be sent. This technique is similar to direct marketing practices used by organizations such as book and record clubs, which default to sending an item unless the consumer declines. Fewer than two percent of the university faculty, staff and students in Study 2 declined to participate. Slightly more than 11 percent of the individuals with personal e-mail accounts in Study 3 declined to participate.
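
As a minimal sketch of this opt-out step (in Python, with illustrative addresses only, not data from the studies), addresses collected from replies to the pre-notification are simply removed from the sample before the questionnaire is sent:

```python
# Minimal sketch of the pre-notification opt-out step; addresses are
# illustrative, not from the actual studies.
sample = {"a@example.edu", "b@example.com", "c@example.net"}

# Addresses that replied to the pre-notification asking not to be surveyed.
declined = {"b@example.com"}

# Only non-declining subjects receive the questionnaire (the default is
# inclusion, mirroring the book-and-record-club practice described above).
recipients = sample - declined
decline_rate = len(declined) / len(sample)

print(f"decline rate: {decline_rate:.1%}")
for address in sorted(recipients):
    print(f"send survey to {address}")
```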

Because the procedures for conducting e-mail surveys are relatively new, additional methodological information is provided to guide researchers who wish to use this technique. In particular, we provide information on sampling and survey techniques used for these three studies.

Sampling

Study 1 (Web-Site Developers). The Yahoo directory of health-related Web sites was used as the universe for Study 1. Yahoo does not sequentially number sites within its categories. Therefore, a strategy was needed for identifying the number of sites in the universe, randomly selecting sites, and determining how to apply random numbers to specific sites. The size of the universe was determined by adding the totals that Yahoo reported for each health-related category on the “opening page” (or index) of the Yahoo category listings; at the time of the study, these totals summed to a universe of 14,794 sites.
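
The sketch below illustrates this strategy in Python: category totals from the index page are summed to fix the universe size, random numbers are drawn from 1 to that total, and cumulative boundaries map each number to a category and a position within it. The category names and counts are hypothetical placeholders; only the universe total of 14,794 comes from the study.

```python
# Minimal sketch of the Study 1 sampling strategy; category names and
# counts are hypothetical placeholders (their sum matches the reported
# universe of 14,794 sites).
import random
from bisect import bisect_right
from itertools import accumulate

category_counts = {
    "Diseases": 6200,
    "Fitness": 3100,
    "Nutrition": 2800,
    "Clinics": 2694,
}
universe_size = sum(category_counts.values())  # 14,794

# Cumulative boundaries let a random site number be mapped to a
# specific category and a position within that category's listing.
names = list(category_counts)
boundaries = list(accumulate(category_counts.values()))

def locate(n):
    """Map site number n (1..universe_size) to (category, position)."""
    i = bisect_right(boundaries, n - 1)
    prior = boundaries[i - 1] if i > 0 else 0
    return names[i], n - prior

random.seed(42)
for n in random.sample(range(1, universe_size + 1), k=5):
    category, position = locate(n)
    print(f"site #{n} -> {category}, entry {position}")
```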