Chapter 10.4

Large scale studies and quantitative methods

Yuen-Kuang Cliff Liao

Yungwei Hao

National Taiwan Normal University

Taipei, Taiwan

Abstract: This chapter focuses on the types of research questions that might be answered by using meta-analysis in the area of information technology (IT) in education, and on the strengths and weaknesses of such meta-analyses. The chapter is divided into three parts: (a) a description of the meta-analysis research method; (b) an examination of a range of meta-analysis studies of the impact of IT in education, identifying the kinds of evidence that can be obtained through meta-analysis; and (c) a discussion of the strengths and weaknesses of meta-analysis in IT in education, and of the most reliable methods for obtaining accurate evidence about the impact of IT in education.

Key words: meta-analysis; computer aided instruction; distance education; impact of IT; research method

The Meta-analysis Research Method

One study, even a well-designed one with a large sample size, becomes more useful when its results are examined in the context of others in the field. Integration of research findings across studies, whether qualitative or quantitative, has its merits and plays an important role in any area of research. Quantitative reviews of large-scale studies have a long history: since the early 1930s, reviewers have used special statistical tools to combine results from series of empirical studies. The work carried out before Glass’s development of meta-analytic methodology in 1976 still exerts an influence on research reviews (Kulik & Kulik, 1989).

Definition of Meta-analysis

Research syntheses using meta-analytic methods made the 1980s an extraordinary time in the history of research into teaching and learning. Cooper (1984), Glass (1977), Glass, McGaw, and Smith (1981), Hedges and Olkin (1983, 1985), Hunter, Schmidt, and Jackson (1982), Jackson (1980), Light and Pillemer (1982), and Rosenthal (1976), among others, provide excellent examples of and insights into the use of meta-analysis for synthesizing research on teaching and learning. The primary purpose of meta-analysis, first advocated by Glass (1976), is “the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings” (p. 3). Since 1976, thousands of reviews have been conducted using meta-analytic methods; a title-word search for the term “meta-analysis” performed on ERIC through FirstSearch returned 3,357 studies. The method has been applied not only in education but also in many other areas, such as sports and exercise (Doherty & Smith, 2005; Martyn-St James & Carroll, 2006), medical research (Aertgeerts, Buntinx, & Kester, 2004; Bischoff-Ferrari, Dawson-Hughes, Willett, Staehelin, Bazemore, Zee, & Wong, 2004), economics (Rubb, 2003), and politics (Roscoe & Jenkins, 2005).

For years, there have been different views regarding what exactly constitutes a meta-analysis. Hunter and Schmidt (2004) modified Bangert-Drowns’s (1986) classification and categorized five forms of meta-analytic method: Glassian meta-analysis; study effect meta-analysis; homogeneity test-based meta-analysis; the Schmidt-Hunter meta-analysis; and validity generalization (psychometric) meta-analysis. These meta-analytic methods differ in terms of the purpose of the review, the unit of analysis, the treatment of study variations, and the outcome of the analysis. However, all of the techniques apply statistical methods to quantitative representations of study outcomes. This distinguishes meta-analysis from traditional, more informal narrative forms of review such as “vote counting” or “box-score” reviews (Liao & Bright, 1993).

Procedures for Conducting a Meta-analysis

There are variations on the basic meta-analysis procedure, depending on the data available. However, the following seven steps are usually included (Roblyer, Castine, & King, 1988); Steps 4 through 6 are illustrated in the sketch after the list:

Step 1: Set criteria for studies to be included;

Step 2: Identify variables which contribute to effect size;

Step 3: Find studies meeting criteria;

Step 4: Calculate individual study effect size;

Step 5: Correct individual study effect size for sampling error;

Step 6: Combine individual studies to determine overall effect size;

Step 7: Identify relationship of effect size to study variables.
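To make Steps 4 through 6 concrete, the following is a minimal sketch in Python, assuming each primary study reports group means, standard deviations, and sample sizes for a treatment and a control group. The pooled-standard-deviation effect size, small-sample correction, and inverse-variance weighting follow the general fixed-effect approach associated with Hedges and Olkin (1985); all study data below are hypothetical.

import math

# Hypothetical study data: (mean, SD, n) for the treatment group,
# followed by (mean, SD, n) for the control group.
studies = [
    (78.0, 10.0, 30, 72.0, 11.0, 30),
    (65.0, 12.0, 45, 63.0, 12.5, 50),
    (81.0, 9.0, 25, 74.0, 10.5, 28),
]

def cohens_d(mt, st, nt, mc, sc, nc):
    # Step 4: standardized mean difference using the pooled SD.
    pooled_sd = math.sqrt(((nt - 1) * st**2 + (nc - 1) * sc**2) / (nt + nc - 2))
    return (mt - mc) / pooled_sd

def hedges_g(d, nt, nc):
    # Step 5: correct the effect size for small-sample bias (Hedges' g).
    return d * (1 - 3 / (4 * (nt + nc) - 9))

# Step 6: combine studies, weighting each by the inverse of its
# sampling variance so that larger studies count more.
num, den = 0.0, 0.0
for mt, st, nt, mc, sc, nc in studies:
    g = hedges_g(cohens_d(mt, st, nt, mc, sc, nc), nt, nc)
    var = (nt + nc) / (nt * nc) + g**2 / (2 * (nt + nc))
    num += g / var
    den += 1 / var

print("Overall effect size:", num / den)

Step 7 would then relate these individual effect sizes to coded study features (e.g., grade level, subject area) through blocking or regression.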

Wolf (1986), in addition, highlighted 13 guidelines for the practice of meta-analysis:

1. Define and report criteria for the inclusion and exclusion of studies.

2. Search for unpublished studies in order to guard against the inflation of Type I error that results from publication bias.

3. Develop coding categories to accommodate the largest proportion of the identified literature.

4. Examine multiple independent and dependent variables separately, blocking on mediating variables.

5. Examine and graph the distribution of results and look for outliers to examine more closely.

6. Check the reliability of raters who code study characteristics.

7. Always accompany combined tests of significance with estimates of effect size.

8. Calculate both raw (unadjusted) and weighted combined tests and effect sizes to examine empirically the impact of sample size on results (a short sketch follows this list).

9. Consider whether it is important and/or practical to calculate nonparametric as well as parametric effect size estimates.

10. Consult the literature on meta-analysis for guidance when in doubt.

11. Combine qualitative reviewing with quantitative reviewing.

12. Describe the limitations of your review and provide guidelines for future research concerning the relationships reviewed.

13. Remember Green and Hall’s (1984, p. 52) dictum: “Data analysis is an aid to thought, not a substitute.”
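As an illustration of guideline 8, the short sketch below (in Python, with hypothetical effect sizes and total sample sizes) contrasts a raw mean effect size with a sample-size-weighted mean; a large discrepancy between the two signals that study size is driving the combined result.

# Hypothetical (effect size, total N) pairs for five studies.
effects = [(0.80, 20), (0.10, 400), (0.45, 60), (0.30, 150), (0.55, 35)]

raw_mean = sum(g for g, n in effects) / len(effects)
weighted_mean = sum(g * n for g, n in effects) / sum(n for _, n in effects)

print(f"raw mean = {raw_mean:.2f}, N-weighted mean = {weighted_mean:.2f}")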

Over the years, the statistical formulae and procedures of meta-analysis have been refined in an effort to reduce errors that may arise from variation among individual studies and to make the effect size a truer estimate of the impact of a given treatment.
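One such refinement, and the method listed for Torgerson and Elbourne (2002) in Table 1, is the DerSimonian-Laird random-effects procedure, which estimates the between-study variance (tau squared) from the heterogeneity statistic Q and adds it to each study's sampling variance before re-weighting. A minimal sketch, assuming effect sizes and variances computed as in the previous example:

def dersimonian_laird(effects, variances):
    # Fixed-effect weights and mean.
    w = [1 / v for v in variances]
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    # Heterogeneity statistic Q and the between-study variance tau^2.
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Random-effects weights incorporate tau^2, flattening the
    # influence of any single very large study.
    w_re = [1 / (v + tau2) for v in variances]
    return sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)

print(dersimonian_laird([0.55, 0.15, 0.62], [0.07, 0.01, 0.08]))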

Advantages of Meta-analysis

There are a number of potential problems with a traditional literature review. These include (a) selective inclusion of studies, often based on the reviewer’s own impressionistic view of the quality of the study; (b) differential subjective weighting of studies in the interpretation of a set of findings; (c) misleading interpretations of study findings; (d) failure to examine characteristics of the studies as potential explanations for disparate or consistent results across studies; and (e) failure to examine moderating variables in the relationship under examination (Wolf, 1986).

In contrast, meta-analysis has been viewed as an efficient way to summarize the findings of many studies (Green & Hall, 1984), while providing several distinct advantages over traditional methods of synthesis (Hauser-Cram, 1983; Jackson, 1980). Abrami and Bernard (2006) listed ten advantages of meta-analysis: (a) it answers questions about effect size; (b) it systematically explores the sources of variability in effect size; (c) it allows for control over internal validity by focusing on comparison studies rather than one-shot case studies; (d) it maximizes external validity, or generalizability, by addressing a large collection of studies; (e) it improves statistical power when a large collection of studies is analyzed; (f) the effect size is weighted by sample size, so large-sample studies have greater weight; (g) when a review is updated, it allows new studies to be added as they become available, or studies to be deleted as they are judged to be anomalous; (h) it allows new study features and outcomes to be added to future analyses as new directions in primary research emerge; (i) it allows analysis and re-analysis of parts of the data set for special purposes (e.g., military studies, synchronous vs. asynchronous instruction, Web-based instruction); and (j) it allows comment on what we know, what is new, and what we need to know.

Other researchers (Green & Hall, 1984; Light & Pillemer, 1982; Wolf, 1986) have suggested that meta-analysis is helpful in highlighting gaps in the literature, providing insight into new directions for research, and finding mediating or interactional relationships or trends that are either too subtle to see or that cannot be hypothesized and tested in individual studies. In general, meta-analysis offers reviewers advantages over other methods of synthesis that are similar to the advantages gained by primary researchers who have the opportunity to move from a small pilot study to a large-scale investigation in which there is a large sample and a wide range of subjects and measures (Hauser-Cram, 1983). Meta-analysis “is an important contribution to social science methodology. It is not a panacea, but it will often prove to be quite valuable when applied and interpreted with care” (Jackson, 1980, p. 455).

Criticisms of Meta-analysis

Like any other research method, meta-analysis has not been free from criticism. Critics have found fault with techniques and features that are characteristic of the approach. For example, meta-analysis can only assess relatively direct evidence on a given topic, and it can be difficult to achieve valid and reliable coding of the characteristics of the primary studies to be analyzed. In addition, when the available studies on a given topic are few in number and their results are relatively heterogeneous, the findings of a meta-analysis may mislead our understanding of the topic. These same concerns, however, could be directed at traditional forms of review. More substantial concerns that have been raised can be grouped into four categories (Cook & Campbell, 1979; Glass et al., 1981; Hunter & Schmidt, 1990; Jackson, 1983; Wolf, 1986; Wortman, 1983):

1. One of the most frequent criticisms of meta-analysis is that it mixes apples and oranges; that is, it combines studies that are so different that they are not comparable.

2. The inclusion of methodologically poor studies in the review can result in misleading results in meta-analysis.

3. The representation of individual studies by multiple effect sizes can result in non-independent data points and misleadingly large samples.

4. Selection bias in reported research can lead to biased meta-analysis results, particularly because published studies, which often show more statistically significant results and larger effect sizes than unpublished studies, are more likely to be included in a meta-analysis; one common response to this problem is sketched below.
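One common, if partial, response to the fourth criticism is Rosenthal’s “fail-safe N”: an estimate of how many unpublished zero-effect studies would have to sit in file drawers before the combined result lost statistical significance. A minimal sketch, assuming a Stouffer-style combination of one-tailed z values (the z values below are hypothetical):

def fail_safe_n(z_values, z_alpha=1.645):
    # Number of additional zero-effect studies needed to pull the
    # combined one-tailed result above p = .05.
    k = len(z_values)
    return sum(z_values) ** 2 / z_alpha**2 - k

print(fail_safe_n([2.1, 1.8, 2.5, 1.2, 2.9]))  # about 36 studies

The larger the fail-safe N relative to the number of located studies, the less plausible it is that publication bias alone explains the combined effect.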

Improvements in meta-analysis techniques by some researchers (e.g., Kulik, Kulik, & Bangert-Drowns, 1985; Hedges & Olkin, 1985; Hunter & Schmidt, 2004; Lipsey & Wilson, 2000; Rosenthal & Rubin, 1982) have answered some of these criticisms, but it remains up to the researcher preparing a synthesis to determine whether the advantages of the meta-analysis technique outweigh the disadvantages for the particular area under investigation. In general, our view is that meta-analysis techniques add considerably to our understanding of phenomena related to teaching and learning. As noted by Abrami and Bernard (2006), “It goes far beyond what a single study might ever hope to contribute about a phenomenon and provides a greater case for the generalizability of results across populations, materials, and methods.”

Review of Meta-Analysis Studies of Information Technology in Education

The period from 1972 to 1986

Owing to the widespread adoption of information technologies in teaching and learning, the number of research studies into the impact of IT in education, and more recently, with the growing use of the Internet, into distance education and online learning, has proliferated in recent decades. This large body of published studies has enabled educational researchers to conduct meta-analyses. Walberg (1983) introduced and illustrated the methods of research synthesis, summarized the substantive findings regarding teaching, and evaluated the methods of the reviews. These syntheses covered studies before 1983, summarizing findings on general teaching and on areas of instructional technology (IT in education). Several types of review were introduced: a review of reviews of teaching effects (Waxman & Walberg, 1982), the sixteen research syntheses carried out by Walberg (1983), the University of Michigan group’s team approach to eleven syntheses, a synthesis of bivariate productivity studies completed by the group at the University of Illinois at Chicago (Walberg, 1984; Walberg, Schiller, & Haertel, 1979), a synthesis of multivariate studies also completed by the same group (Walberg, Pascarella, Haertel, Junker, & Boulanger, 1982), syntheses of open-education research (Giaconia & Hedges, 1982; Hedges, Giaconia, & Gage, 1981; Horwitz, 1979; Peterson, 1979), and syntheses of instructional theories (Haertel, Walberg, & Weinstein, 1983).

Another large-scale synthesis on instructional technology was completed by Roblyer, Castine, and King (1988), who assessed the impact of computer-based instruction by reviewing twenty-six research studies conducted between 1972 and 1986. The reviews from this period did not reach similar conclusions, and only a few clear agreements were found among their findings. However, nearly all the reviews seemed to yield evidence that computer-based treatments offered some benefits over other instructional methods (Roblyer, Castine, & King, 1988). While Walberg (1983) and Roblyer, Castine, and King (1988) completed their syntheses by 1988, this chapter of the Handbook continues that work and includes studies published from 1988 to 2006. The compiled data are based on 44 studies utilizing meta-analysis techniques.

The period from 1988 to 2006

The studies included in this research met the following predetermined criteria:

  1. They included quantitative results in which cognitive, affective, social, or psychomotor performance or skills were the dependent variables, and computer-assisted instruction, computer-based instruction, distance education, or Internet technologies were the treatment.
  2. They had experimental, quasi-experimental, or correlational research designs.
  3. They were located through a key-word search (computer aided instruction (CAI), computer based instruction (CBI), distance education, online learning, computer mediated communication (CMC)) performed on ERIC through FirstSearch, limited to publication years from 1988 to 2006. Because Dissertation Abstracts International was unavailable to the author, dissertations were not included in the study.

The 44 studies were grouped into two categories: (1) Computer-assisted instruction (CAI): computer-based instruction, computer-assisted instruction, information and communication technologies (ICT), teaching or learning with multimedia, and all kinds of tools related to educational technology; CAI is used as the general term here. (2) Distance education (DE) and Internet technologies: tele-courses, online learning (synchronous or asynchronous), and courses utilizing online computer-mediated communication; DE is used as the general term here. When the effect sizes of CAI and DE were combined, the overall effect sizes were 0.29 for the cognitive aspect and 0.06 for the affective aspect. For CAI alone, the overall effect sizes were 0.41, 0.15, and -0.02 for the cognitive, affective, and social skill aspects, respectively. Table 1 and Table 2 list the meta-analyses included in the present chapter for CAI vs. conventional instruction and DE vs. conventional instruction, respectively; the overall effect sizes appear at the bottom of each table.
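The chapter does not spell out the exact weighting behind the “overall” figures at the bottom of Tables 1 and 2. The sketch below shows one plausible approach, a mean of each meta-analysis’s reported effect size weighted by its number of primary studies; the two entries are taken from Table 1 purely as an illustration.

# (number of primary studies, reported mean effect size), from Table 1.
cai_cognitive = [
    (42, 0.27),   # Bayraktar (2001-2002)
    (254, 0.30),  # Kulik & Kulik (1991)
]

total_k = sum(k for k, _ in cai_cognitive)
overall = sum(k * es for k, es in cai_cognitive) / total_k
print(f"Study-count-weighted mean: {overall:.2f}")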

A further analysis compared the effects for K-12 and non-K-12 students in both the CAI and DE meta-analyses. For CAI, the effect sizes were almost identical for K-12 and non-K-12 students on every aspect. For DE, however, the effect sizes for non-K-12 students were slightly higher than those for K-12 students on both the cognitive and affective aspects.

Table 1

Overall effect sizes for 30 meta-analyses of CAI vs. conventional instruction

Author / Number of Studies / Effect Size^a / Method Applied^b / Subject^c / Grade Level
Azevedo & Bernard (1995) / Immediate = 22; Delayed = 9 / Immediate = 0.80; Delayed = 0.35 / Hedges & Olkin; Hunter & Schmidt; Rosenthal / NA / NA
Bayraktar (2001-2002) / 42 / O = 0.27 / Hunter & Schmidt / General science / Secondary and college
Bangert-Drowns (1993) / 32 / C = 0.27; A = 0.12 / Glass, et al. / Writing / All
Bergstrom (1992) / 15 / Computer adaptive testing = -0.002 / Hedges & Olkin / All / All
Blanchard, Stock, & Marshall (1999) / 10 / O = 0.16 / Hedges & Olkin / Math, reading, and language arts / Elementary
Christmann & Badgett (2000) / 26 / O = 0.13 / Glass, et al. / All / Higher Ed
Christmann, Badgett, & Lucking (1997) / 27 / O = 0.21 / Glass / All / Secondary
Christmann & Badgett (1997) / 26 / O = 0.19 / Glass, et al. / All / All
Christmann, Lucking, & Badgett (1997) / 28 / O = 0.17 / Kulik / NA / Secondary
Cohen & Dacanay (1992) / 37 / O = 0.41 / Glass, McGaw, & Smith / Health profession education / Adults
Dacanay & Cohen (1992) / 30 / O = 0.37 / Glass / Dental education / College or above
Dwight & Feigelson (2000) / 30 / S (impression management) = -0.08; S (self-deceptive enhancement) = 0.04 / Hedges & Olkin / NA / All
Fletcher-Flinn & Gravatt (1995) / 120 / O = 0.24 / Kulik / All / All
Khalili & Shashaani (1994) / 36 / O = 0.38 / Glass, et al. / All / All
Kulik & Kulik (1991) / 254 / O = 0.30 / Glass, et al.; Cohen / All / All
Lee (1999) / 19 / C = 0.41; A = -0.04 / Glass / All / All
Liao (1992) / 31 / O = 0.48 / Glass, et al. / Problem-solving ability / All
Liao (1998) / 35 / O = 0.48 / Kulik & Bangert-Drowns / All / All
Liao (1999) / 46 / O = 0.41 / Kulik & Bangert-Drowns / All / All
Liao (2007) / 52 / O = 0.55 / Kulik & Bangert-Drowns / All / All
Liao & Bright (1991) / 65 / O = 0.41 / Glass, et al. / Problem-solving ability / All
Lou (2004) / 71 / C = 0.36; A = 0.07 / Hedges & Olkin / NA / All
McNeil & Nelson (1991) / 60 / O = 0.53 / Hedges & Olkin / All / NA
Pearson, Ferdig, Blomeyer, & Moran (2005) / 20 / O = 0.49 / Hedges; Lipsey & Wilson / Reading / Grades 7-9
Ryan (1991) / 40 / O = 0.30 / Glass, et al.; Hedges / NA / K-6
Schmidt, Weinstein, Niemic, & Walberg (1986) / 18 / O = 0.67 / Glass, et al. / Special education / K-12
Soe, Koki, & Chang (2000) / 17 / O = 0.13 / Rosenthal / Reading / K-12
Timmerman & Kruepke (2006) / 118 / O = 0.12 / Hunter & Schmidt / All / Higher Ed
Torgerson & Elbourne (2002) / 7 / O = 0.37 / DerSimonian & Laird / Spelling / K-6
Waxman, Lin, & Michko (2003) / 42 / O = 0.41; C = 0.48; A = 0.46; S = -0.091 / Glass, et al.; Hunter, Schmidt, & Jackson / All / All
The overall effect size of the cognitive aspect: 0.41 (K-12 = 0.32; non-K-12 = 0.31)
The overall effect size of the affective aspect: 0.15 (same for K-12 and non-K-12)
The overall effect size of the social skill aspect: -0.02 (same for K-12 and non-K-12)

Note. ^a O = overall, C = cognitive achievement, A = affective achievement, S = social skill. ^b Method Applied = the meta-analysis procedures used. ^c NA = data not available.

Table 2

Overall effect sizes for 14 meta-analyses of distance education and Internet technologies vs. conventional instruction

Author / Number of Studies / Effect Size^a / Method Applied^b / Subject^c / Grade Level
Allen, Mabry, Mattrey, Bourhis, Titsworth, & Burrell (2004) / 39 / O = 0.05 / Hunter & Schmidt / All / Higher Ed and adults
Allen, Bourhis, Burrell, & Mabry (2002) / 24 / A = 0.09 / Hunter & Schmidt / NA / Higher Ed
Bernard, Abrami, Wade, Borokhovski, & Lou (2004) / 232 / C = -0.04; A = -0.10; S = -0.09 / Glass, et al.; Hedges, Shymansky, & Woodworth / NA / All
Bernard, Abrami, Lou, Borokhovski, Wade, Wozney, Wallet, Fiset, & Huang (2004) / 232 / C = 0.013; A = -0.081; Delayed = -0.057 / Hedges & Olkin / All / All
Cavanaugh, Gillan, Kromrey, Hess, & Blomeyer (2004) / 14 / O = -0.03 / Hedges & Olkin / All / K-12
Cavanaugh (2001) / 19 / O = 0.15 / Cohen & Hedges / All / K-12
Cavanaugh (1999) / 19 / O = 0.15 / Cohen; Hedges, Shymansky, & Woodworth / All except foreign language / K-12
Lou, Bernard, & Abrami (2006) / 103 / O = 0.02 / Hedges & Olkin / All / Undergraduate
Machtmes & Asher (2000) / 19 / O = -0.009 / Hedges & Olkin / All / Higher Ed and adults
Shachar & Neumann (2003) / 86 / O = 0.37 / Glass; Hunter & Schmidt / NA / Higher Ed
Sitzmann, Kraiger, Stewart, & Wisher (2006) / 96 / O = 0.14 / Hedges & Olkin / All / Adults
Williams (2006) / 25 / O = 0.15 / Glass, et al. / Allied health profession / Higher Ed and adults
Zhao, Lei, Yan, Lai, & Tan (2005) / 51 / O = 0.10 / Cooper & Hedges / All / All
Zhao (2003) / 9 / O = 1.12 / Hedges & Olkin / Second language Ed / Adults
Distance education and Internet technologies:
The overall effect size of the cognitive aspect: 0.17 (K-12 = 0.06; non-K-12 = 0.19)
The overall effect size of the affective aspect: -0.03 (K-12 = -0.09; non-K-12 = -0.03)

Note. ^a O = overall, C = cognitive achievement, A = affective achievement, S = social skill. ^b Method Applied = the meta-analysis procedures used. ^c NA = data not available.