Chapter 10.3
Measuring the Impact of IT on Students' Learning
Rachel M. Pilkington
School of Education
The University of Birmingham
Birmingham, UK
Abstract: Has the recent rapid expansion in the use of IT in schools had a positive impact on learning? Research has presented us with mixed results that are often difficult to interpret. Providing computers is certainly no guarantee of their effective use: how IT resources are used in the local context to meet individual students' needs seems critical to success. In short, the alignment of particular types of IT to particular educational objectives and assessment methods, together with planned, structured and guided activity, is likely to determine whether IT impacts learning. However, this paper argues that the questions of how, when and why IT impacts learning will require more holistic approaches to data-gathering than traditional experimental or survey-based approaches have provided. Further research adopting a range of methods is needed if we are to discover precisely how particular combinations of IT, instructional strategy and student activity lead to learning outcomes.
Keywords: experimental design; survey research; case studies; meta-analysis; impact on learning
Introduction
Many governments have continued to fund rapid expansion in the use of IT in schools and, not unreasonably, want to know if the positive impact of IT on learning is commensurate with investment (Tolmie, 2001). Politicians want research evidence to address the question 'was it worth it?' (Pittard, 2004). However, the question is much easier to pose than to answer. This is partly a matter of the potential and limitations of available research approaches, and partly a matter of how 'learning' itself is to be measured.
The aim of this chapter is to provide an overview of progress in researching the impact of Information Technology (IT) on students' learning. Looking back over the last two decades of research in this area, a number of recurring issues emerge. These issues have led many to call for a paradigm shift in our approach to educational research. However, the nature of the shift called for is, itself, controversial, as it relates to alternative perspectives on how educational research should be conducted, how learning should be measured, and how we should approach teaching and learning if we are to maximise potential. In addition, the ways in which different authors view the role of IT within learning and teaching processes also affect how they evaluate the impact of IT on learning.
In presenting this review I discuss some of these alternative perspectives and, in doing so, suggest what is known about the impact of IT on learning, gaps in our understanding and future directions for research.
Impact of IT on Learning – Experimental Research Designs
There is debate as to the best research approach to take when measuring the impact of IT on learning. Those advocating experimental methods often regard randomised controlled trials (RCTs) as the 'gold standard'. The aim in experimental methods is to compare the performance of students assigned to an intervention group using IT with the performance of students exposed to more traditional methods. In these studies 'learning' is often reduced to student performance on a test.
Ainsworth and Grimshaw (2004) point out that evaluations of computer-based Intelligent Tutoring Systems (ITSs) have achieved effect sizes of between 0.4 and 1 compared to classroom teaching (whilst one-to-one tutoring by expert tutors produces on average an effect size of 2, according to Bloom, 1984). However, such effects are not consistently gained. Ainsworth and Grimshaw (2004) found, when evaluating the REDEEM Intelligent Tutoring Authoring System, that effect sizes varied widely, from 0.1 to 1.33 (mean 0.51). Effect sizes for other Computer Aided Instruction (CAI) software are often reported to be even more variable, or negative (Andrews, 2004; Eng, 2005). Moreover, experimental control is easier to achieve for self-contained computer-based learning software used by individuals than it is for more open social learning in classroom environments, where activity at the computer is just one activity amongst many. REDEEM worked best when teachers used the flexibility of the design to add additional interactivity and when students took advantage of this extra interactivity by answering questions or writing notes whilst learning (Ainsworth & Fleming, 2006).
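For readers less familiar with the measure, the effect sizes quoted in such studies are standardised mean differences (the formula below is the conventional Cohen's d, given here for orientation rather than taken from the studies cited): the difference between the mean test scores of the IT and control groups, divided by their pooled standard deviation,

$$
d = \frac{\bar{x}_{\mathrm{IT}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}.
$$

On this scale, an effect size of 1 means the average student in the IT condition scores one standard deviation above the control-group mean; Bloom's effect size of 2 for one-to-one tutoring places the average tutored student above roughly 98% of conventionally taught students.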
This study illustrates a number of important issues affecting the value of comparative experimental and quasi-experimental studies, the principal one being that the local conditions of use are of central importance. Comparative research designs that use different groups of students and/or tutors can be confounded by individual differences in the characteristics of student and tutor. Local contextual variables associated with the implementation of instructional strategies can also impact on success. As Ringstaff and Kelley (2002) point out, classrooms are not experimental laboratories where scientists can compare the effectiveness of technology to traditional instructional methods while holding all other variables constant. Therefore, whilst RCTs may still be regarded as the 'gold standard' by many, the difficulty of isolating the role of the computer-based element in the learning context can undermine the value of conclusions drawn (Tolmie, 2001; Pittard, 2004; Cook, 2006).
Joy and Garcia (2000) argue that the inability to control such variables can make it less likely that researchers will find significant differences between computer-based treatment groups and no-treatment groups. Similarly, Tolmie (2001) argues that it is unlikely, given the complexity of the research context, that the addition of any new element into the classroom environment could have a straightforward impact on learning.
In adopting the quasi-experimental comparative approach, there are often also ethical issues concerning which students may access which resources, and when. Indeed, setting up these kinds of study in schools and colleges is notoriously difficult because educational practitioners are concerned that the research should not interfere with day-to-day classroom practice. In particular, research should not burden or disadvantage some students more than others. These problems account, in part, for the scarcity of well-controlled comparative studies that measure the impact of IT on learning.
Joy and Garcia (2000) conclude that the outlook for comparative studies is bleak and that we should instead investigate particular combinations of instructional strategies, media and activities that produce desired learning outcomes. Robust measurement of impact is important, but RCTs should perhaps be supplemented with richer 'added value' methods (Pittard, 2004). Tolmie (2001) also suggests more context-sensitive approaches are needed which consider the interplay of technology with existing practice.
However, alternatives to the quasi-experimental approach are not without their own difficulties. Rogers and Finlayson (2004) agree that interpreting quantitative data from comparative studies and large-scale surveys is often problematic; yet qualitative studies have also been criticised for the small numbers of students they involve and for the special conditions of study, which make drawing general conclusions difficult.
Researchers on the ImpaCT2 project (Harrison, Comber, Fisher, Haw, Lewin and Lunzer et al., 2002) proposed a socially contextualised model of research that recognises that IT experience is only part of a larger picture of pupils' interaction with computer-based technologies. Consequently they looked at the overlap between out-of-school learning and school-based learning and attempted to assess the impact of some of these additional influences through collection of qualitative data. Kennewell (2003) similarly argues that IT should be studied alongside other variables in natural pedagogic settings using both quantitative and qualitative research methods. Later in this chapter we explore further what large-scale surveys and meta-analysis of case-based research can tell us about the impact of IT on learning.
Measuring the Impact on Learning
An associated problem in drawing general conclusions concerning the impact of IT on learning relates to how we measure learning. Courses are typically designed so that aims and objectives align with delivery and assessment strategies; one of the first casualties of introducing IT into the curriculum is this original alignment (Noss & Pachler, 1999; Ellaway, 2006). In short, the delivery method has an effect on what is learned and how it can reasonably be assessed. This means that it is difficult to prepare a common form of assessment that can fairly compare the traditional course with the computer-based course. This problem was of particular concern to those involved with the ImpaCT2 project (Harrison et al., 2002), discussed further later in this chapter.
Further, several authors have argued that knowledge gained through IT may be different in nature from that gained through other methods (Laurillard, 1978; Cheng, 1999). This is not to say that one or the other is necessarily better, but that they are different. Thus Cheng (1999) notes that the representations used for learning in science and mathematics can substantially determine what is learnt and how easily this occurs. Clements (2000) argues that representations used with computational media offer unique opportunities for problem- and project-oriented pedagogical approaches that can catalyse pedagogic innovation. Hammond (1994) concludes that this kind of innovation makes it difficult to compare 'with' and 'without' IT conditions since introducing IT changes the nature of the learning activity. As McCormick (2004) points out, research in assessment has not kept up with, for example, the new learning opportunities offered by IT through collaborative construction of multimedia or web-based products. Such products may serve different purposes, demand different skills and address different audiences from those of the traditional handwritten essay.
When taking a quasi-experimental approach, many studies have addressed this problem by devising their own assessments that more validly reflect the skills and knowledge to be compared. However, the point remains that the introduction of IT very often changes the nature of the learning tasks and outcomes, for good or ill, and we need to be sensitive both to evaluating what is actually learned (in both conditions) and to whether what has been learnt is equally valuable relative to our educational aims.
Impact on Learning – Survey-Based Approaches
In this section the aim is to examine what is known about the impact of IT on learning from survey-based approaches. A number of large-scale surveys have been commissioned to evaluate the impact of funding on learning (Harrison et al., 2002; Conlon & Simpson, 2003; Butt, Fielding, Foster, Gunter, Lance and Lock et al., 2003; Thomas, Butt, Fielding, Foster, Gunter and Lance et al., 2003, 2004; Burns & Ungerleider, 2003; Hennessy & Deaney, 2004; Underwood, Ault, Banyard, Bird, Dillon and Hayes et al., 2005). Such surveys often seek to discover the impact of IT by comparing a number of case schools. This enables researchers to study authentic use of IT by teachers and learners without the need for experimental manipulation, and yet still make more general claims than can be provided by a single local case study.
The ImpaCT2 project (Harrison et al., 2002) involved a large-scale survey of the use of IT in UK primary and secondary schools to see what effect this investment was having. Strand 1 of the study looked at baseline tests administered at the beginning and end of each key stage (standard national attainment tests) alongside performance on GCSEs (qualifications at 16 years) to try to determine evidence of the value added to the education of children. Data related to use of IT at home and at school were further analysed in relation to gender, ethnicity and socio-economic factors. Overall the project found a small positive relationship between GCSE performance and IT use, with no cases where there was a significant negative relationship, i.e. no case where there was a statistically significant advantage for lower IT use. However, there was no consistent advantage for higher IT use in all subjects or at all key stages.
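To make the logic of such 'value added' analyses concrete, the following is a minimal illustrative sketch in Python, using simulated data and hypothetical variable names; it is not the ImpaCT2 analysis itself. The idea is to partial out what baseline attainment already predicts of the outcome, and then ask whether reported IT use is associated with the residual gain:

```python
# Illustrative 'value added' analysis: simulated data, not ImpaCT2's own.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200
baseline = rng.normal(50, 10, n)   # attainment at start of key stage
it_use = rng.normal(5, 2, n)       # self-reported hours of IT use per week
# Simulated outcome: driven mainly by prior attainment, plus a small IT effect
gcse = 0.8 * baseline + 0.5 * it_use + rng.normal(0, 8, n)

# Regress outcome on baseline; each pupil's residual is their 'value added'
slope, intercept, *_ = stats.linregress(baseline, gcse)
value_added = gcse - (intercept + slope * baseline)

# A small positive correlation here would mirror the pattern ImpaCT2 reported
r, p = stats.pearsonr(it_use, value_added)
print(f"IT use vs value added: r = {r:.2f}, p = {p:.3f}")
```

Even in this idealised form the design is correlational: a positive correlation cannot by itself separate the effect of IT use from that of unmeasured variables associated with it, such as home background or teaching quality, which is one reason the ImpaCT2 team supplemented the quantitative strand with qualitative data.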
The quantitative data alone raised many questions. However, the authors concluded that the most likely reason for the lack of consistency was a lack of constructive alignment between assessment, learning and effective teaching: i.e. the factor most likely to impact on learning remained the quality of the teaching (with or without IT). Because the results of quantitative survey-based research are often confusing in relation to the impact of IT on learning, survey designs also need to capture a range of other variables that may be implicated. Use of, and access to, IT in schools are perhaps the two related variables that have been studied most.
The Transforming the School Workforce (TSW) Pathfinder project in the UK (Thomas et al., 2004) was not designed to look at the impact of IT on learning per se, but rather at the ways in which IT was being used in schools. The survey did record, through questionnaires and interviews, the use of IT in school and at home. What this survey principally revealed was that, despite a push toward integrating IT into the classroom, use of computers for learning and teaching remained relatively modest. With notable exceptions, teachers were mainly using IT to support basic literacy, numeracy and IT skills, with many fewer examples of using IT to support teaching in other subjects, for collaborative work, extended project work and discussion. The main computer applications used were word-processing, presentation software and the Internet. These applications were used mostly to support teachers in lesson preparation rather than by children in the classroom.
The IT Testbed baseline project (Butt et al., 2003) found similar results. Both studies suggested, from a quantitative perspective, a disappointingly narrow range of IT resources being used in schools. From quantitative data it was difficult to tell why this was the case, although staff recognised a need for additional training in using IT for pedagogic purposes. However, in both surveys there were outstanding examples, such as the use of specialist multimedia software (e.g. CAD and data-logging) to improve and extend the curriculum in art and design and in science classes. There were also examples of use of the Interactive Whiteboard, Desktop Publishing and PowerPoint software for extended project work and presentations of children's work in a range of subject classes.
Similar findings emerge from international studies: Conlon and Simpson (2003) compared the introduction of IT in Scottish classrooms with its introduction in schools in Silicon Valley and found similarities in access to resources at home and at school, and in the main uses of the technology for word-processing, email and searching the Internet. They also showed (as in the IT Testbed baseline and TSW Pathfinder studies) that teachers were not inherently resistant to the use of the technology. Around half of teachers regularly used the computer for report writing and preparing lessons, but use of computers by pupils in schools was much more limited. The computer was seldom used in class unless the subject studied was technology intensive. Students in secondary schools used computers in class only once or twice a week, and the majority of teachers used technology to reinforce existing patterns of teaching rather than to innovate.