Chapter 10.6

Methods for Large Scale International Studies on ICT in Education

Willem Pelgrum

Danish University of Education

Copenhagen, Denmark

Tjeerd Plomp

University of Twente

Enschede, The Netherlands

Abstract: International comparative assessments are a research method for describing and analyzing educational processes and outcomes; they are used to ‘describe the status quo’ in educational systems from an international comparative perspective. This chapter reviews different large scale international comparative studies on ICT in education, addressing in particular the Computers in Education study (CompEd) and the Second Information Technology in Education Study (SITES). The chapter covers the major concepts and indicators addressed in these studies, design issues, potential outputs, and recommendations for future international comparative studies on ICT in education.

Keywords: Worldwide International Statistical Comparative Educational Assessments (WISCEAs); Computers in Education Study; Second Information Technology in Education Study; ICT indicators

1. Introduction

International comparative assessments are a research method for describing and analyzing educational processes and outcomes; they are used to ‘describe the status quo’ in educational systems from an international comparative perspective. This type of assessment started in the 1960s and has mainly focused on core subjects, such as mathematics, science and reading (literacy). Over time, assessments were also conducted that focused in particular on the use of ICT in education, the first one being the Computers in Education study (CompEd), conducted in the late 1980s and early 1990s under the auspices of the International Association for the Evaluation of Educational Achievement (IEA) (Pelgrum & Plomp, 1993).

Nowadays different types of international comparative assessments exist, such as:

-  Projects by international organizations, e.g.:

-  Projects funded by the European Commission (Eurydice, 2004)

-  Exchange of experiences via the World Summits on the Information Society by United Nations Educational Scientific and Cultural Organization [UNESCO] (see: www.unesco.org)

-  Secondary analyses of assessments conducted by the Organisation for Economic Co-operation and Development (OECD, 2006)

-  Projects of the World Bank (Hepp et al., 2004)

-  Case studies of selected schools in a number of different countries (e.g. SITES-Module 2, a study looking at innovative pedagogical practices utilizing ICT; Kozma, 2003)

-  International assessments using national representative samples of schools, teachers, and/or students, focusing on collecting and producing comparative indicators regarding educational processes and outcomes

This chapter will in particular address the last category of large scale international comparative assessments. In this chapter these projects are labelled as ‘assessments’ because of their primary function, as explained in Figure 1. Such assessments also exist on a regional basis (for instance Bonnet (2004) and the projects of the Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ, see: http://www.sacmeq.org/links.htm)). This chapter will primarily focus on large scale international assessments that are conducted on a worldwide scale by the International Association for the Evaluation of Educational Achievement (IEA) and the Organisation for Economic Co-operation and Development (OECD). We will call these types of assessments Worldwide International Statistical Comparative Educational Assessments (WISCEAs) in order to distinguish them from qualitative assessments and from regional statistical comparative assessments.

In this chapter, the following questions will be addressed:

-  Which WISCEAs addressed issues related to ICT?

-  What were the major concepts and indicators that were addressed in these WISCEAs?

-  How are WISCEAs designed?

-  What potential outputs are provided by ICT-related WISCEAs?

-  Which recommendations for ICT-related WISCEAs can be given?

These questions are covered in the following sections, starting with a short description of the history of WISCEAs in terms of purposes, methods, audiences, and organizations, in particular with regard to ICT-related matters. This is followed by zooming in on the research questions underlying WISCEAs and on conceptual and design issues. Finally, potential outputs of ICT-related WISCEAs are presented, as well as recommendations for future ICT-related WISCEAs.

2. Historical Sketch of ICT-Related WISCEAs

While scientific interest was the main purpose of the first WISCEAs, initiated in the 1960s by the IEA, nowadays policy interests are the main driver for this type of assessment, as witnessed by the fact that the OECD has become a main player in this field by conducting the Programme for International Student Assessment (PISA) (e.g. OECD, 2004). Although these organizations also run other types of comparative assessments (policy analyses, qualitative studies), they are most widely known for their ‘league tables’ showing rank-ordered country scores on achievement tests. Since the first IEA feasibility study on mathematics was conducted in the 1960s, more than 15 WISCEAs had been reported by the IEA up until 2005. The OECD PISA assessment has been conducted and reported twice. Most WISCEAs have focused on basic school subjects such as mathematics, science and reading.

The International Association for the Evaluation of Educational Achievement (IEA) has a long tradition of conducting WISCEAs dedicated to ICT in education. The first one was the Computers in Education study (CompEd), conducted in two phases between 1987 and 1992 (Pelgrum & Plomp, 1993; Pelgrum, Janssen Reinen & Plomp, 1993). The first phase targeted schools and teachers (users as well as non-users) at the primary and secondary level, while the second phase also included samples of students. The indicators collected in this study had high reliabilities (e.g. attitudes of school leaders and teachers, the extent to which computers were integrated in teaching and learning, self-rated competencies of teachers, ICT-competencies of students).

The second wave of ICT-WISCEAs consisted of the Second Information Technology in Education Studies (SITES), which started with SITES Module 1 (1998-1999; see Pelgrum & Anderson, 1999, 2002). In SITES Module 1 a large number of indicators were collected at school level in 26 education systems (in primary, lower secondary and upper secondary schools). Pelgrum and Anderson (2002) showed that many of the indicators from several concept domains (curriculum, infrastructure, staff development, management and organization, and innovative practices) had high reliabilities within as well as across countries. This was followed by SITES Module 2, qualitative case studies (174 in 28 countries) of ICT-supported pedagogical innovations (Kozma, 2003). SITES2006 was the most recent ICT-related WISCEA of the IEA. In this assessment (which was focused on national representative samples of lower secondary schools and mathematics and science teachers from 22 education systems), a large number of high quality indicators were collected relating to the concepts mentioned in Figure 3.
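The scale reliabilities reported in these studies are typically estimated with an internal-consistency coefficient such as Cronbach’s alpha, computed per country and on the pooled international data. As a minimal illustration (the item data below are made up, not taken from SITES or CompEd), alpha for a multi-item attitude scale can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-item attitude scale answered by 6 respondents (1-5 Likert)
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [2, 1, 2, 2],
], dtype=float)
print(round(cronbach_alpha(scores), 2))
```

Values above roughly 0.7 are conventionally read as adequate reliability; in a WISCEA the same computation would be repeated within each participating country to check that the scale holds up cross-nationally.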

From CompEd to the three SITES studies, a shift can be observed in the interests of policy makers and researchers. The focus has shifted from counting computers and inventorying computer use and the obstacles to using ICT, towards how pedagogical practices are changing and being adapted to the needs of education in a knowledge or information society (‘21st century’), and how ICT is used to support these practices.

Recent WISCEAs that focused on core school subjects (Trends in International Mathematics and Science Study [TIMSS] 2003 and PISA 2003) also contained indicators of ICT-availability and ICT-use, but the ICT-related indicators in these assessments cover only a small number of aspects compared to the ICT-dedicated WISCEAs of the IEA, in which typically hundreds of variables are measured.

3. Questions Underlying ICT-Related WISCEAs

WISCEAs may have several functions. Howie and Plomp (2005) distinguish the following: description (mirror), benchmarking, monitoring, enlightenment, understanding, and cross-national research. Some of these functions (such as benchmarking, monitoring, understanding and cross-national research) can be explicitly addressed by the research design, while others (mirror, enlightenment) are more or less collateral. In general, a basic purpose of WISCEAs is to contribute to educational improvement. With regard to ICT-related WISCEAs, major questions are for instance:

-  Do students have sufficient opportunities to use ICT for learning purposes?

-  Does the use of ICT for teaching and learning result in higher school effectiveness?

-  Can the use of ICT contribute to diminishing inequities?

Target audiences of WISCEAs are mainly macro-level actors: policy makers, inspectorates, researchers, and special interest groups in educational practice. Although, in principle, the outcomes of WISCEAs are also of interest to other actors (e.g. teachers and parents), the international reports of WISCEAs do not specifically target questions that are relevant for these groups (e.g. how well is my school performing?).

The design and methodology of WISCEAs are determined by the policy and research questions that are posed at their start. In generic terms, one may distinguish descriptive and analytical questions. Descriptive questions related to ICT are, for example: “Does access to ICT in our schools measure up to that in other countries?” and “To what extent (comparatively) are our teachers skilled in using ICT in teaching and learning?”. Analytical questions are usually phrased in terms of potential causal factors, for instance, in generic terms: “To what extent does the use of ICT result in changes in the way that teaching and learning take place?”

Although assessment is the primary function of WISCEAs, several secondary functions can be distinguished, as illustrated in Figure 1, which shows that policy issues and derived research questions (ideally) constitute the basis for designing an assessment. The results of the assessments are used for making inferences about strengths and weaknesses of the education systems of the participating countries (evaluation and reflection). Once weaknesses are spotted, analysis activities need to be undertaken in order to find the potential causes (diagnosis). This part of a study is guided by analytical research questions. The results of such analyses of the available data (in countries sometimes supplemented with additional data collection) may be used for undertaking interventions aimed at improvement of the education system.

Figure 1

Policy cycle showing several functions of WISCEAs.

Figure 1 illustrates that posing analytical questions a priori (that is, when designing a WISCEA) can be problematic when the descriptive results are not yet available. Therefore, analytical questions in WISCEAs are often based on hypothetical outcomes inferred from reviews of the research literature, or they are generated a posteriori, based on the observations resulting from the descriptions.

Policy and research questions form the start of any WISCEA. Examples of policy questions are:

-  What ICT equipment is needed in educational practice?

-  Which competencies do teachers need in order to adequately integrate ICT in their lessons?

Examples of research questions are:

-  Which ICT infrastructure is available in schools and to what extent do educational practitioners perceive this as sufficient?

-  What are the ICT related competencies of teachers and in which areas do they need further training?

The concepts referred to in these questions (in the examples above: infrastructure, teacher competencies) are organized in a conceptual framework, which on an abstract level forms the link to indicators and constitutes the basis for developing instruments.

In the next sections each of these aspects is discussed briefly.

4. Conceptual Frameworks

Conceptual frameworks mainly have a descriptive and structuring function: they describe, more or less, the contours of the ‘landscape’ that will be assessed, and at the same time provide a structure for deriving relevant constructs. For instance, an important policy issue is equity of access to ICT (concept), which may be defined in terms of the availability of ICT to students at home and at school, their gender, and the minority group to which they belong (constructs), for which indicators can be developed (such as responses to questions like ‘Do you have a computer at home which you can use for school work?’, ‘What is your gender?’ or ‘In which country were you born?’). In the context of this type of study, an indicator is often called a ‘variable’ or ‘statistic’.
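The concept–construct–indicator hierarchy described above can be made concrete as a simple nested mapping. The sketch below is purely illustrative (the names and the single concept shown are taken from the equity example above, not from an actual instrument):

```python
# Hypothetical fragment of a conceptual framework: one policy concept,
# its constructs, and the survey questions (indicators) that measure them.
framework = {
    "equity_of_access": {                     # concept
        "home_access": [                      # construct
            "Do you have a computer at home which you can use for school work?",
        ],
        "gender": ["What is your gender?"],
        "minority_status": ["In which country were you born?"],
    },
}

# Enumerate every indicator that operationalizes the concept
indicators = [question
              for construct in framework["equity_of_access"].values()
              for question in construct]
print(len(indicators))  # number of indicators for this concept
```

In a real WISCEA each construct would typically be measured by several items, and this mapping is what links the instrument back to the framework during analysis.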

An important distinction is that between independent and dependent indicators, both of which can be used for descriptive purposes, while the independent variables play an important role in addressing the analytical questions shown in Figure 2 below.

Figure 2

Links between independent and dependent indicators
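In analyses of the kind sketched in Figure 2, a dependent indicator is related to one or more independent indicators, in the simplest case via correlation or least-squares regression. The sketch below uses made-up school-level data and hypothetical indicator names, purely to show the mechanics:

```python
import numpy as np

# Made-up school-level indicators: weekly hours of ICT use (independent)
# and a hypothetical pedagogical-innovation score (dependent).
ict_use = np.array([2.0, 4.0, 5.0, 7.0, 9.0, 11.0])
innovation = np.array([1.1, 1.9, 2.4, 3.2, 4.1, 5.0])

# Least-squares fit: innovation ~ intercept + slope * ict_use
X = np.column_stack([np.ones_like(ict_use), ict_use])
(intercept, slope), *_ = np.linalg.lstsq(X, innovation, rcond=None)
r = np.corrcoef(ict_use, innovation)[0, 1]
print(f"slope={slope:.2f}, r={r:.2f}")
```

Actual WISCEA analyses are considerably more involved (multilevel models, sampling weights, cross-country comparisons), but they rest on the same independent/dependent distinction.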

The construction of a conceptual framework for a WISCEA is a complicated and usually non-linear process: it starts from an initial proposal developed by an international coordinating centre, which is further elaborated via intensive interaction with the representatives of participating countries, who together take decisions about (political) relevance and feasibility. Quite often, when initial concepts are further operationalized, it appears that suitable and cost-effective indicators cannot always be found for important concepts (e.g. lifelong learning skills of students). Hence, initial conceptual frameworks often need to be revised during the elaboration of a WISCEA design and after the pilot-testing of instruments in the participating countries. Furthermore, time and budget constraints prevent all the favourite issues of the participating countries from being addressed. This problem is often solved by allowing countries to add so-called national options to the international assessment measures.

An example of a conceptual framework of an ICT-related WISCEA is shown in Figure 3. This framework is taken from SITES2006, an assessment for which the data were collected in 2006 in 22 education systems. The starting point for this assessment was the policy issue of pedagogical innovation and the role ICT plays in such innovations. This issue has grown out of more than 20 years of experience with the introduction and use of ICT in education in a substantial number of countries.

The SITES studies focused on several concepts that, according to earlier research, influence the implementation of pedagogical innovations. These were operationalized in SITES2006 in terms of pedagogical paradigms. A contrasting pair was distinguished: traditional pedagogy (teacher-directed, whole-class teaching) and emerging pedagogical approaches (more responsibility and autonomy of students for their learning, working in small groups, lifelong learning skills). As illustrated in Figure 3, it was hypothesized that the extent to which these two approaches exist would be influenced by several (often interacting) ‘conditions’ at the school and teacher level. ICT was conceived as one of the important conditions.