Qualitative approaches to large-scale studies and students’ achievements in science and mathematics – An Australian and Nordic perspective
Associate Professor, Anders Jakobsson, Malmö University, Sweden
Associate Professor, Sue Thomson, Melbourne University, ACER, Australia
PhD, Kylie Hillman, Melbourne University, ACER, Australia
PhD, Eva Davidsson, DPU Copenhagen/Malmö University, Sweden
Large-scale studies play an increasing role in educational politics, and results from surveys such as TIMSS and PISA are extensively used in media debates about students’ knowledge in science and mathematics. However, this debate does not usually shed light on the more extensive quantitative analyses, and there is a lack of investigations that aim to explore what can and cannot be concluded from these analyses. There is also a need for more detailed discussions about what trends can be discerned concerning students’ knowledge in science and mathematics.
The aim of this symposium is therefore to highlight and discuss different approaches to how data from large-scale studies can be used for additional analyses, both in order to increase our understanding of students’ knowledge in science and mathematics and in order to explore possible longitudinal trends hidden in the data material. This is done through different examples, all of which use additional analyses to disentangle the complex issues behind what may, at first glance, appear to be simple relationships. These examples could constitute possible approaches for qualitative analyses of quantitative data. For example, the relationship between low performance in Grade 9 and school completion can be questioned, as the picture appears to be more complex. One group of Australian students who were among the low performers on the PISA 2003 mathematics test nevertheless completed university studies and went on to stable employment. Identifying the underlying factors behind these more successful low performers could improve the targeting of resources in schools.
Another example of a complex issue emanating from the results of large-scale studies concerns computer-based assessment in science (CBAS) and gender. Both PISA and CBAS aim to assess scientific literacy, but the results of CBAS show a large gender difference in all three participating countries, whereas the differences on the PISA paper-and-pencil test were small or non-significant. An analysis of the CBAS questions reveals gendered contexts in which males were depicted engaging in traditionally male activities. There was also an overrepresentation of physics and technology, subjects that girls often describe as more difficult.
Yet another example concerns the significant decline in Swedish students’ performance on both the PISA and TIMSS studies between 1995 and 2006. A more thorough analysis of recurring PISA questions shows great variation in the extent to which Swedish students succeed in solving specific problems. Furthermore, the students’ ability to argue scientifically appears to have decreased significantly.
Paper 1: Abstract
Tracking PISA low-achievers: does a low score predict failure?
Associate Professor, Sue Thomson, Melbourne University, ACER, Australia
PhD, Kylie Hillman, Melbourne University, ACER, Australia
MCEETYA has defined proficiency level 3 on the PISA scales as representing a baseline level of literacy at which students begin to demonstrate the competencies that will enable them to participate actively in life situations. It could well be argued that students who do not reach this level are among the young people most vulnerable in a context of insecurity and precarity. A great deal of longitudinal research has reported a strong relationship between achievement by Year 9 and school completion and participation in post-secondary education and training, and has found that low achievers are more likely to leave school early, to enter apprenticeships or to attempt to enter the labour force immediately upon leaving school. Nevertheless, this relationship is not always so simple: not all Year 9 low achievers fail to complete Year 12; indeed, many continue with their education and training at TAFE or university and go on to stable employment.
In Australia, the PISA 2003 sample became a commencing cohort for the Longitudinal Surveys of Australian Youth (LSAY). The LSAY data provide a unique opportunity to follow young people who scored poorly on the PISA 2003 mathematics test through the later years of secondary school, and to relate their outcomes to other variables, particularly sociodemographic background, gender and interests as measured in PISA.
This paper will attempt to identify what differentiates low-performing students who have positive and successful outcomes from those who have less successful outcomes. Identifying the factors that contribute to the ‘resilience’ of these low-performing young people has implications for policy development in two main ways. Firstly, identifying the differences between low-performing students who go on to have positive outcomes and those who do not can be used to improve the targeting of resources and assistance towards the students and schools who need them most (improving the efficiency of resource use). Secondly, information about school-level factors that are related to better outcomes in later life for low-performing students can be used to improve school environments and thus outcomes for all students (improving the effectiveness of schooling).
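As a rough sketch of how such an analysis of the linked PISA–LSAY data might proceed, the example below fits a logistic regression predicting a positive post-school outcome among the low performers. The file name, variable names, cut score and model specification are illustrative assumptions only, not the authors’ actual data or analysis; a full analysis would also use the PISA plausible values and survey weights.

```python
# Illustrative sketch only: the file name, variable names and model are
# assumptions for exposition, not the authors' actual LSAY/PISA analysis.
# A full analysis would also use the PISA plausible values and weights.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of the linked PISA 2003 / LSAY cohort data.
df = pd.read_csv("lsay_pisa2003_cohort.csv")

# Keep only the low performers, here approximated as students scoring below
# the lower boundary of proficiency level 3 on the PISA mathematics scale.
low = df[df["pisa_math_score"] < 482].copy()

# Model a positive post-school outcome (e.g. Year 12 completion or stable
# employment, coded 0/1) as a function of background variables from PISA.
model = smf.logit(
    "positive_outcome ~ escs + C(gender) + math_interest + C(school_sector)",
    data=low,
).fit()

# Coefficients indicate which factors distinguish the 'resilient' low
# performers from those with less successful outcomes.
print(model.summary())
```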
Paper 2: Abstract
Examining computer-based assessment and gender issues – what constitutes the large gender differences?
PhD, Eva Davidsson, DPU Copenhagen/Malmö University, Sweden
The Computer-Based Assessment in Science (CBAS) was introduced within the framework of the Programme for International Student Assessment (PISA) in order to reduce the reading load while retaining the scientific content of the tasks. However, the results of the Danish students revealed a larger gender difference on the CBAS test than on the traditional PISA paper-and-pencil test. The results from the other two participating countries, Korea and Iceland, also showed a significant gender difference on the CBAS test but no significant difference between boys and girls on the paper-and-pencil test. But how can these results be explained? Why do boys tend to perform better than girls on the CBAS test?
An additional study was carried out in order to explore possible explanations for the large gender difference. Boys’ and girls’ answers on different items were compared with regard to the content and contexts of the items. Furthermore, the questions were analysed with respect to what competencies (e.g. observing, posing questions, modelling) the students need to use in order to solve the problems. When it comes to the content of the items, the analysis revealed that items related to physics and technology were overrepresented in the CBAS test. It has previously been shown that girls consider physics more difficult than boys do, and this overrepresentation could therefore constitute one explanation for the gender differences. The contexts of the tasks were mostly related to everyday life or society, such as hatching chickens, car safety or nuclear power. However, only a few items could be considered related to a personal context in general, or to girls’ personal contexts in particular. Where the context could be considered personal, it was boys or men who were depicted, engaging in traditionally male activities such as skidding a bike. Previous studies have concluded that girls, more than boys, tend to find unfamiliar contexts more difficult to engage with, and this could also be an explanation for the gender difference.
When it comes to the competencies the students need to use in order to solve the problems, the analysis reveals that it is possible to test, for example, observing and modelling. The results indicate, however, that the different competencies tested in CBAS seem to play a more limited role in the gender differences than the questions’ content and contexts. The conclusions also include implications for the further development of computer-based assessment in science.
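A minimal sketch of the kind of item-level comparison described above is given below. The file names, column names and gender codes are hypothetical rather than the actual CBAS data set; the point is only to illustrate how per-item gender gaps can be broken down by content domain and context type.

```python
# Illustrative sketch only: file names, column names and gender codes
# ('boy'/'girl') are assumptions, not the actual CBAS data analysed here.
import pandas as pd

# One row per student x item with a 0/1 score, plus an item table holding
# the content-domain and context classifications from the item analysis.
resp = pd.read_csv("cbas_item_responses.csv")    # student_id, gender, item_id, correct
items = pd.read_csv("cbas_item_metadata.csv")    # item_id, content, context

data = resp.merge(items, on="item_id")

# Percentage correct per item, separately for boys and girls.
p_correct = (
    data.groupby(["item_id", "content", "context", "gender"])["correct"]
        .mean()
        .unstack("gender")
)
p_correct["gap"] = p_correct["boy"] - p_correct["girl"]

# Average gender gap by content domain and by context type, to see whether
# physics/technology items or non-personal contexts drive the difference.
print(p_correct.groupby("content")["gap"].mean())
print(p_correct.groupby("context")["gap"].mean())
```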
Paper 3: Abstract
Exploring longitudinal trends in students’ achievements on large-scale science studies
Associate Professor, Anders Jakobsson, Malmö University, Sweden
A common approach to describing the results of the PISA and TIMSS surveys is to calculate a mean value of the students’ achievements and to compare these values between the participating countries. Described in this way, the results of PISA 2006 show that the Swedish students achieved a mean value in science that does not differ significantly from the mean value for all OECD countries. Compared with the two previous PISA studies, however, the students show a significant decline. The TIMSS studies of 1995 and 2003 indicate a similar development.
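As an illustration of this kind of mean-value comparison, the sketch below tests whether two reported means differ significantly given their standard errors. The numbers are placeholders, not the reported PISA 2006 results, and the test ignores the small covariance between a country mean and the OECD average.

```python
# Placeholder numbers only, not the reported PISA 2006 results. The test
# ignores the (small) covariance between a country mean and the OECD average.
def differs_significantly(mean_a, se_a, mean_b, se_b, z_crit=1.96):
    """Two-sided z-test for the difference between two reported means,
    given their standard errors, as in the PISA/TIMSS technical reports."""
    diff = mean_a - mean_b
    se_diff = (se_a ** 2 + se_b ** 2) ** 0.5
    return abs(diff) > z_crit * se_diff, diff, se_diff

# Hypothetical country mean vs. OECD average in science.
print(differs_significantly(503, 2.4, 500, 0.5))
```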
Such mean-value comparisons play an increasingly important role in monitoring educational performance and in political discussion and decision-making. Even if we remain careful about what conclusions can be drawn concerning the Swedish students’ performance, these results still indicate a development of some kind. Relevant questions are whether there are content-related or other causal explanations for these trends, and whether the trends could be explained by a changing view of knowledge in the curricula, for example a shift from a focus on de-contextualised conceptual knowledge towards an increasing focus on contextual knowledge of science.
This paper attempts to discuss in what ways it may be possible to use and analyse the PISA data in order to draw conclusions about students’ knowledge in a longitudinal perspective. A study was carried out involving a content analysis of the items and an analysis of the abilities the students need in order to solve the problems. Furthermore, the students’ achievements on identical items in PISA 2003 and 2006 were analysed. Our preliminary results indicate, for example, that there is large variation between the two measurements in the extent to which the students succeeded in solving specific problems. Furthermore, it seems that the Swedish students’ ability to use scientific knowledge as a resource for arguing scientifically has decreased. These results have not been reported in the national reports and may contribute to understanding the data from a new perspective, which, in turn, could have profound importance for the implications drawn from PISA measurements. The expectation is that such thorough analyses can clarify other similar tendencies in the material and deepen our understanding of how to draw valid conclusions about students’ knowledge in a longitudinal perspective from large-scale studies.
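A minimal sketch of how recurring (link) items could be compared across the two cycles is given below. The file and column names are assumptions rather than the actual PISA data structure, and a full analysis would take the plausible values, student weights and the clustered sample design into account.

```python
# Illustrative sketch: file and column names are assumptions. A full analysis
# would use the plausible values and replicate weights from the PISA data.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# One row per link item and cycle, with the number of Swedish students who
# attempted the item and the number who solved it.
link = pd.read_csv("pisa_link_items_sweden.csv")  # item_id, cycle, n_correct, n_total

for item_id, grp in link.groupby("item_id"):
    g03 = grp[grp["cycle"] == 2003].iloc[0]
    g06 = grp[grp["cycle"] == 2006].iloc[0]
    # Two-proportion z-test for the change in solution rate on this item.
    stat, p = proportions_ztest(
        count=[g03["n_correct"], g06["n_correct"]],
        nobs=[g03["n_total"], g06["n_total"]],
    )
    change = g06["n_correct"] / g06["n_total"] - g03["n_correct"] / g03["n_total"]
    print(f"{item_id}: change in % correct = {change:+.2%}, p = {p:.3f}")
```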