Science Education: Research and Curriculum Development in the UK
Edgar William Jenkins, Emeritus Professor
University of Leeds, Centre for Studies in Science and Mathematics Education
Paper presented to the Swedish National Graduate School in Science and Technology Education, Linköping University, Norrköping campus, Sweden, January 27th 2004
I want to begin with two introductory comments about the title of my talk. As you will see, it refers to research and curriculum development within the United Kingdom. This presents something of a difficulty, not least of scale, since the United Kingdom embraces a number of education systems that differ from one another in significant ways. Responsibility for education in Scotland, for example, lies with the Scottish Parliament, via a Scottish Minister for Education and a Scottish Executive Education Department. There is no national curriculum, so curriculum documents have an advisory rather than the prescriptive status they have in England, although, in practice, there is widespread agreement about what should be taught. The terminology used to describe the stages of pupils’ schooling is also different from that used in England, as are the structure and titles of the various school examinations. Undergraduate courses in Scottish universities are also a year longer than those in England.

In Northern Ireland, where, as you may know, the devolution of government from London has presented problems, there is a Council for the Curriculum, Examinations and Assessment. This Council has overall responsibility for these aspects of schooling in the province where, we should note, selection of pupils for secondary education has been retained for much longer than elsewhere in the United Kingdom. In Wales, which also has its own mode of governance of education, schools have to accommodate the teaching of Welsh but, apart from this, structural differences from England are relatively modest.

It is England, of course, that has the largest education system. The structure and content of the curriculum are laid down by statute, as are the arrangements for assessing pupils’ work at various stages of schooling. It is perhaps worth reminding ourselves what that structure is. Schooling is compulsory from the age of 5 to 16 and the period of compulsory schooling is divided into four Key Stages. Pupils are assessed at the end of each Key Stage and most pupils transfer from primary to secondary school at the age of 11.
KEY STAGE 1: 5-7 years
KEY STAGE 2: 7-11 years
KEY STAGE 3: 11-14 years
KEY STAGE 4: 14-16 years
In the time available, it would obviously be impossible to discuss developments within the United Kingdom as a whole. I will, therefore, be selective and focus attention on curriculum initiatives that seem to me of particular interest.
My second introductory comment relates to the reference in the title of my talk to research and curriculum development. This implies that these differ from each other in some fundamental or significant way, although there may, of course, be some interaction between them as when people talk about ‘research based curriculum development’. For the moment, I will ignore any distinction between research and curriculum development and, purely for reasons of convenience, deal with them separately. Although I shall present examples from the United Kingdom, I shall draw attention to what seem to me to be some issues of wider significance.
Research
Perhaps the most obvious point to make about research in science education in the UK, as in the rest of Europe, is its relative newness. There is no long tradition of work in this field and it was the second half of the 1960s that saw science education secure a place as a field of research and teaching within higher education in the United Kingdom. Our first professorial appointments stem from that time and they owed much to the attempts then being made to introduce and support large-scale reform of the school science curriculum. This was significant for the location of science education research in the UK within teacher training, rather than science, departments and for the profile of much of the research that came to be undertaken. It always seems to me, for example, that the work done by my former colleague, Rosalind Driver, would not have been done in a psychology department, even though her work was concerned with children’s learning. The questions that she addressed reflected both her own experience and interests as a former schoolteacher of physics and the research environment within which she worked, namely a department with a strong commitment to initial and in-service teacher education. Those questions are unlikely, in my view, to have been asked in a university psychology department.
If science education is to flourish as a field of study within higher education, it requires the usual scholarly apparatus to support it. That apparatus was quickly put in place in the later 1960s and early 1970s, with the publication of research journals, the organising of international conferences, the development of undergraduate and postgraduate courses, and the creation of specialised science education centres within universities in the United Kingdom.
This relative newness of research in science education prompts three comments. First, it suggests a need for some caution in assessing the contribution that such research might reasonably be expected to make to educational policy and practice. Secondly, it stands in marked contrast to the much longer and very different history of science education research within the USA. Here the research tradition has, until recently, been dominated by work that is almost exclusively quantitative and empirical in its methodology and largely empiricist and positivist in its psychology and philosophy. Thirdly, newness necessarily entails substantial diversity.
If I look a little more widely than the United Kingdom for a moment, it is difficult not to be impressed by the wide range of topics that researchers in science education have chosen to investigate in the last thirty or so years. It includes work relating to policy creation and realisation, history of science education, teachers, students, schools, museums, the print and broadcast media, textbooks, educational technology, information and communication technologies, pedagogy, curriculum, assessment and evaluation. Within each of these fields, the diversity is compounded. For example, few fields, if any, have ignored gender issues, although, in many cases, assertions relating to gender have failed to accommodate the cultural experiences of girls and women in the developing, rather than the industrialised, world.
In the UK, as elsewhere, the bulk of the research has been concerned with science education at school, rather than at college or university level, and with learning in formal, rather than informal, settings, although this is slowly changing. There has also been much more emphasis upon research into learning than into teaching. Until recently, it was also the case in the United Kingdom that more research attention was given to secondary, rather than primary, education. This, too, is changing, partly because science has now become a compulsory part of the primary school curriculum and partly because the emergence of an all graduate profession has increased the supply of researchers with a background in primary teaching.
In the last couple of decades or so, science education research in the UK, as in many other countries, has been dominated by studies of children’s understanding of natural phenomena, which have contributed to the boom in ‘constructivist’ studies reported in the principal research journals. Research attention has been directed principally at children’s understanding of such concepts as mass, acceleration, chemical change, gravity and evolution. Interdisciplinary concepts that characterise many of the public discussions of science in the media, e.g., bio-diversity, sustainable development and various measures of personal or environmental risk, have been almost entirely ignored. So, too, have concepts like geological or cosmological time.
So-called constructivist ideas have come to play a major role in science education debates, although their influence on practice remains modest, especially at the level of secondary schooling. There is also disagreement at a fundamental level about aspects of constructivism, with important shifts over time in the focus of attention of those working within this broad tradition. Early exploratory work, followed by replication studies, gave way in the 1980s to something of an emphasis upon how students’ ideas about a range of natural phenomena might be changed. Today, there is an interest in how students acquire these ideas, together with a greater understanding that ‘alternative’ and scientifically incorrect models of understanding are adequate for many everyday purposes, a view borne out by work in fields as different as the public understanding of science, the psychology of so-called ‘just plain folks’, and the nature of practical and professional knowledge.
What, therefore, might be the concerns about the present state of science education research in the United Kingdom? In trying to answer this question, I need to emphasise that the best of such research stands comparison with what is done anywhere in the world. Moreover, the growing co-operation between science education researchers within Europe is a particularly welcome development as is the funding, via the European Union, of Marie Curie Research Fellowships. Even so, there are problems, by no means confined to the United Kingdom, among which I would identify the following.
- The lack of research funding and the declining number of researchers.
Several factors are operating here. They include the reduced level of funding for higher education, the retirement and non-replacement of many of those who were appointed in the curriculum development era of a generation ago, and changes in the funding arrangements to support full- or part-time research students.
- Research and related expertise are spread too thinly.
Although there is room for the individual scholar working in relative isolation, experience in my own institution leads me to believe that there is much to be gained from having centres of excellence within which there is a critical mass of researchers. Those researchers have to be able to command intellectual and professional respect among very different communities, most obviously those teaching in schools, their colleagues in the science departments within universities and those in government responsible for science education policy. Not surprisingly, they are sometimes pulled in different directions.
- Concern about the relevance of science education research to policy making and practice.
At the heart of the concern here are questions of how, and by whom, a research agenda is constructed and whether that research agenda is the same for the researchers themselves and, for example, government or teachers in schools. The evidence from the United Kingdom is not entirely encouraging, with one recent commentator claiming that educational research in general was predominantly ‘supply driven’, i.e. driven by the researchers themselves. In the 1970s, for example, government asked a seemingly straightforward question: are the standards of science teaching in schools going up or down? This led to the setting up of a large-scale and expensive research programme, the Assessment of Performance Unit (APU). The research community quite legitimately asked, ‘What do you mean by “standards”?’ In the absence of an answer from the politicians, it was left to the researchers to construct an operational answer. They did so with immense skill but, after many years of highly detailed and demanding research, the answer was not much use to politicians.
It would be wrong, however, to conclude either that the research was a waste of time or that it had no impact on policy. Many of the test instruments developed by the APU influenced the national curriculum that developed in England and Wales after 1989, although it can be argued that those instruments were both misunderstood and misused. It can also, I think, be argued, that the failure of the Assessment of Performance Unit to come up with the sort of information that policy makers wanted helps to explain the enthusiasm which greeted later studies, such as TIMSS and PISA. The results seem simple to understand, their policy implications clear and, of course, they make good newspaper headlines. Interestingly, the science education community in the UK has played a very small role in these studies, especially in TIMSS which is driven by psychometric rather than curriculum considerations. Everyone, of course, knows the limitations of these international comparisons but, as one civil servant once said to me, ‘What is there that is better?’
- Concern about the standards of some of the research that has been published.
Concern of this kind emerged as an issue in the United Kingdom in the last few years, but it is by no means confined to science education. Underlying it are questions like ‘Is there any evidence that educational research has enhanced the learning that takes place in colleges, schools, universities and other educational institutions?’ and ‘Does educational research represent value for money?’
One study in the UK identified four aspects of this concern over standards, relating to research focus, methodology, non-empirical research and bias on the part of researchers. One commentator judged that too much educational research was irrelevant to practice, uncoordinated with any preceding or follow-up research and served only to clutter up academic journals that nobody reads. There has, of course, been a response from the research community but it is clear that the concern is not confined to the United Kingdom. One critic, writing of research in chemical education in Europe, claims that it is not of a sufficiently high standard with respect to methodology and the application of results.
The debate about standards prompts questions about the nature of science education as a field of research, what such research is for and what it can realistically hope to achieve. What sort of research domain is science education? It is not part of my brief this morning to address these questions but a few broad comments are appropriate.
First, it is possible to discern at least two, and perhaps three, different traditions within research in science education. The first might be called the pedagogic/curriculum tradition. The primary focus here is the direct improvement of practice, i.e. of the teaching of science. Improved learning is assumed to follow from improved teaching and better curriculum materials, and the evidence for improved teaching is to be found in such indicators as enhanced student enrolment, motivation, school attendance or achievement. Any theoretical underpinning is likely to be minimal and it is the practitioners, i.e. the science teachers themselves, who require and offer judgement about improvements in their practice. Such improvements cannot be transferred in some simple way to other classrooms, laboratories or teachers, so that there is no simple transferable prescription for ‘best practice’. Ideas, however, can be shared and adapted. The work is close to the classroom, laboratory or lecture room and improvement in practice is incremental rather than radical. It is the kind of research or, if you prefer, curriculum development, that underpinned the science curriculum reforms of the 1960s and 1970s and that offered science teachers advice, guidance and examples on the basis of materials tried out in classrooms and laboratories.
The second tradition in science education research might be called empirical/theoretical. Always more evident in the USA than in the UK or Europe, the approach here is more theoretically grounded and now includes both quantitative and qualitative work. The task is to generate ‘objective’ data that have relevance for, and can be applied to, the improvement of practice. We now know enough, not least from scholarly studies of the nature of technological knowledge and the public understanding of science, to recognise that this notion of ‘application’ is highly problematic. No one should be surprised that many science education researchers still find it difficult to enter the practitioner’s world and to develop strategies that are effective in enhancing students’ learning. It can be done, as the Cognitive Acceleration in Science Education (CASE) Project in the UK has shown, but it requires special conditions, including training teachers to use the Project materials and supporting them in doing so.
While the differences between these two traditions should not be overdrawn, they are significant. They can be found in the journals in which the research is published, the institutional or departmental location and academic background of the researcher and the conferences that he or she attends. The differences are also much more marked in the USA than in Europe, although the particular position of the United Kingdom should be noted. To an English audience, any reference to didactics conjures up images of undesirable authoritarian teaching, far removed from the meaning didactics has within Europe. One result is that, compared with the UK, many more science education researchers in Europe work in close association with, or are located within, academic science departments.
Finally, before I turn my attention to curriculum developments in the United Kingdom, I want to emphasise that what you think research in science education is will determine what you think it can and should try to achieve. Is it about ‘improving practice’ in some direct way or is it about sharpening thinking, directing attention to important issues, clarifying problems, encouraging debate and thus deepening understanding? If it is primarily about improving practice, we need to acknowledge that the worlds of science education research, of policy and of practice are different worlds, with different priorities and time-scales.