The management of pupil performance data in schools
A comparative analysis of the management of pupil performance data in schools in the United States and Wales
Rose Hegan
(University of Glamorgan)
Paper presented at the British Educational Research Association Annual Conference, University of Glamorgan, 14-17 September 2005
Working Paper – please do not quote without permission
Address for correspondence:
E-Mail:
Introduction
The effective management of pupil performance data is increasingly recognised as an important aspect of organisational knowledge for schools in the UK and elsewhere. Pupil performance data is a key element in ensuring and improving effective pupil learning. It is not a new observation that schools throughout the world use assessment to gather pupil performance data; however, some schools make less than effective use of available data whilst others utilise it effectively to improve pupil attainment (Assessment Reform Group, 2003). In a number of schools, data is accumulated in excessive quantities, which can result in an overwhelming array of complicated and often conflicting information (DfES, 2002). This excess of data may leave school leadership teams unable to implement strategies for utilising the data effectively to improve pupil performance, while in the classroom some teachers have become accustomed to relying on standardised test results, limiting the use of their professional judgement (DfES, 2004).
The purpose of this paper is to describe research I have recently completed in Wales and the United States: an international comparative analysis of the management of pupil performance data within public high schools. The paper describes how secondary schools use assessment to gather pupil performance data, and how they utilise that data to improve pupil attainment. I will outline the background and context of the study, which involved intra-national and inter-national cross-case analysis, and conclude by identifying challenges faced within schools, highlighting the need to develop further understandings of current practice. I developed an interest in how schools collect and use pupil performance data during my work as an Assistant Head of a secondary comprehensive school in South Wales. Whilst working closely with colleagues in my own and neighbouring schools and local education authorities, I experienced first-hand the limited knowledge and understanding of current assessment practice.
Background and context
Assessment
Assessment focuses on measurable areas of knowledge, skills and understanding as the systematic basis for making inferences about the learning and development of pupils (Dary Erwin, 1991). In addition to measuring progress, it should inform pupils of their learning, providing feedback to teachers and learners on how they can improve (Crooks, 1988).
There are many purposes for assessment, including policymakers setting standards and targets and monitoring the quality of education. In addition, teachers use assessment to make judgements on pupil strengths and weaknesses, deriving information from a variety of sources generated daily as an integral part of teaching and learning. It is how teachers gain knowledge of their pupils' needs, achievements and abilities. Effective teaching depends on an understanding of how learners use their existing knowledge and develop new awareness. Assessment similarly depends on being able to access learners' knowledge and provide them with the means to demonstrate it. The primary principle of assessment is, therefore, to provide feedback in order to develop and inform the teaching and learning practice in which both teachers and pupils are engaged (Meyerson and Martin, 1991).
Societies with mass education systems use educational assessment procedures that are determined by prevailing beliefs about learning (Broadfoot, 1999). Learning theory has influenced how policymakers and teachers decide upon teaching and assessment practice, and how programmes of study are designed to enhance pupils' conceptual development.
Assessment has the potential to improve learning for all pupils, and the influence of testing on curriculum and instruction is widely acknowledged. In early curriculum design, the emphasis was on teaching and learning that could be assessed objectively using quantitative methods (Weber, 1952). Less attention was paid to the role of the social context of learning in shaping the higher-order cognitive abilities that real-life problem-solving often requires. In my experience as an educator, enabling pupils to learn isolated facts and skills through structured drill and practice on discrete factual knowledge may allow ease of quantitative assessment, yet pupils find it more difficult to access their knowledge without having learnt to organise and recall information in problem-solving situations. It is argued that measuring this ability is a more appropriate target of assessment, and as such educators and policymakers are turning to alternative assessment methods as a tool for educational reform (Assessment Reform Group, 2003).
Assessment in the United States and the United Kingdom
The overall purpose of assessment has developed since the 1980s, with the growth in the use of assessment to support learning as well as to measure it. It is now generally accepted that, by having relevant access to data on the performance of pupils, teachers are better able to work with colleagues to focus on raising standards of teaching and learning, and that 'effective schools' make good use of frequent monitoring of pupil performance (Gardner, 1999). Whilst summative testing with an accountability purpose remains central in the UK assessment system, a change of focus has placed formative assessment, or assessment for learning, on the education agenda.
In the US, there is a contrasting move to commit more resources to diagnostic assessments that can be accessed incrementally (Tetreault, 2002). The standards movement has increased the production of pupil performance data through regular proficiency tests in an attempt to identify and help low-performing pupils, and schools are labelled if they fail to raise pupils' scores (US Department of Education, 2002). The logic behind these assessment systems, one of the centrepieces of the push for standards-based school improvement, has been to find a more accurate way to measure both pupil and school progress. Schools may then be held accountable for their results (US Department of Education, 2002).
How is assessment undertaken in schools?
Schools in the US and UK generate pupil performance data from mandatory national standardised tests and from internal assessment procedures. Schools may also administer aptitude and predictive testing to prepare pupils for the practical process of external exams, in addition to assessing current strengths and weaknesses.
Assessment for accountability has been established in the UK for some years and has recently become an integral part of the US educational process through the introduction of No Child Left Behind (NCLB) (US Department of Education, 2002). In both the UK and the US, school leaders and governors are made accountable for their results by the Local Education Authority (LEA) or state education boards after agreeing to specific targets for expected pupil outcomes. Currently in Wales, results from standardised external tests are used to set benchmarks and targets against which the effectiveness of teachers and schools is evaluated, and a specified percentage of pupils must demonstrate proficiency in core subjects (Le Metais, 1999). Similarly, in the US, standardised testing is tied to sanctions through the NCLB 'adequate yearly progress' (AYP) formula. Under AYP, schools have clearly defined goals to ensure that they are on target for teaching pupils to state standards. Failure to reach an agreed improvement in pupil performance results in a series of sanctions and improvement measures (US Department of Education, 2002).
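To make the AYP logic concrete, the short Python sketch below reduces it to a simple proficiency-rate check. This is an illustration only: real AYP determinations involve state-set annual targets, subgroup reporting and participation rates, and the subjects and the 75 per cent targets here are hypothetical.

    # Illustrative sketch only: a simplified proficiency-rate check in the
    # spirit of the AYP formula described above. Real AYP determinations use
    # state-set annual targets, subgroup rules and participation rates; the
    # subjects and targets below are hypothetical.
    def meets_ayp(proficient_counts, tested_counts, targets):
        """True only if every subject's proficiency rate meets its target."""
        for subject, target in targets.items():
            rate = proficient_counts[subject] / tested_counts[subject]
            if rate < target:
                return False   # one subject below target fails the check
        return True

    # Hypothetical school: 40 of 50 pupils proficient in maths, 42 of 50
    # in reading, against a 75% target in each subject.
    print(meets_ayp({"maths": 40, "reading": 42},
                    {"maths": 50, "reading": 50},
                    {"maths": 0.75, "reading": 0.75}))   # prints True

Even in this reduced form, the check conveys the all-or-nothing character of AYP: falling short of the target in a single area is enough for a school to miss adequate yearly progress.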
Knowledge management
Schools may think they manage pupil assessment data well, but the reality can be different. Pupil performance data is important organisational knowledge for schools and needs to be managed in the same way that knowledge is managed in many other work organisations. Choo (1998) establishes a framework for knowledge management, identifying a range of activities that can be useful in analysing the management of pupil performance data. Schools that use data management in organisational sense making, knowledge creation, decision making and organisational action are able to be more effective, utilising pupil performance data through efficient systems and processes. Furthermore, by engaging in continuous organisational learning through the expertise of its staff, a school can adjust its internal management and use of data in order to improve pupil outcomes. Choo's (1998) framework has been central to how I have framed the way schools manage assessment data.
The Research study
Research Design
I conducted a comparative international case study of schools in different settings, using qualitative methods to explore the management of pupil performance data. I compared my results using an intra-national and inter-national cross-case analysis (Glaser and Strauss, 1967). By examining the processes that the schools use, I have been able to identify distinctions between both the countries and the schools themselves.
The study was conducted in six state-funded high schools in the US and Wales, in a range of geographical and socio-economic locations. The study's aim was to establish, analyse, compare and contrast current practices in the management of pupil performance data. The main research questions were as follows:
- What data do schools gather on pupil performance?
- How and why is the data produced?
- How and why is pupil performance data analysed?
- What analysed data do the schools use, and what do they use it for?
- How can the management of pupil performance data be modelled?
- What are the tensions, dilemmas, and paradoxes experienced by schools in their management of pupil performance data?
Personnel from different curriculum areas and with a range of managerial responsibilities were interviewed and an assortment of school documentation and public records were collected and analysed. An analytical framework for data management was used for each case to map the data management strategies and systems (Dunning and James, 2002).
Criteria used for school selection
In selecting schools for the case study I applied opportunity sampling, matching schools in each country for a comparative inter-national cross-case analysis. In finalising my criteria I focused on the location of the school, its size and the type of community it served. The criteria were as follows:
- Each school must be from a different LEA in Wales and a different School District in the US.
- There must be a range of school size in terms of pupil population and each school must be comparable to one school of similar size in the other country.
- There must be a range of socio-economic and demographic areas.
- Each school must comply with LEA or State-controlled School District requirements and abide by statutory regulations.
Overview of findings
Each of the schools varied in the range of pupil performance data produced, and in how it was collected, analysed and used. In the US, states make their own decisions on academic content standards and associated standardised achievement tests, aligning with federal requirements. Consequently, US schools administer different tests according to where they are located, whereas in Wales each school administers the same government-directed end-of-key-stage assessments.
Intra-national cross-case analysis – United States
In the US there was a significant difference between the quantities of data produced in each school and in how the data was used. Schools 1 and 2 in Ohio had made significant changes to their assessment systems in light of NCLB. They were in the process of phasing in additional state statutory tests, as well as introducing further testing and assessment processes to prepare pupils for the new testing schedules. The two schools were meeting the challenges of NCLB and looking towards new ways of adapting to the changes. In contrast, school 3 had made few amendments to its existing testing programme, in part because New York State's testing procedures had already been broadly in line with federal statutory tests. However, the school did not consider it necessary to introduce additional analysis of data or supplementary testing to prepare pupils for the more challenging demands of NCLB and the AYP process.
Further differences existed in the administration, management and use made of internally generated assessment data. Although each of the three schools produced daily internal teacher assessments in similar formats, schools 1 and 2 had introduced online mark book systems. The online systems enabled detailed tracking of pupil attainment and facilitated the analysis and dissemination of data. Pupils, parents and teachers had daily access to grades, lesson notes, homework and course content. Staff were able to analyse data and respond quickly to underachievement; however, many staff were still adjusting to the system, and some relied on parental intervention for pupils whose grades caused concern. Conversely, school 3 undertook little analysis of internal teacher assessments, collecting data only four times a year at the end of each quarter. The school was unable to track pupil performance throughout the semester, and intervention was often provided too late or not at all.
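A minimal sketch may help to convey the kind of tracking these online mark books made possible. The Python class below is my own hypothetical illustration, not a description of the schools' actual software; the pupil names and the 60 per cent flagging threshold are assumptions made for the example.

    from collections import defaultdict

    # Hypothetical sketch of an online mark book of the kind schools 1 and 2
    # had introduced: grades are recorded as tasks are marked, and any pupil
    # whose running average falls below a threshold is flagged so staff can
    # respond to underachievement early. Names and threshold are illustrative.
    class MarkBook:
        def __init__(self, flag_threshold=60.0):
            self.grades = defaultdict(list)   # pupil -> list of percentage grades
            self.flag_threshold = flag_threshold

        def record(self, pupil, grade):
            """Record a grade as soon as the task is marked."""
            self.grades[pupil].append(grade)

        def underachieving(self):
            """Pupils whose running average has fallen below the threshold."""
            return [p for p, marks in self.grades.items()
                    if sum(marks) / len(marks) < self.flag_threshold]

    book = MarkBook()
    book.record("Pupil A", 72.0)
    book.record("Pupil B", 48.0)
    book.record("Pupil B", 55.0)
    print(book.underachieving())   # prints ['Pupil B']

In practice, as noted above, the value of such a system depended on staff adjusting to it and acting on the flags it raised.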
Intra-national cross-case analysis – Wales
In Wales there was a greater level of consistency in the schools' production and acquisition of pupil performance data. Each school undertook a range of statutory and supplementary tests. All three schools were led by LEA requirements for much of their data production but varied in the access that staff had to data. Schools 5 and 6 made extensive use of teacher assessments and had introduced methods of data analysis unique to their own schools. Staff analysed pupil performance data in detail and had implemented comprehensive mentoring programmes in order to improve pupil learning and raise achievement. School 6 applied this process to key stage 4, whilst school 5 had made significant progress in both key stages 3 and 4. School 4 relied less on teacher assessments and focused more on externally provided data. In addition, there were considerable differences in how the schools disseminated data to staff. In school 6, staff had particular difficulty accessing computer hardware and lacked knowledge of how to use the computer systems. A deputy head from school 6 outlined problems with their computer facilities and capabilities:
“It has gone very slowly with getting hardware around the school, and when we got data on the network it was not of sufficient capacity to support it”.
Within each school in Wales, analysis of data was undertaken by a range of staff, but predominantly by year tutors and senior managers.
Inter-national cross-case analysis
Differences were apparent between US and UK testing philosophies. The testing undertaken in the US schools favoured multiple-choice examinations, with far fewer of the constructed-response examinations that are more common in the UK.
Schools in Wales generated larger quantities of test data than those in the US. All of the schools produced a range of internal teacher-based assessments.
The level of access to computers and network facilities varied considerably between the US and Wales. The ease of access to internal teacher assessment data through online systems in schools 1 and 2 in the US contrasted dramatically with the other schools, where parents were informed of pupil grades only infrequently. In Wales the schools lacked the computer hardware to utilise data effectively and showed signs of being inundated with complex and sometimes inconsistent pupil performance data, as expressed by the head teacher of school 6:
“The difficulty is that we have all this data and it is being used for so many different purposes and it is trying to disentangle it all the time but it is all in there”.
However, although the computer systems in the US were more sophisticated and in some cases used well, staff had resisted accepting the new technology. Training in the use of new systems and computer software had at times been inconsistent, and school 3 rarely took advantage of the data management systems that were available. Moreover, although school 2 was in a transitional phase of improving its management of data, it had experienced considerable difficulties. Many staff were suspicious of management's motives for increasing the use of information technology and feared change, one teacher commenting:
“The principal uses the system to identify weak spots. For example, if one math teacher had a group that was underperforming in comparison to a similar class, this would be identified. The system is used to identify failure.”
Both in the US and Wales, analysis of data was principally undertaken by guidance counsellors and heads of year respectively. In most of the schools, the drive for change in the management of pupil performance data came from these members of staff. School 3 in the US was the exception, producing, analysing and using significantly less data than any of the other schools; teachers there were able to record as many or as few grades as they chose, and there were no internal or external checks on data records. In contrast, teachers in school 1 in the US were required to record grades online within seven days of setting tasks. In Wales, teaching staff in each of the schools were afforded flexibility in how frequently they recorded grades, largely determined by internal quality assurance policies. In Wales each school used analysed data to support pupils through a range of mentoring programmes, study support and structured individual target setting. In contrast, the US schools did not provide any similar programmes, focusing instead on intervention for pupils who failed to achieve specified levels in state tests.