Developing and Evaluating Early Excellence Centres in the UK: Some Difficulties, Dilemmas and Victories in Promoting Integrated and Joined-Up Services

Anne Campbell

Institute of Education

Manchester Metropolitan University

Paper presented to the British Educational Research Association Annual Conference, University of Leeds, 13-15 September 2001, as part of the 'Early Years' symposium

Abstract

The paper, drawing on work with three Early Excellence Centres in the north of England, addresses issues arising from a government initiative to fund models of integrated services and good educare practice. Issues concerned with the evaluation work, and the difficulties of investigating cost effectiveness in a large, complex, multi-level evaluation programme, will be discussed with reference to recent research such as Simons (1996) and Pascal et al (1999). The dilemmas of practitioners living with shifting professional identities will be discussed in relation to the development of ‘new’ professionals working to an educare agenda (Anning and Edwards, 1999; Thompson and Calder, 1998).

Collaborative action research ventures between external and internal evaluators will be explored and related to issues of empowerment and the development of participatory methodologies. The voices of practitioners engaged in ‘victory narratives’, heard through case studies, will illustrate the vitality and reality of the challenges facing those involved in innovation in the Centres.

Evaluation Issues

Following the White Paper ‘Excellence in Schools’ (DfEE, 1997), the government announced the launch of a pilot programme of Early Excellence Centres (EECs) to develop and promote models of high quality, integrated early years services for young children and families. There are currently 35 such centres in England. Through the Centres, government objectives of supporting vulnerable families and addressing child poverty through the intervention of ‘joined up services’ are being trialled. The notion of a ‘one stop shop’ for families in need has been at the heart of the initiative. The government has been strongly influenced by the Head Start evidence of the long-term impact of intervention programmes (Lazar and Darlington, 1982).

The EEC initiative is subject to a national evaluation (Pascal et al, 1999) which operates a multi-level strategy involving ‘validated’ self evaluation, local evaluation and national meta evaluation. The first level rests on self evaluation processes within the EEC and is mostly carried out by the practitioners themselves. The second level is provided by the local evaluator, who co-ordinates and leads the collection of data within the EEC and validates it. The third level is provided by the national evaluators, who agree local, annual evaluation plans and meta evaluate the evidence from the whole of the EEC programme. The principles underpinning the evaluation espouse a participatory and collaborative methodology, which aims to ‘be empowering and developmental for all participants and to respect and protect them’ (Bertram and Pascal, 2001:16). As discussed later in this paper, this claim would appear to be struggling to survive as the pressure from the government for ‘hard’ evidence of cost savings and cost effectiveness of the programme increases year by year.

This paper draws mainly on aspects relating to the local evaluation strategy and the centres’ self evaluation during the period 1999-2001, and on the first national report of the project (Bertram and Pascal, 2001). The national evaluation framework consists of 22 core and common indicators and 53 sub-indicators; local evaluators and centres cover the core indicators and select those sub-indicators which relate to current developments. The three centres featured in this paper grouped their indicators around the following themes:

·  Management and leadership

·  Costings and Finance

·  Children, Families and Community

·  Practitioners and the wider professional context

During the period 1999-2001, six staff from the Institute of Education, MMU, worked closely with the three centres, negotiating roles, and promoting and supporting self evaluation and practitioner-led initiatives in areas which the centres had identified for development.

There are, however, tensions emerging in the evaluation work between the qualitative and quantitative thrusts of the evaluation strategy. It is perceived, by some practitioners and local evaluators, that the amount of time allocated to the collection of user and client statistics and to the cost savings agenda is greatly in excess of what was originally expected of the evaluation strategy. Problems have arisen in the use of software specifically designed for the evaluation project, in that it needs to be more flexible to support the varying contexts of the centres. Definitions of terms for inputting data have been problematic and have resulted in Centres having to review the type of records kept and to build these requirements into their daily and weekly record keeping schedules. Bertram and Pascal (2001:81) report ‘six key areas of development which were highlighted by the EECs as having improved directly as a result of the first year of evaluation:

·  Service quality;

·  Integration of staff teams;

·  Integration of management structures;

·  Data and information management systems;

·  Financing systems;

·  Relationships with Local Authority, EYDCP and other agencies’

They claim these provide ‘a convincing case for the added value of the validated self evaluation methodologies adopted in the EEC National Evaluation’, but they also acknowledge, as discussed above, the problems ‘on the ground’ as some EECs have struggled to meet the demands of the evaluation. This may well be better represented in the next, forthcoming annual report, which will cover a larger number of EECs, in particular those which have been established more recently and would categorise themselves as at the ‘foundation’ or beginning stage.

Indeed, comments from practitioners and evaluators would indicate that the balance of quantitative and qualitative data has changed considerably since the project began in 1999. There is, amongst practitioners and evaluators, an acceptance that evidence of impact, outcomes and cost issues has to be part of the accountability and evaluation agenda. The challenge facing the centres, however, is how to manage the time-consuming statistical data collection and still have time to engage in the developmental action research and evaluation which fuels the innovative life of the centre and enables professional development and renewal. This aspect of the evaluation is seen by the centres as important in developing good practice for dissemination to other providers of educare.

The centres themselves are diverse in their nature and provision and there are important questions to be asked around comparability of statistics from such different and complex centres. One strategy, that of asking centres to identify which model of integration, Unified, Co-ordinated, or Coalition, they operate (Bertram and Pascal, 2001:24), indicates the diversity of centres and the complexity of national comparisons when contexts are so varied. Indeed, there has been a great deal of anxiety around the unit costing of services due to contextual differences and needs.

Similarly, while the evaluation methodology accepts that there are different, complex models of integration, such as those categorised above, there would appear to be some evidence that OFSTED inspections may be working to one model of integration, that of the unified model. This causes some concern for those Centres which, due to contextual factors such as multi-site working, network arrangements and inherited local situations, may be categorised as co-ordinated or coalition models of integration. This concern is recognised in the national report (Bertram and Pascal, 2001:70) as ‘fears of the transformative process’ and ‘the challenges of developing integrated provision’, which require joint staff training, consultancy on team building and staff development time to ensure progress towards integration. This concern poses particular challenges for reporting on the overall progress of EECs. How can the different pace of change and progress of diverse and varying contexts be reflected in the national picture? Arguably, the use of case studies, which could illustrate the ways in which Centres develop, would be one way of reflecting this diversity.

Case studies form a substantial part of the evaluation, but these have been progressively focussed and tailored to meet the cost saving and effectiveness agenda, in that case studies are used to illustrate how use of a centre’s services could result in savings for social and medical services (Bertram and Pascal, 2001). Many of these cost saving case study scenarios are speculative and, it could be argued, focus on ‘problem cases’ rather than on a variety of users representative of the range of people and families using the centre. It could also be argued that political capital is being sought through the resolution of examples of families and their problems. It is difficult to hypothesise about what could have happened to a particular family in crisis and to ‘prove’ that the service provided by the centre caused the problem to be solved. One example used by Bertram and Pascal (2001:63) cites a ‘cost saving ratio’ of 1:11: the estimated provision avoided might have included foster care for 3 children; admission to a psychiatric ward; community psychiatric care; drugs; referral to an educational psychologist; and play scheme costs for 3 children. It is tempting to accept the speculative figures, as many centres are doing such valuable frontline work for families, but is it necessary to presume the worst scenarios in order to demonstrate ‘cost saving ratios’?
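To make the arithmetic behind such a claim explicit, the sketch below shows how a ‘cost saving ratio’ of this kind is typically derived. It is a minimal, hypothetical illustration: the cost figures are invented for the purpose and are not taken from Bertram and Pascal (2001).

% Hypothetical illustration only; the figures below are invented.
\[
  \text{cost saving ratio}
  = \frac{\text{estimated cost of services that might otherwise have been required}}
         {\text{cost of the Centre's support for the family}}
  \approx \frac{\pounds 55\,000}{\pounds 5\,000} = 11,
  \qquad \text{i.e. a ratio of } 1\!:\!11.
\]

The point of contention in the text above is the numerator: the larger the assumed ‘worst case’ provision avoided, the more impressive the ratio appears.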

Whilst Simons (1996:226), writing on case study, advocates “new ways of seeing”, she is mindful of the pressure on evaluators to

“service political needs … by using case study as a technical method not as a social process leading to a social product”

Many participants in the evaluation of EECs, whilst wishing to seek a deeper understanding of their innovative practices in order to develop them further, also find themselves being overwhelmed by the search for certainty and the need to demonstrate cost effectiveness and cost savings. Simons (1996:230) warns that,

“The search for certainty, comparison and conclusiveness tends to drive out alternative methods of seeing … evaluators need to embrace paradoxes inherent in people, events and sites”

Collaborative and participatory methodologies

In a previous paper, Campbell and Jones (2000) argued that participatory research methodologies were very appropriate to the evaluation strategy for EECs, despite the stated ‘delicate tightrope to walk, balancing internal, local and national agendas within a large arena’. This tightrope is even more precarious in the current climate, where the demands of the national agenda are increasing and the requirement to collect statistical data falls mainly on centre staff; this, it could be argued, has worked against the full participation of external evaluators.

Nevertheless, attempts to further develop participatory methodologies and to promote the dual perspectives of ‘insider’ and ‘outsider’ stakeholders within the evaluation have continued, resulting in qualitative methods being used and developed in collaboration between the local, external evaluators and the internal, designated staff across the three centres in this study. Close links have been maintained between the three Centres, enabling:

·  Sharing of expertise

·  Joint training in evaluation methods and analysis

·  Quality assurance of processes by peer scrutiny

·  A ‘sounding board’ for new ideas

·  The development of collegial relationships between senior management in particular, which are supportive and helpful

·  Negotiation of an evaluation focus for each centre which, whilst individually specific, could also be common to or linked with another centre’s agenda

·  Joint planning and support for writing up data and for writing the annual report

In these ways it was hoped to ‘co-generate research agendas and show respect for the intelligence and goals of the people undergoing change’ (Garaway, 1995). Authentic participation in the evaluation process was identified by Tandon (1988:13) as giving participants:

‘a role in setting the agenda of the enquiry; participation in the data collection and analysis; control over the use of outcomes and the whole process.’

Whilst aspiring to these goals, the team of ‘insider’ and ‘outsider’ evaluators demonstrated some degree of success at local level as they worked together and implemented the planned evaluation. As indicated above, this has become more difficult due to an increasingly imposed agenda from the national team and the DfES. Despite this pressure, each centre has had designated ‘evaluation link staff’ working on themes identified by the practitioners themselves, using action research and collaborative working approaches and involving parents, carers and visiting professionals as well as the centre-based staff. The designation of staff with responsibility for collecting and analysing qualitative data has been a successful strategy in increasing responsiveness and authenticity within the evaluation. A methodology which promotes dialogue and partnership between participants was sought by the teams in order to subscribe to the democratic principles of participatory evaluation and to ‘an ongoing dialogue with primary stakeholders’ (Stake, 1976), and to adhere to what McTaggart (1997) regards as ‘authentic participation’ in conceptualising and practising research and evaluation. We aspired to emulate what Cousins and Earl (1992:399) defined as participatory evaluation:

“applied social research that involves a partnership between trained evaluation personnel and practice-based decision makers, organisation members with program responsibility or people with a vital interest in the program”

Accordingly, the team set out to empower participants and to increase responsiveness to the centres’ concerns by choosing methods which could be used by the participants themselves. The methods employed in the evaluation were:

·  Informal interviews and observations

·  Mapping provision

·  Action planning and target setting in areas selected for development

·  Documentary analysis

·  Case study, narrative accounts and testimony

·  Diary keeping and note taking

·  Cost analysis

·  Compilation of statistical data relating to users/clients and use of building, provision of services

Throughout the evaluation, regular meetings and discussions between internal designated staff and external evaluators were held in order to negotiate the conduct of the evaluation and to remain mindful of the fact that participatory research and evaluation requires practitioners not only to be involved but also to have ownership and control of the agendas for action and for future developments. In tune with Hart et al (1994:213), we sought to enable practitioners to set the agenda and to promote research and evaluation that would be capable of being taken over by participants in their own interests.