Cycles of Criticism and Response: An historical perspective on contemporary ODA/DFID evaluations of education projects and the lessons for the 'elimination' of poverty.

Dr Angela Jacklin and Prof. Colin Lacey

Paper presented at the British Educational Research Association Annual Conference, Cardiff University, September 7-10 2000

Abstract

By taking a largely historical perspective, this paper analyses some of the reasons for the failure of government policy to achieve basic stated aims such as the 'elimination of poverty' and 'universal primary education' (e.g. as in the White Paper, DFID, 1998). Firstly we look at the historical record of past education projects and the lessons learned by a major donor agency (DFID). We focus on the evaluation of education development projects because it is through achieving an understanding of failure and success that we would expect more successful interventions to emerge. However, instead of a virtuous spiral of improvement, we find cycles of criticism and response in which there is little evidence of substantial change within the agency in order to meet these criticisms.

In the second part of the paper we focus on one country (India) and one major project transition (APPEP to DPEP) to assess the lessons learned and the ability of an aid agency to learn from one major project and pass the lessons on to its successor.

We argue that until this process of 'understanding, learning and implementation' is established within major bureaucracies in LDCs and donor countries, there is little hope of achieving universal primary education, commensurate reductions in poverty and improvements in quality of life for the poorest section of our global community.

Introduction

The connection between education and the reduction of poverty has long been recognised. In the UK it provided much of the idealism and drive behind the education reforms of the 19th and 20th centuries. Educational standards remain a central plank in government policy for tackling areas of poverty and industrial decline.

In 1990 the Jomtien conference co-ordinated the efforts of LDCs and donor agencies in focusing attention on universal access to education, basic education and adult literacy. It set the year 2000 as a target for "universal access to and completion of primary education". Subsequent conferences and donor agency reports have reinforced this commitment (e.g. DFID, White Paper, 1998) but they have also been forced to admit to the failure of governments and aid agencies to make much progress towards achieving these basic goals. One of the functions of these conferences and reports has been to move the target dates ever further into the future. Most recently, the conference in Dakar moved the targets on to 2015.

In this paper, we argue that until a process of 'understanding, learning and implementation' is established within major bureaucracies in LDCs and donor countries, there is little hope that conferences held in 2015 will celebrate universal primary education, commensurate reductions in poverty and improvements in quality of life for the poorest section of our global community.

The paper begins by considering the historical context of aid and evaluation in relation to education development projects, focusing in particular on developing understandings. Like Escobar (1995) we see development as historically and culturally specific, rather than simply an attempt to solve problems. Within this context, the paper then explores one major project and the influence and effects of its evaluations on the development and implementation of a subsequent and linked education project.

First steps in overseas aid

A significant period in the development of overseas aid came in the 1940s, when a series of international conferences[1] was held with the aim of stimulating international attention on world problems (Tickner, 1965). Out of these grew the specialised development agencies of the UN: the Food and Agriculture Organisation (FAO); the International Monetary Fund (IMF); the International Bank for Reconstruction and Development (IBRD); the World Health Organisation (WHO); and the United Nations Educational, Scientific and Cultural Organisation (UNESCO). In Britain, following a previous concentration almost exclusively on the colonies, the early post-war years saw a broadening of scope in the government's interest in education in developing countries, and in 1946 the UK became a founder member of UNESCO.

Before the 1940s and the Colonial Development and Welfare (CD&W) Acts, the British government's financial contribution to education overseas was negligible, its involvement being largely restricted to the giving of advice and the formulation of policy (ODI, 1963). The CD&W Acts recognised that the eventual aim for the people of the colonies must be self-government, and also recognised the importance of education in this. In managing aid in post-British Empire days, different relationships began to develop with different countries, depending on their historical relationships with the UK, as they moved to independence in a radically changing world (Soper, 1967).

During the 1950s, the growth of education assistance overseas continued, although this tended to be on a fairly ad hoc basis (ODI, 1963). One of a series of Overseas Development Institute studies[2] (ODI, 1963), commissioned by the British government and financed largely by a grant from the Nuffield Foundation, was designed to survey British aid for development at the time. The ODI pointed out that although government White Papers (in 1959 and 1961) identified an increasing awareness of the importance of education assistance in overseas aid, a large part of Britain's assistance for education did not even appear in the official aid figures. For instance, they noted that even where an expenditure of several million pounds a year on various projects was involved, there was "...little evidence that they are regarded as forming a distinct whole even by the Government itself." (ODI, 1963, p 18). In relation to research and evaluation, the only mention was of its absence:

Despite the large expenditure of government aid on overseas education, and the allocation of £21m to all kinds of research under CD&W Acts since 1946, almost nothing has been done in the field of educational research, either in the form of fundamental research, or in fact-finding surveys. In the last four years, of £5½m allocated to research under CD&W, a mere £5,351 was [allocated] for the only educational project - on educational wastage in Uganda - in spite of the existence of a Social Science Research Council. (ODI, 1963, p 43).

Following these early moves came four decades of development, arguably characterised by an almost exponential growth in interest in evaluation in the social sciences, as well as in awareness of its potential. However, this growth has not been matched by similar changes in practice. In more recent years, it has brought with it an increasing demand for evaluation which is purposeful, appropriate and relevant, and from which lessons may be both learnt and utilised (e.g. Riddell, 1987).

According to Cracknell (1988), three distinct phases could be identified in relation to aid evaluation within the first three decades: early developments (up to 1979); an explosion of interest (1979-84); and a coming of age (1984-88). Rebien (1996) proposes a fourth phase, from 1988, which he terms 'aid evaluation at the crossroads'. In this paper, we argue that positive and purposeful movement from this crossroads requires a focus on learning to learn through evaluation, and on the development of 'learning organisations'.

Four decades of development

Early Developments: the 60s and 70s

The organisational structure of the UK administration changed and developed during the 1960s and 70s. The new Labour government (which came to power in October 1964) established a Ministry of Overseas Development (ODM) to work with the Foreign Office and Commonwealth Office. According to Cunningham, the ODM "...acquired over the years a reputation for administrative efficiency and professionalism which was recognised in parliament and elsewhere." (Cunningham, 1974, p 100). When the Conservatives took power in June 1970, they reviewed the structure of government and, as part of the ensuing changes, the ODM was transformed into part of the Foreign and Commonwealth Office (F&CO) and given the new title of Overseas Development Administration (ODA). The ODA was now semi-autonomous and situated within the F&CO, rather than working autonomously alongside it as previously.

In 1973, the ODI reported on the findings of what they called the 'First Development Decade' (the 1960s). A key finding was a realisation that 'development' is a longer and more complex process than was previously thought (ODI, 1973). In addition, there had been a shift in thinking, from a growing general acceptance of the principle of British assistance to LDCs towards a greater concern with actually finding the best means of assisting development. This was an important development, and the ODI also notably highlighted that "...pressure for a larger aid effort is no substitute for continuous and critical appraisal of what aid is doing." (p3). In reviewing the position at the start of what they termed the 'Second Development Decade', this need for 'continuous and critical appraisal' was prominent (ODI, 1973).

At about the same time, more general critiques of aid itself and the practice of aid assistance began to emerge. For instance, Hayter (1971), a former researcher with the ODI, mounted a strong critique of the motivations behind aid, arguing that rather than being humanitarian they were more political, simply an extension of foreign policy and a means of maintaining control within developing countries [3] .

By the end of the 'Second Development Decade' and of Cracknell's (1988) 'early developments' stage, evaluation and its value in relation to development education was still receiving only limited attention. However, although references to evaluation (as opposed to appraisal) were limited in the UK, a notable difference was discernible in the US system. There, the Agency for International Development (USAID) had shown more concern with performance criteria, and evaluation was gradually becoming a more established practice (Rebien, 1996). These differences could perhaps be partly explained by more general developments in the evaluation movement in the US at the time.

In Europe meanwhile, according to Rebien (1996), the Development Assistance Committee (DAC) of the OECD responded to the perceived disorganisation of aid and evaluation in the 1970s by setting up an Expert Group on Aid Evaluation (established in 1983). Thus, as the 'Third Development Decade' began, evaluation was gradually beginning to receive more attention in the UK (Soper, 1967; Cunningham, 1974).

A Decade of Two Halves?: the 1980s

According to Cracknell, writing in the latter part of the 1980s, two phases could be distinguished in the decade. The first (1979-84) was characterised by an 'explosion of interest' in evaluation, and the second (1984-88) was described as its 'coming of age' (Cracknell, 1988). The DAC Expert Group on Aid Evaluation described the decade in a similar manner, distinguishing the period from 1982-85 as a time of 'new mandate, new challenges' from the latter part of the decade (1985-91), described as a time of 'forging ahead' (Rebien, 1996).

In the UK during the early 1980s, Cracknell (1984) described how the volume of evaluation work increased substantially. In response, some reorganisation occurred within the ODA, and the Evaluation Unit was upgraded and given departmental status, although it remained a small and relatively insignificant part of the whole. The phase culminated with the first ever conference on evaluation (in 1984), organised by the ODA. The conference, which was welcomed and timely, aimed to provide a forum for the ODA to present its evaluation programme and share some of its experiences, as well as to receive feedback and contributions from a wider group of individuals (Cracknell, 1984). A notable concern expressed was the evident lack of learning. The point was made that the same conclusions reached by evaluation (of training) some 20 years previously were still re-emerging. This, argued White (a representative from the DAC Expert Group on Evaluation), "...pointed once more to the problem of applying conclusions of evaluations in a systematic institutional way." (quoted in Cracknell, 1984, p 80).

As the theory of development gradually recognised the importance of evaluation, an increasing concern with practice emerged. This was not only evident during the conference, with the relationship between monitoring and evaluation, as well as the need to consider the scope of evaluations, emerging as two key areas for development, but was also beginning to appear in the more general literature. McNeill (1981), for instance, commented that evaluation '...has long been recognised as one of the least satisfactory aspects. Neither donor nor recipient has an interest in criticising its own projects, and drawing attention to its failures.' (p 24). He argued that the expectations of projects were almost invariably too high, which posed an added problem since projects could not fulfil all these expectations. In addition, given the near-inevitable contrast between performance with aid and performance without it, he argued that there was clearly a need to focus on what happened to a project after input from donors had ceased. However, as McNeill pointed out, all these issues can '...act as a further disincentive to the recipient country to undertake a critical evaluation' (p 25).

In relation to the more established evaluation practices of, for instance, USAID and the World Bank, critiques began to focus on the use of feedback. For instance, Gran (1983), in an analysis of evaluation policy and practice, criticised evaluation for not providing feedback at a systems level, which he argued was required for change to occur: '...development evaluation, as historically practised by major donor agencies, does not provide the systems feedback that would result in major systemic change or improvement.' (Gran, 1983, p 291).

Rather than a period of 'coming of age', as Cracknell saw it at the time, Rebien, writing in 1996, argued that the sequel to this period of 'exploding interest' was perhaps better described as a time of increasing professionalism in relation to evaluation. He saw the latter part of the 1980s as a time when attempts were made to focus on the conduct of evaluations as well as to consider longer term impacts. Two major studies, published by Cassen et al. (1986) and Riddell (1987), proved to be extremely influential, both at the time and later. 'These two studies formed very important milestones in the continuing development of the role of evaluation; both are widely quoted, and still stand today as the most comprehensive studies of long-term aid effects.' (Rebien, 1996, p 49).

Cassen's (1986) classic study raised the seemingly simple question 'Does aid work?' and examined the evidence for and against. In responding 'mostly yes, however…', he drew attention to the inadequacies of evaluation, highlighting how it often does not give all the information needed, as well as the failure of agencies to learn from the mistakes made. Riddell (1987), meanwhile, examined the impact of aid at micro level and the methods used to evaluate the effectiveness of aid, before moving on to consider the project and country-specific evidence which was available at the time. On this basis, he considered what evidence was really available for making judgements about aid effectiveness, concluding that there was actually insufficient evidence to support claims on either side. Instead he argued for a consideration of all the variables that could affect the outcomes of evaluation: 'To date, aid evaluation has been a very blunt and inadequate tool....[which]....simply fails to capture all the variables that in the real world do have an effect upon outcome.' (p 184).

These themes, argued influentially by Cassen and Riddell, were echoed by others during the latter part of the decade. Mosley (1987), for instance, discussed the effectiveness of aid within the context of the way in which the politics of aid operate. He highlighted yet again the poor quality of information available via evaluations, partly, he felt, because many reports '…simply do not ask the questions which have to be answered.' (p 236). He also argued that the allocation of aid tended not to be related to measured aid effectiveness but was determined by historical commitments and special difficulties (such as natural disasters). Stronger critiques of the politics of aid also emerged in the late 1980s (for instance, Hancock's (1989) substantial critique of the large multilateral and bilateral aid agencies).

Further tensions were also highlighted, for instance that evaluation was generally seen as a 'marking out of ten' which no adult would like '…particularly if their future career is going to be influenced by the mark.' (Mosley, 1987, p 54). Arguing that this in itself creates an opposition of interest between evaluators and the evaluated, Mosley went on to describe tensions between groups of employees within the bureaucracy of the ODA. He discussed potential conflicts of interest between, on the one hand, the evaluation unit and, on the other, technical advisers, overseas project staff and administrators: conflicts which, he argued, tended to centre on:

'…not so much whether evaluation is done, but how it is done: the evaluators will lay emphasis on methodological rigour, the computation of statistical measures of project performance and wide diffusion of results, whereas the evaluated will be more interested in quick feedback to management, with academic rigour, computation of numbers such as ex post rates of return and wide diffusion of results at a discount.' (Mosley, 1987, p 54).

So was this a decade of two halves, as it was perceived to be at the time? Alternative perspectives have been offered since. Clearly there was a growing recognition of the importance of evaluation, but it was questionable whether this translated into practice, with questions raised over both the quantity and quality of the evaluations actually carried out. Rebien (1996) saw the decade as a time when rhetoric and reality did not match: first, the funding allocated to evaluation did not equate with the emphasis and importance apparently being accorded it; and second, efforts to develop theories, methods and methodologies of aid evaluation did not match the apparent recognition of the necessity and importance of doing so (Rebien, 1996, pp 50-1). Rebien's analysis led him to the conclusion that by the end of the decade aid evaluation was at a crossroads: agencies had acknowledged their evaluation needs, but now needed to allocate adequate resources, as well as develop appropriate methodologies, to address those needs.

Verspoor (1993) takes a different perspective but comes to a similar conclusion, focusing instead on the effects of these developments. He describes the decade as a time when education development stagnated and sees a key outcome as a need to 'share lessons from experience'.