Alis Oancea, University of Oxford

Procrustes or Proteus? Towards a philosophical dimension of research assessment

Paper presented at the British Educational Research Association Annual Conference, University of Warwick, 6-9 September 2006

This paper[1] intends to raise questions about public assessments of the quality of research in the United Kingdom, using the case of applied and practice-related education research as an illustration. It is work in progress, submitted for collegial discussion at the BERA conference in September 2006[2]. The paper argues that the current mode of research assessment is drifting towards a narrowly technicist view of applied research in education, in line with recent research policy trends in the UK and the US, but not with the diverse nature and specific contributions of research itself. It begins to counter this view through an exercise in thinking, from an Aristotelian perspective, about the mismatch between the construction of applied research by the education community and a particular view that seems to underpin current technologies (criteria included) of research assessment.

1. The Procrustean drift

The past 20 years have seen a growing emphasis on quality in relation to governance, public services, and consumer goods and services. In recent years, the discourse of quality has also permeated the field of research, including education research, to the extent that it has almost replaced, in public consciousness, earlier debates about, for example, ‘scientificity’. Tighter accountability regimes, in Britain and elsewhere, and the scarcity of resources available for public research in the late 1980s coincided with an increase in policy interest in research assessment, culminating in the creation of the Research Assessment Exercise in 1986. An unplanned consequence of this was the pressure for more articulated definitions of quality in research, likely (it was hoped) to attract widespread consensus and to justify funding decisions.

Following the 1993 White Paper on science and innovation, with its emphasis on competitiveness and wealth creation, the main criterion by which to judge research publicly outside academia became its ‘relevance’ (Cabinet Office, 1993). The expansion of the evidence-based practice and “what works” movements in the US and the UK made relevance even more central to public understandings of research quality, alongside more traditional criteria such as methodological soundness (epitomised, in the US health-inspired perspective, by randomised controlled trials and their systematic reviewing). In the case of educational research, this happened against the background of long-lasting controversies about the nature of inquiry and of knowledge in the field, and of some already heated disputes about its relevance, quality and impact in the late 1990s (see for example Hargreaves, 1996; Tooley and Darby, 1998; and Hillage et al., 1998, analysed in Oancea, 2005; Pring, 2000).

The interpretation of both “quality” and “relevance” in public assessments of research, however, generated increased concern in many circles about the treatment of certain types of research. Somewhat paradoxically, applied research often found itself among those singled out as under threat: what should have been an era favourable to all applied research, it was felt, turned out to encourage predominantly its more instrumental strands. Relevance thus remained an underdeveloped criterion, awkwardly close to “the eye of the beholder” and worryingly prone to political inclination. As many have argued, the Research Assessment Exercise (and the tight regime of accountability that accompanied it), despite ostensibly supporting applied research through the recommendation that it be assessed “fairly and against appropriate criteria” (UK Funding Bodies, 2004, para 47), may have contributed to the reinforcement, rather than the solution, of these problems (see e.g. the criticisms of the RAE expressed in the Roberts and Lambert reports of 2003 – UK Funding Bodies, 2003; HM Treasury, 2003)[3].

The problem resided, in part, in a narrowing of the “official” concept of quality, which, for lack of a better definition, tended to hover somewhere in the space between scientificity (often defined by reference to the natural sciences), impact (reduced to, for example, observable and attributable improvement in practice), and even productivity (whereby volume was mistaken for an indicator of quality). The plethora of standards, shopping-list criteria, and “cut-off point” guidelines currently in circulation illustrates this sufficiently.

Further, the concept of application itself was restricted to linear implications of theoretical work, and to “technical solutions” to practice-defined problems (“what works”). Finally, assessment was generally defined by its use as a means towards pre-determined ends (distribution of funding and holding to account[4]), and to a large extent ended up subordinating appraisal to measurement, and process to outcome – i.e., falling into the extremes of what Stake (2001) calls “criterial thinking”. The risk this entails is the excessive instrumentalisation of assessment, and of the concept of quality with which it operates, and thus their alienation from interpretations of good practice shared within professional communities. In such a context, the assessment of applied and practice-based research is likely to drift towards simplistic concepts of linear application and technical solutions. This is what I shall call the “Procrustean” drift in public research assessment in Britain.

The story of Procrustes, though quite gruesome, is worth a brief retelling. Procrustes (“the stretcher”) appears in the myth of Theseus as a robber living in Attica, who enticed passers-by with deceptive hospitality and, after having them lie down to rest on his iron bed, would make them “fit” the length of the bed by either “stretching” them or chopping off their limbs[5]. It is the contrast between the rhetoric of accommodation and the outcome of arbitrary, forceful conformity that may make the analogy to Procrustes’ bed appropriate to the above-mentioned narrowing of “quality” and “application” to fit the requirements of a technically-defined mode of research assessment. In other words, while public discourse insists on “ensuring that appropriate measures of excellence are developed which are sufficiently wide as to capture all types of research, including practice-based research, applied research, basic/strategic research, interdisciplinary research” (UK Funding Bodies, 2005, Para 3c), many critics have commented on what they perceived as “undue emphasis” in, for example, the RAE “on academic publications rather than applied work”[6], and a “bias against multi-disciplinary research in favour of theoretical and against applied research”[7]. It is not the actual acts of evaluation undertaken by members of the RAE panels that are of concern here; rather, the concern is with the overall framework of official discourse that constrains and interprets their work.

This paper will suggest one way of reframing the problem: by interpreting ‘application’ as a complex entanglement of research and practice, ‘assessment’ as deliberation and judgement, and ‘quality’ as excellence or virtue, in the classical (Aristotelian) sense of these terms. The remainder of the paper will clarify what I mean by these terms, though without elaborating in detail on the concept of assessment. First, it will expose some of the limitations of current “official” concepts of applied research; second, it will outline a threefold understanding of excellence in applied and practice-based education research; and third, it will examine potential tensions and points of contact between the three domains of excellence proposed.

2. Constructions of “applied” in “applied research”

There are many competing, though overlapping, views about the specific modes of research to be included under the category of applied research. Some of the more powerful interpretations in the public policy domain in recent years have been those of the OECD (2002a) and of Stokes (1997). These interpretations have also been strongly urged in the field of education, most recently by the OECD in its review of educational research capacity in England (OECD 2002b) and by the English National Educational Research Forum (Feuer and Smith 2004).

The OECD Frascati Manual defines applied research as ‘original investigation undertaken in order to acquire new knowledge … directed towards a specific practical aim or objective’ (OECD 2002a: 78). The Manual goes on to suggest that applied research is undertaken either to determine possible uses for the findings of basic research or to determine new methods or ways of achieving a specific and predetermined objective.

By cross-cutting the idea of ‘use’ with the pursuit of ‘fundamental understanding’, Stokes (1997) proposes a quadrant model that encompasses ‘pure basic research’, ‘pure applied research’, and Pasteur’s quadrant: ‘use-inspired basic research’ (Figure 1). The latter sort of research, he argues, should address genuine problems identified by policy makers and practitioners; such research could thus contribute both to knowledge production and to policy and practice.

Many of the assumptions that seem to underpin both interpretations outlined above remain firmly within an instrumental framework. On the one hand, they see applied research as a means towards attaining pre-defined aims; these aims are external to the process of research itself, and their attainment coincides with solving practical problems. On the other hand, they seem to embrace a hierarchy of knowledge in which the difference between theoretical, propositional types of knowledge and practical, implicit ones is one of status, rather than of mode; hence the prevalence of “basic” research (be it “pure” or “use-inspired”) in both frameworks. Further proposals along similar lines – including the definition of “strategic” research as research which aims to combine scientific understanding and practical advancement, but which also highlights the political goal of achieving change (OECD 2002a; Huberman, 1992) – preserve these assumptions.

Yet models of research that have attracted growing interest over the past few decades have fundamentally challenged the idea that applied and practice-based research are instrumental pursuits. Action research (Stenhouse, 1985; Carr and Kemmis, 1986; Elliott, 1991), evaluation research, and reflective practice approaches (Schön, 1983), for example, illustrate how research may contribute to theoretical knowledge while at the same time being part of changing practice in a way that is intrinsically worthwhile. They foster theoretical, as well as practical, modes of knowledge, through processes whose considerations of value are not bounded by the notion of “expert” achievement of pre-determined ends. Most importantly, they point to the complexity of the ways in which research and practice are entangled. Let us illustrate this last point by returning to the Aristotelian framework to which I referred in the opening section of this paper.

Aristotle operated with a distinction between theoretical (contemplation), productive (making) and practical (ethical action) modes of knowledge (theoresis, poiesis, and praxis) (Aristotle, EN, 1139a 27-28, 1178b 20-21). The “rational states” (all meta logou, that is, with logos: a rule, or discourse) corresponding to these modes and epitomising their “goodness”, virtue, or excellence, were episteme theoretike (knowledge that is demonstrable through valid reasoning); techne (technical skill, or a trained ability to produce rationally); and phronesis (“practical wisdom”, or the capacity to act truthfully and with reason in matters of deliberation, thus with a strong ethical dimension).

What makes applied and practice-based research peculiarly difficult to judge is their stubbornness in mixing theoretical claims and concerns with practical ones. Though in principle it is possible to look at them from a purely methodological perspective, this leaves out the part of the problem that is in fact the most topical: the relationship with, and projection of, practice and policy, as put forward in applied and practice-based education research. Part of the task of seeking a more rounded judgement is therefore to see whether and how it may respond to the diversity of ways in which applied and practice-based education research place their emphasis on the relationship with practice (including policy) and with practitioners and users.

This relationship may be seen as a delving into the concrete and the particular, i.e. a bringing of abstract or general findings to bear on (or a trying of them against) particular situations. From such a perspective, research and practice do illuminate each other, but within a framework in which theoresis, the search for demonstrative knowledge, takes precedence, and “application” only translates as a gliding from general to particular and from abstract to concrete, towards the latter of each pair. Many of the traditional concerns about “basic” research (such as validity, transparency, and the like) remain central throughout this process; and if this is what a specific project attempts to do, or claims to have done, then it is against these concerns that it needs to be judged.

This relationship may, however, be seen differently in other research situations. For example, it could appear as one whereby research contributes to the achievement of specific practical ends, or to increasing practitioners’ control of particular situations, through, for example, recommendations, procedures, guidelines, checkpoints, and modes of intervention. This is the poietic sense of the relationship, oriented towards the production of outcomes in controlled circumstances. Depending on these ends, research may support school practice and educational policy-making (value for use), but also contribute to, for example, strengthening the relative position of R&D “systems” and of individual institutions (strategic and economic value). At its bluntest, this relationship can be seen as merely instrumental: research is instrumental to the choice of means towards the practical ends of practitioners (e.g. managing classroom behaviour, improving literacy, and so forth); these ends are external and take priority, and it is through their lenses that research claims to have any practical value. This instrumental sense of the relationship is pervasive in public discussions of the dissemination and impact of education research in terms of “what works”.

There are, however, further, and arguably more organic, ways in which research can relate to practice. An important limitation of the perspective outlined above is that it assumes practice to follow even, orderly and determined patterns of relating to research; it thus does little to account for the “messy”, “swampy” character of a great part of both educational research and practice (Law, 2004; Schön, 1983). This is not to say that technical ways of conceiving the relationship between research and practice are worthless, or that they are illegitimate ab initio; rather, it is to say that they must be taken for what they are, with their benefits as well as their limitations.

From a perspective centred on praxis (virtuous action in the public space), the entanglement of research and practice becomes akin to a way of life: it is first-person action, the striving for excellence of which comes from within, rather than from external ends, gains, or impositions, and the wisdom of which is “not to disregard reason and principle, and yet to attend to them only within the context of an even closer attention to concrete situations” (Dunne, 1993, p. 313).

What the above shows is that the relationship of research with practice, as instantiated in applied research, is rarely as straightforward as the linear progression basic-applied-development implies. It is in fact a complex entanglement at many levels and in many shapes; and as such it does not necessarily fit the assessment technologies currently in place (though it may, to its peril, be stretched or cut to fit the frame).

3. Expressions of excellence in applied research

How would one begin to describe “good” applied and practice-based education research (as defined above), and their relationship to educational practice (and policy), with respect to each of the three domains described above (theoresis, poiesis, praxis)? This section will briefly outline what I see as expressions of excellence in each of the three domains, captured by the concepts of episteme[8] (generalisable knowledge), techne (technical skill), and phronesis (practical wisdom).

The sense of excellence (internal or external) embedded within these three concepts[9] makes them a potential starting point for reinserting discussions about ‘quality’ into the long-standing conversation about modes of knowledge and rationality, and about the relationship between research and practice. It is through such conversations, I would hope, that the discourse of quality in applied and practice-based research may move away from purely managerial and instrumental frameworks (which Elliott (1990) associates with a “decline in excellence”), and towards an understanding that is more attuned to the actual diversity of modes of research and of their links to practice.

3.1. Episteme

For Aristotle, excellence in theoresis, the contemplative mode of knowledge, is episteme theoretike, or “scientific” (demonstrative) knowledge involving “judgement[s] about things that are universal and necessary”, and based on syllogism (EN, 1140b, 30-35, 1139b, 25-35).

Plato’s opposition between episteme and “doxa” (opinion), as well as Aristotle’s emphasis on the forward links from episteme to “sophia”, the high-ranking philosophical wisdom, may suggest that episteme is superior to techne or phronesis. This does not, however, reflect Aristotle’s position well, as he repeatedly returns to the point that each domain has its own excellence, which cannot be reduced to that of any of the others:

Therefore we ought to attend to the undemonstrated sayings and opinions of experienced and older people or of people of practical wisdom no less than to demonstrations; for because experience has given them an eye they see aright (Aristotle, EN, 1143b, 10-15)[10].

One of the core ideas that emerged powerfully from the case studies and interviews undertaken as part of the ESRC project on which this paper draws (Oancea and Furlong, 2006) was that “traditional” concerns about (abstract, propositional) knowledge and its nature, sources, generation, dynamics and validation, including issues of methodological rigour, should be expected to remain important in considerations of worth, particularly at process level, of applied and practice-based research. This was seen as being in the nature of “scholarly” research in general; nonetheless, in the case of applied and practice-based research, considerations of worth were not to stop here, but needed to be balanced by further concerns, roughly related to the relationship between research (its processes and outcomes) and practice and policy. Ignoring this in research assessment would be a missed opportunity to understand the epistemic concerns of applied and practice-based research in their own terms.