Negotiating the minefield: practical and political issues in policy research

Janice Malcolm, University of Leeds

One of the major strands in continuing education research has long been a focus on practice – the facilitation and individual experience of learning – which may, intentionally or otherwise, exclude any broader analysis of the social impact of particular forms of practice. Another major strand has been the focus on the actual and desired roles and functions of CE in its political and social context – how things are and how far this diverges from how things ought to be – often with little explicit attention to the changes in policy and practice through which desirable change might be achieved. These two ‘micro’ and ‘macro’ strands have usually run parallel courses and have only rarely been combined to produce a perspective on the social consequences of specific aspects of educational practice. This lack of a more fully integrated perspective has given rise to a situation in which arguments about continuing education policy frequently take place on a largely speculative (albeit obviously ideological) level; policies are advocated or opposed as if the protagonists were certain of their consequences, although the evidence from which this certainty derives may be either flimsy and circumstantial or purely theoretical. Policy research can be seen as a possible point of intersection between these two strands, where attempts can be made to integrate the evidence gleaned from practice with broader social and political analyses.

One possible explanation for the relative scarcity of policy-oriented research in CE is that it is just too difficult to do. It presents particular problems, whether it is research on policy or research for policy – a distinction which may, on the face of it, amount to little more than a difference in funding, but which has implications both for the ease with which the research can be carried out and for the political constraints to which it is subject. Often it is concerned with a situation which continues to develop unpredictably as the research is being carried out, and which has no clearly defined boundaries. A historical perspective is required to enable the situation to be explained in terms of its evolution, and this is likely to present problems common to other varieties of historical research. However, much data-gathering in policy research involves examining very recent or even current events, and eliciting information from people who may still be engaged in those events on a day-to-day basis, or from ephemeral documentary evidence of questionable status. The most obvious frustration in this situation is the phenomenon of ‘instant history’ – no sooner has a sentence hit the page than the tide of events seems to deprive it of whatever validity it may have had. But this is something one can learn to live with. Of greater significance are the methodological difficulties of researching policy in continuing education, and the political and ethical difficulties which work in this field poses for both the researcher and the researched.

The conceptually slippery nature of policy as a field of study may be one of the reasons why it has remained an unattractive choice for many CE researchers. The arguments about the competing claims of various disciplines to ‘ownership’ of the field, and about whether particular areas of policy require their own varieties of analysis, not to mention long-standing disputes about theory, methodology and measurement, might suggest that there are more secure and respectable ways of employing one’s research skills. However, as we know, there is, from a methodological standpoint at least, no such thing as secure research.

The practical difficulties of carrying out policy research are not confined to the field of education, although they have been more comprehensively explored in other areas. Classical ‘experimental’ designs for evaluating the impact of policy – usually involving control groups who are not exposed to the policy ‘treatment’, and the measurement of variables at the beginning and end of a programme – are well-nigh impossible to arrange, and in any case raise a number of questions about the ethics, desirability and validity of such an approach which have been well rehearsed elsewhere. The simplicity of the experimental design may nevertheless hold a superficial attraction for the epistemologically challenged researcher, despite its major weaknesses, because its scientific antecedents seem to offer the tantalising hope of finding data that might just be construed as facts. Unfortunately, it may be the case that, as a number of writers have suggested, the greater the degree of technical rigour employed in assessing the impact of policy, the more likely it is that the net effects will be found to be zero[1]. This might provide an (albeit cynical) clue to the reasons for the alleged lack of rigour in much educational research: if rigour would simply permit us to demonstrate that our work had no discernible impact on anything, we should perhaps be foolish to pursue our research in any but the sloppiest manner possible. Less cynically, one might suggest that the marked preference for exclusively qualitative methods in continuing education research, whilst providing us with many valuable insights, has left the field unbalanced and unnecessarily difficult to defend and promote in policy terms.

Policy researchers in continuing education often find themselves working backwards in experimental terms. They may start by considering a current situation – the ‘results’ of policy – and use this as a basis for discovering the original aims of a policy initiative, the extent to which it has been ‘successfully’ implemented, and what its effects have been. They are thus piecing together the theory of the policy, which may not be congruent with their own theoretical perspective, and the processes involved in its implementation, on the basis of evidence which is likely to be both highly subjective and incomplete. At the same time they must maintain their own theoretical perspective in order to evaluate both the theory and the processes involved; they are therefore frequently working on two levels simultaneously. To put the problem less formally: they probably wouldn’t have decided to go there in the first place (if ‘there’ can be identified at all), but if they had, they wouldn’t have started from here. This naturally magnifies to several times their normal size the usual difficulties of inferring causality between policy and outcomes, identifying the ‘slippage between planning and implementation’[2], distinguishing the specific conditions for outcomes, and addressing the perennially vexed question of measurement.

Methodological problems, of which those cited here are only examples, might be seen as a mere irritation or even an irrelevance if policy were simply an abstraction and of interest only to a small and obscure group of academics; one could argue and theorise indefinitely without ever impinging on the lives of others, or indeed reaching even the most tentative of conclusions. However, academics in continuing education actually engage continually in debates with others about whether and why particular policies should be adopted, retained or discontinued, and how they should be implemented. Policy decisions about continuing education, whether they are taken at an institutional, a national or an international level, must eventually have tangible consequences for individuals and for social groups, and this is presumably one of the reasons why we bother to argue about them.

One response to methodological despair is to point out, with some justification, that research is only as sound as the theory behind it. The problem here is that, regardless of whether the researcher’s theory will withstand scrutiny, it may not be the only theory applied to the research findings. Research about policy always has at least a dual function: it helps to inform our own arguments about which policy initiatives are likely to produce particular outcomes, but it also provides policy-makers and other interested parties with evidence about the utility and social consequences of particular educational initiatives. The researcher may have very little control over the uses to which his or her work is put; and this is where questions of morality and political expediency begin to loom large, both for the researcher and for the subjects of research. These are of course not new questions, either in education[3] or in social science in general[4], but one could argue that they require further consideration in continuing education, a field of policy in which the bases and processes of formulation and implementation have been only partially explored, and are now being transformed.

A practitioner colleague recently commented on a draft paper of mine concerning policy issues: ‘Don’t ask questions to which you don’t want to know the answers’. This defensive and expedient view was perhaps illustrative of a common problem. I did indeed want to know the answers, but whatever they turned out to be, they could have no consequences for me other than intellectual ones: a minor revision of my Weltanschauung, or an interesting new angle on my research, perhaps. But for others – practitioners, managers, students, institutions – the answers, depending on how they were interpreted and utilised, could have far-reaching consequences which might be neither predictable nor necessarily positive. Of course it might be argued that to imagine one’s research will have any consequences at all is either narcissistic or naive, or both; Booth’s avuncular advice is that ‘... while it is right for [policy researchers] to believe in the value of what they do it is arrogance to have too much faith in its importance.’[5] Clearly it is possible that no-one will read one’s work – but perhaps John Patten will, and his interpretation of it may differ markedly from one’s own. Finch cites the example of a researcher, a supporter of comprehensive schooling, who was discomfited to hear her work cited on a radio programme by Brian Cox as evidence that educational selection should be retained[6]; the meaning attributed to research findings is clearly at least partially dependent upon the ideological stance of those who use them. It is therefore equally naive to attribute so little practical significance to one’s work that the range of possible consequences is ignored. The choice for the researcher is either to abandon the issue on the grounds that some questions are best left unasked, or to take the risk of asking the question in the hope that the ‘answering’ process may ultimately have positive consequences, whether in terms of a clearer understanding of the consequences of educational processes (better theory), or in the form of improved educational provision (better practice).

Questions about the power relations between researchers and their subjects, about accountability in research, and about the possibility or desirability of ‘ethical neutrality’ have exercised the minds of social scientists for many years. Two of the roles commonly adopted by continuing education researchers as a means of dealing with some of these questions are those of the advocate and of the committed and active participant; these roles may feel more comfortable to the researcher than the adoption of a spurious neutrality, particularly since educational research, unlike much social science research, is frequently carried out by practitioners themselves. They may also help the researcher to avoid uncooperative responses from wary or vulnerable subjects – and in policy research the subjects may also be practitioners or other ‘stakeholders’ in the policy process, who perhaps have good reason to be wary. However, the adoption of such a role clearly raises a number of political, ethical and indeed epistemological problems; in extreme cases it may give rise to ‘the danger of becoming excessively committed to a “cause” to the point where integrity is abandoned for the blind pursuit of an obsession’[7]. A more common and less extreme scenario is that the researcher’s explicitly partisan stance sanctions a selectivity in the use, manipulation and interpretation of data sufficient to cast considerable doubt on the validity and applicability of the research, even though the researcher’s ‘integrity’ remains intact.

This approach rarely throws up any unpalatable answers; that is not what it is intended to do. It is therefore more likely to confirm than to challenge the researcher’s own theoretical framework, which in turn can lead to unhealthy professional entrenchment. This leaves continuing education in a weak position when its practices and professional wisdom are subjected to unsympathetic or even hostile interrogation, as has been happening recently. It is now less easy than it once was to use a cloak of professional expertise or political purity to protect ourselves from ‘inappropriate’ questions about the social (or even economic!) value of our work; we are forced into a position where justifications for practice have to be provided, often in terms we consider to be inept. However, it is preferable for those of us in continuing education proactively to ask difficult and delicate questions, and to attempt to provide answers for ourselves, than to react indignantly and defensively to outsiders who have the temerity to demand answers which we do not have. The process of doing this is deeply problematic and also risky; it requires us to explore more fully the subtle distinctions between an admitted lack of neutrality and a straightforward surrender to bias, and to seek an unaccustomed distance between ourselves and our practice. A clearer and better-substantiated theoretical basis for our contributions to the policy process is necessary if our interventions are to amount to more than a pragmatic, and possibly misguided, plea for the retention and extension of that which already exists.

[1] P. H. Rossi and H. E. Freeman (1985), Evaluation: A Systematic Approach. Sage.

[2] H. Chen (1990), Theory-Driven Evaluations. Sage, p. 56.

[3] e.g. J. Finch (1986), Research and Policy: The Uses of Qualitative Methods in Social and Educational Research. Falmer Press.