THE TYRANNY OF THEORY AND THE STRANGLEHOLD OF PARADIGMS

The role of research methodology and theory in Teacher Research

Derek Woodrow

Manchester Metropolitan University

Paper presented at the British Educational Research Association Conference, Cardiff University, 7-10 September 2000.

This paper represents yet another plea for reconsideration of the role of research methodology in practitioner writing and for the avoidance of dogma in the presentation of theory. It arises from the experience of reading over two hundred master's dissertations and Ph.D. theses during the past two years or so. The views have become increasingly pertinent because they provide a commentary on the 'Best Practice' developments of the DfEE, which invoke 'research by teachers' as if it were not problematic. Like the 'Best Practice' proposals, all the dissertations and theses were defined in terms of 'doing research', but the outcomes occasionally wandered some distance from the usual definitions of research. Peter Foster (1999) reviewed the TTA teacher researcher outputs, which were the forerunners of the Best Practice proposals, using three criteria: clarity, relevance and validity. He would not classify about 20% of them (5 out of 25) as research, since they 'did not seem to be projects in which the key goal was the production of knowledge'. Their data collection 'did not appear to be particularly systematic or extensive'; they were 'personal descriptions, or justifications for, their own practice'. Foster goes on to find serious problems with much of the reported work as 'research' projects. Foster's criteria are eminently reasonable ones to adopt (though one might want to define some different ones, or use different terms), and the validity of many of his judgements is clearly apparent. His critique, however, does not arise from a consideration of the practical value or professional worth of the activities but from their claim to be 'research', a claim which of necessity implies the application of strong requirements. Indeed, the TTA set them up as 'high quality research' which would 'add to the stock of knowledge available to teachers and the research community'. In the same way, one suspects that the addition of a requirement for 'Best Practice' activities to be done in relation to higher education is based on an assumption that H.E. will provide the research underpinning, a problematic assumption (see Hammersley, 1993).

The 'strike rate' for dissertations as research texts is somewhat higher than this, with few that cannot readily be classified as intended research. In nearly all of them, however, one reads the same transposition of material from the standard research methodology texts – Carr and Kemmis, Denzin and Lincoln, Bassey, etc. There were occasional departures from this picture, but probably in fewer than 5% of cases, though a useful example of positive critique can be found in Mellor (1998). In almost every case one reads that research can be categorised into:

  • positivistic – usually related to the gathering of numerically analysable data through questionnaires or Flanders-type observation (not Flanders and Swann!)
  • illuminative – which has two subgroups, usually chosen from three potential alternatives: critical research, action research or interpretative research. This also involves questionnaires (variously analysed) and interviews (almost always semi-structured).

Little of this seems relevant to the work which appears in the dissertations, and such categorisations effectively set up false premises of 'choice' and free decision-making in the research work which follows. These research paradigms are established in the literature in the context of formal research projects following carefully structured samples and pre-chosen, value-free enquiries carried out by independent and unprejudiced researchers who are often learning the research trade. Hammersley (1993), in an article more directed at intra-research politics, nevertheless argues cogently for a distinction between the researcher qua researcher and the teacher as researcher. In practice, of course, the research in the dissertations/theses is carried out by practitioners, who approach the research from a position of already existing knowledge and beliefs and who are using research techniques as a means to an end. These practitioners have a clear commitment to the issue under consideration rather than to their future research careers. The enquiries are also often carried out within the structure of a course with built-in paradigms, expectations and assumptions which constrain the student into particular models and responses. This leads to a number of charades and games of pretence in the writing. It also leads to misconception and misunderstanding.

This is not a criticism of the courses, nor of the research methods texts. They present the methods and criteria for formal research clearly and directly, but, like all literature and course content, they become appropriate only when interpreted into the context of the work in hand. The methods employed in practitioner enquiry are the same as those used in 'pure' research, and the questions to be asked – validity, reliability, authenticity – are still relevant; but just as the different research paradigms interpret and respond to these terms differently, so too must practitioner research. For many practitioners, too, the time-scale of the enquiry is usually externally determined; they often begin their research before they have fully organised and planned the work and have of necessity to indulge in retrospective planning and justification. Emergent research is often the keynote, but formal research usually demands early structuring. Students also come to courses with pre-formed expectations of what it is to do research. They assume that 'objectivity' is fundamental, that they must justify their work in relation to accepted criteria, that experts know. Whilst course tutors and some texts do encourage the confidence to be 'real' and 'authentic', many practitioners lack personal confidence in the research domain and do not transfer their professional confidence across. It can be extremely difficult to convince them of their own authority and competence in carrying out their enquiries. Their audience is rarely a critical national academe but rather their fellow teachers, their school and their context. In the terms of an engaged practitioner it is 'authenticity' which takes priority over reliability, and face validity which counts over technical validity. In the terms coined by Marion Dadds (1995) (though yet to be fully developed) it is often 'democratic validity' which matters overall: do your professional colleagues recognise the significance and agree to identify with the outcomes? This is particularly the case where, as so often, the enquiry is an evaluation and prescription for local action and policy.

It is difficult not to be involved in the trapping of students within assumed traditions and expectations, often imported from one professional context (professional action beliefs) into another (research enquiry). This concern is closely connected with a short paper on the problems of faiths – those beliefs which can countenance no alternative issues and no ambiguities (Woodrow, 1995). Practice is full of ambiguity and contradiction, of paradoxical actions. Theory, and theoretical research positions, usually have no truck with this messiness – even postmodernism seems to me to be trapped into a form of such determinism, in that it can countenance no certainties and no absence of ambiguity (Bridges, 1999). I have previously argued (Woodrow, 1995) that the practitioner regularly reacts and acts impulsively, responding to different motivations and beliefs from moment to moment. This flexibility/lack of direction is viewed with antipathy, as undesirable. It is, however, normal and real, and as such should be accepted and applauded for its virtues! This is a different position from that of the theoretician, who needs to pursue the theory unswervingly to its limits in order to uncover what might be its consequences (Lerman, 1998).

What follows are some of the issues which reflect these concerns and which contribute data towards a more consistent set of conclusions. They are relatively random observations at this stage but will hopefully prompt the development of more appropriate criteria for practitioner research – not a new quest, but one still some distance from its destination.

It is clear that practitioners do not enter a research project with open minds. Their experience almost always provides them with some beliefs and expectations. Fundamentally they care about the outcomes – not as an abstract truth but as proof of their effectiveness or efficacy (or as aids to be used by them) – they are not independent researchers whose only quest is simply to reveal the facts, whatever they are. The essence of a positivistic approach is the belief that there are truths available, and despite their protestations that they are embarking on illuminative research paradigms, most writers are seeking reassurance that they are correct in their views. Their hypotheses reveal their commitment and their day-to-day professionalism demands it. Alternatively, they embark on evaluations of projects to which they are committed, in which they believe, and whose specific outcomes constrain the choice of alternative analyses. In this sense most practitioner research is positivistic. The misconception that if you avoid counting or graphing then you are being interpretative is an interesting confusion of paradigm and method. This is one reason why questionnaires followed by semi-structured interviews predominate – they are clearly susceptible to researcher construct and control and to embodying researcher bias. It is not unusual for questionnaires to lead to fairly bland responses but for interviews to show more intense responses. The reasons for this are not difficult to see: not only are the questions devised in order to provide confirmatory responses, but the interviewer usually seeks to 'put the interviewee at their ease' by stressing their common interests. Headteachers interviewing fellow headteachers, teachers indicating a common concern for the situation in which classroom assistants find themselves, a black interviewer empathising with the shared experience of racial harassment – all provoke responses which reflect this interviewer empathy. It is rare for an interviewer to seek confirmation of the opposite of what they want to know! Illuminative research by practitioners frequently involves shining a narrow beam on one element of a complex picture, the actuality of which is never fully evident to the researcher (for reasons elucidated by Hammersley, 1993) or the reader. The results are most regularly rather like those pictures of everyday objects taken from peculiar angles, or a detail from a famous picture from which you have to reconstruct the whole.

This prior commitment and practitioner knowledge makes real critique very difficult. Being critical – as opposed, of course, to criticising – requires an openness of mind to explore the alternatives, whatever they are and whatever their implications. To quote a 'complaint' by Martyn Hammersley (1999): 'today one rarely gets the sense of engaging with the data to explore the different meanings it could convey… instead dollops of data are doled out as if their meaning were obvious and univocal. This is what Glaser and Strauss criticised many years ago as "exampling"…'. Whilst this is a legitimate complaint for a researcher to make, for a practitioner there are other grounds for accepting a particular interpretation as 'correct' (though this leads one on towards the tyranny of accepted theory – see below), grounds which can justify, in the appropriate context, a different, less formally 'researchy' (see Foster, 1999) way of proceeding. In his analysis of the TTA teacher researcher outputs Foster identifies a number of concerns which differ from those which would be generated by a formal research proposal. Many of the activities appear to fall short of what would be required of good research modelling – but, as I keep reiterating, there are other reasons for the activity. The use of research methodology does not in itself guarantee research quality, and one of the complications is that practitioner enquiry uses many of the same tools, but to a different purpose. This distinction needs to be clarified if we are to build a substantial debate onto practitioner-derived theory.

What matters, of course (what a revealing phrase), are the questions which the application of a particular paradigmatic methodology raises – questions of reliability and validity are positivistic questions which illuminative research finds either difficult or irrelevant. Authenticity and relevance are difficult questions for positivists. A recent paper which one of us reviewed was very clearly articulated, presented its research approach very fully, and justified its outcomes. But in seeking such validity and reliability it lost all sense of what it was looking at, defining categories which became too wide and unfocussed to speak to the practitioner. What is often interesting is not whether the appropriate tests have been applied but whether or not, for a particular piece of research, the question is appropriate. Definitions of case studies or of ethnographic research issues might or might not be relevant. In a recent thesis exploring the life experiences of a lecturer, using techniques of psycho-analysis as the main research tool, an attempt was made to describe the work variously as a case study of a single person and as an ethnography of a single person. There was a sense that the researcher felt it imposed upon her to identify her research with the pre-specified canon of research descriptions. Far more interesting is to explore the problematic issues raised by such categorisations within the research as carried out. Many practitioner researchers explore the nature of collecting samples when they have no alternative sampling possibilities available to them, so that what they have is a pre-determined collection rather than a sample. Indeed, in much practitioner illuminative research there is no sampling but a small local population. There is rarely sufficient data available to attempt to show that this mini-population is in any sense representative of a wider constituency. They may, of course, be really presenting their research c.v., trying to establish with the examiners that they know what research is about. There are, however, other ways of doing this. The thesis/dissertation should be about the problem/issue being investigated and the ability to bring appropriate tools to bear on that context, and not essentially a response to an examination question.

One of the most outstanding examples of the undigested application of a research concern reduced to methodological irrelevance is that of triangulation. For me, a one-time mathematician, a triangle is a clearly defined object – uniquely determined not by any old three pieces of data but only by specific collections of three pieces of data. Coincident points, coincident lines, and lines in different planes all confound the assumption that triangles are made from any three points or three lines. Very often the same questions asked of two different groups are presented as triangulated data – with little concern as to whether they represent really distinct views of the issue (i.e. three points on the same line, or two parallel lines) or whether the two samples come from the same population (lines in different planes). Often the same questions are asked in a questionnaire and in interviews, and this is presented as triangulated data (two coincident points). The standard categorisation into different types of triangulation (see e.g. Cohen and Manion, 1989: method/theory/data/investigator) almost always leads to an appeal to the use of multiple methods as representing in itself a guarantee of triangulation of outcomes. It is unfortunate that this clarity in the description of potential methods has served only to obscure the reality of the question of validity which triangulation seeks to answer. It has become a meaningless (and often thoughtless) application of a standard statement whose appropriateness is never questioned. Many practitioner researchers claim to be conducting illuminative/interpretative research for which triangulation may be an irrelevance, but their very invocation of the term can sometimes betray positivistic aspirations. What is rarely grasped is that the notion of triangulation merely raises very clearly the issue of validation and reliability in the context of qualitative research methodology; it does not itself resolve that issue. Multiple methods may give a means of performing this task, as may using different researchers; equally, however, an appeal to other research or, in this context, to other educational theories or to practice may do the task even more effectively.

Action research is yet another confused concept. A number of practitioners are attracted to 'action research' as a phrase redolent of their concerns and professional context. Belonging to this club confers a worthwhile label. Yet only very rarely does a dissertation or thesis contain any complete (or even attempted) plan – action – evaluate – replan cycle, a fundamental assumption of most action research. Action research is seen as synonymous with evaluating an action, rarely with any review or analysis of the action phase; indeed, the action phase is very often not amenable to influence by the researcher/teacher, or even under any accessible control. The attraction of 'action research' to practitioners is that it does imply pre-commitment. The actions explored are free from the research itself – only their outcomes are tested and assessed – and as such can therefore pre-exist the start of the research. The acceptance of the actions is based on professional argument and professional beliefs. In this way it seeks to meet the necessary context of practitioner research in its prior commitment and beliefs. My quarrel is not with action research – still less with its methodology – but with the casual justification of uncritiqued evaluations under this heading, without respect for the principles invoked by the 'action research' label (Anderson and Herr, 1999). It illustrates another instance of the use of naming methodological theories to justify unreflective and uncritical actions. It represents the trapping of students into the tutor's terminology and jargonised language, providing a spurious justification and authority for their work. It is so often inauthentic.