The heuristic and holistic synthesis of large volumes of qualitative data: the TLC experience.

Phil Hodkinson, Gert Biesta, Denis Gleeson, David James and Keith Postlethwaite

Paper Presented at

RCBN 2005 Annual Conference

Cardiff, 22nd February 2005

Abstract

The Transforming Learning Cultures in FE project faced three daunting problems in making sense of its qualitative data. Firstly, the volume of data was a problem, with over 700 taped and transcribed interviews, notes from between 100 and 200 observations, and 16 detailed tutor diaries. Secondly, the research team consisted of 30 people, academic and FE based, located in 8 different institutions. Thirdly, there was a problem of scale of analysis – making sense of 16 learning sites, of the individual tutors and students within them, and of the more macro FE context. This paper explains why we did not adopt some of the well-known conventional approaches to qualitative analysis based upon the coding of transcripts. It goes on to describe, explain and evaluate our alternative process. This was staged, beginning with individual site descriptions and some individual student and tutor stories. We then heuristically used two ‘instruments’ to analyse both the learning cultures of these sites and the interventions which changed the cultures and learning within them. Finally, we moved beyond these instruments to develop an overarching theoretical position, together with an integrated account of learning, presented differently at different scales of examination. At its heart, we would describe our approach as one of collective and collaborative interpretation and synthesis, rather than the more common approach of analysis.

Contact details

Prof Phil Hodkinson,

The Lifelong Learning Institute,

Continuing Education Building,

University of Leeds,

Leeds, LS2 9JT,

UK.

Tel: 0113 343 3223

Email:


Introduction

This paper has two purposes that are interwoven. The first is to explain the solution to a logistical problem encountered in the Transforming Learning Cultures in Further Education (TLC) project: that is, how to make sense of the huge volume of qualitative data that the research was generating, together with overlapping challenges rooted in a large, diverse and dispersed research team. The second is to use these specific problems to address one of the key points of contention in debates over educational and social science research methodology in the last 20 years: the extent to which methodology can and should be objective and neutral, rather than directly contributing to the construction of ‘findings’ from the data. Here, we address that macro issue through a specific focus on what is commonly termed data analysis. The first part of this paper establishes the nature and background of these two related problems, before going on to describe and explain the approaches adopted by the TLC. We then conclude with a more general discussion of the second, and deeper, methodological issue.

The TLC Project

Understanding the logistical problem faced by the TLC project requires an understanding of the nature of the project, what it was trying to do, and how it was organised (see Hodkinson and James, 2003, for a fuller account). Our starting point was that learning in FE, as elsewhere, is complex, and we set out to research that complexity, rather than to focus on one or two key variables. In the TLC we use the term ‘culture’ to indicate these complex relationships (James and Diment, 2003; Hodkinson et al., 2004a, b). The project aimed to examine, within a variety of settings, what a culture of learning is and how it can be transformed, based upon an acceptance that ‘learning and thinking are always situated in a cultural setting, and always dependent upon the utilization of cultural resources’ (Bruner, 1996, p 4). To conceptualise this, we turned to the work of Pierre Bourdieu (e.g., Bourdieu, 1977; 1998; Bourdieu and Wacquant, 1992; Grenfell and James, 1998). Bourdieu’s theory-as-method provides a relational approach to learning that emphasises the mutual interdependence of social constraint and individual volition. Social practices are understood as having both an objective and a subjective reality at one and the same moment. Complex human relations and activities can be understood via theoretical tools that enable the ‘unpacking’ of social practices in social spaces: examples of these ‘tools’ include the notions of habitus (i.e., a collection of durable, transposable dispositions) and field (a set of positions and relationships defined by the possession and interaction of different amounts of economic, social and cultural capital). Habitus and field are mutually constituting, a point of considerable practical importance to the way that the actions of tutors, students and institutions are studied and understood. Put more concretely, our starting assumption was that learning would depend upon the complex interactions between the following factors, amongst others (sketched schematically after the list):

·  Students’ positions, dispositions and actions, influenced by their previous life histories

·  Tutors’ positions, dispositions and actions, influenced by their previous life histories

·  The nature of the subject, including broader issues of ‘disciplinary identity’ and status, as well as specifics such as syllabus, assessment requirements, links with external agencies or employers, etc.

·  College management approaches and procedures, together with organisational structures, site location and resources

·  National policies towards FE, including qualification, funding and inspection regimes

·  Wider social, economic and political contexts, which inter-penetrate all of the other points.
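Purely as a schematic illustration of how these layers nest and interact, the following sketch (in Python; the class and field names are our own invention, not categories used by the TLC) renders the list above as a simple data model. It is a visual aid under those assumptions, not part of the project’s methodology.

```python
# Schematic sketch only: one way to picture the interacting layers assumed to
# shape learning in any one site. All names here are our own invention.

from dataclasses import dataclass
from typing import List

@dataclass
class Person:
    """A student or tutor: positions, dispositions and actions,
    influenced by their previous life history."""
    positions: List[str]
    dispositions: List[str]

@dataclass
class LearningSite:
    students: List[Person]
    tutors: List[Person]
    subject: str          # disciplinary identity and status; syllabus, assessment, external links
    college_context: str  # management approaches, organisational structures, location, resources
    national_policy: str  # qualification, funding and inspection regimes
    wider_context: str    # social, economic and political setting, inter-penetrating the rest
```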

To organise data collection, we adopted nested case studies. Four case study FE colleges were selected and the design of the project negotiated with their principals and key staff. Each college was paired with one of the four host universities in the project. Within each college, four specific sites of learning and teaching were identified, providing 16 sites across the whole project. By ‘site’ we meant a location where tutor(s) and students worked together on learning. The sample is not representative of the whole of FE provision, but it does provide a wide enough range to allow either significant variations between sites, or significant common issues across them, to be identified. The main tutor in each site was funded for two hours a week to participate in the research. These ‘participating tutors’ attended regular meetings and workshops with their host university/college research team, and were encouraged to keep reflective log books or diaries and to observe each other’s sites. They were also encouraged to innovate as the research progressed, and where new approaches were attempted the research provided on-going evidence of what happened.

In addition to the participating tutors, each local research team has three core members: one of the project directors, nominally for one day per week; a half-time academic researcher, employed by the university; and an FE practitioner/researcher, seconded for two days a week to work on the project. In addition to working with the participating tutors, these core researchers interview about six students per site twice a year, using semi-structured interviews, and observe the practice in each site on regular occasions. Observations are unstructured. Participating tutors are also regularly interviewed, and given periodic feedback about what the research shows about their particular site and about more general issues across the project as a whole. They also keep detailed diaries for the duration of the project. In addition to these 16 qualitative case studies (which eventually became 17, as one participating tutor left and was replaced by another in a different site), the TLC also uses regular questionnaire sweeps to generate a broader picture of the sites. One director and one part-time researcher work exclusively on this part of the project. In this paper, it is the qualitative work that is addressed. It should be noted that this is also a project with a relatively long time frame – four years. This means that we can track changes effectively, but it is itself one of the reasons for the volume and complexity of the data generated.
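To give a concrete sense of the volume of qualitative data this design generates, here is a minimal back-of-envelope sketch (in Python, purely illustrative and no part of the project’s own methods). The counts for colleges, sites, students and interview frequency come from the account above; the assumption of roughly three years of student interviewing within the four-year project is ours.

```python
# Back-of-envelope arithmetic for the nested case-study design.
# Figures marked 'assumed' are our own assumptions, not the project's.

COLLEGES = 4             # case study FE colleges, one per host university
SITES_PER_COLLEGE = 4    # learning sites identified within each college
STUDENTS_PER_SITE = 6    # students interviewed per site
INTERVIEWS_PER_YEAR = 2  # semi-structured interviews, twice a year
FIELDWORK_YEARS = 3      # assumed interviewing window within the 4-year project

sites = COLLEGES * SITES_PER_COLLEGE
student_interviews = sites * STUDENTS_PER_SITE * INTERVIEWS_PER_YEAR * FIELDWORK_YEARS

print(f"{sites} sites, roughly {student_interviews} student interviews")
# -> 16 sites, roughly 576 student interviews: consistent with the figure of
#    'about 600' reported in the next paragraph.
```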

In approaching the analysis of this data, the TLC faced some difficult problems, each of which is also a strength. Firstly, the sheer volume of data, which gives us such rich and detailed pictures of learning, is overwhelming. By the conclusion we will have about 600 student interviews, 100 tutor interviews, 16 log books, between 500 and 1,000 sets of observation notes, notes from local team meetings and discussions, interviews with a small number of college managers, etc. Our second problem came from the size and diversity of the core research team – 14 people, all part-time, with different professional roots and identities, split across four geographically distant partnerships. This gave a valuable depth of understanding to all our work, as sometimes contrasting perspectives were blended. However, it made the core team difficult to manage, and there were tensions when some members felt that their perspectives or needs were marginalised. All team members had to balance their TLC activity against the rest of their working and family lives.

The Problem of Method

As qualitative research progressed, early approaches to methodology were rooted, knowingly or unknowingly, in standards previously set for quantitative work (Denzin and Lincoln, 2000). That is, much of the methods literature attempted to address the holy trinity of validity, reliability and generalisability, especially validity. For many, the adoption of a rigorous method was seen as the main way to preserve objectivity: that is, to establish the credibility of qualitative findings through an assurance that they represented a true picture of the subject being researched, rather than a biased personal perspective of the researcher. Either explicitly or implicitly, these approaches assumed a realist position, namely, that there is a real world out there, separate from the researcher, and that the job of researchers is to discover what that real world is like, through rigorous objective method. Thus Glaser and Strauss (1967), and later Strauss and Corbin (1998), developed grounded theory, using the method of constant comparison to arrive at a single, true understanding, as part of bottom-up theory construction. We are not concerned here with this on-going debate in its broadest sense, though some of us have addressed these issues elsewhere (Biesta and Burbules, 2003; Hodkinson, 2004). Here we focus explicitly on analysis.

Most of these realist approaches to the analysis of qualitative data take the term analysis literally. Analysis means to examine in detail to discover meaning, but also to break down into components or essential features. That is, the almost standard way of approaching realist analysis is to break down interview transcripts, observation notes etc. into standard component parts, for example through coding. One argument is that this forces the researcher’s attention to detail, helping to avoid being biased by first impressions. Often, as in grounded theory (Strauss and Corbin, 1998) or more eclectic analysis guides, such as Miles and Huberman (1994), this initial coding is followed by one or more analytical algorithms – set and predetermined procedural protocols that work on the coded data to develop patterns and produce both understanding and the single most plausible and verifiable truth. Such algorithms, it is claimed, tame the subjectivity of the researcher, allowing the data to speak almost for itself. The skill of the researcher is to choose the most appropriate algorithms, to encourage this to happen. It also helps, from this perspective, if more than one researcher is involved. If, say, three researchers all apply the same appropriate algorithms, and then agree on the correct interpretation of the results, the outcomes are arguably as objective as qualitative research can be.
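For contrast with the approach we actually took, the toy sketch below (in Python; the codes and transcript segments are invented, and this is emphatically not the TLC’s method) illustrates the flavour of such a procedural workflow: two researchers independently code the same transcript segments, and their agreement is then quantified with a standard statistic such as Cohen’s kappa.

```python
# Toy illustration of a procedural, coding-based workflow: independent coding
# of the same transcript segments, followed by a chance-corrected agreement
# statistic (Cohen's kappa). All codes and data below are invented.

from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders over the same segments."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(codes_a) | set(codes_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical codes applied to eight transcript segments by two coders:
coder_1 = ["motivation", "assessment", "peer", "motivation",
           "tutor", "peer", "assessment", "tutor"]
coder_2 = ["motivation", "assessment", "peer", "tutor",
           "tutor", "peer", "assessment", "assessment"]

print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # -> kappa = 0.67
```

On the realist view, a high kappa is read as evidence that the coding scheme, rather than any individual coder’s subjectivity, is doing the work. It is precisely this algorithmic taming of subjectivity that the alternative approach, discussed next, calls into question.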

However, there is an alternative type of approach, which is interpretative rather than realist. Within this second approach, rather than a biased individual whose subjectivity is to be tamed, the researcher is seen as a person who actively constructs a meaningful story out of the data, maximising the benefits of their existing experience and insights. The emphasis here is on research as construction, rather than discovery (Smith and Deemer, 2000). Thus, Wolcott (1994) focuses on transforming qualitative data – changing it into something meaningful. Moustakas (1990) writes of what he terms heuristic analysis – making sense of data through immersion, then standing back and allowing the sub-conscious to work. As Colley (2001) suggests, synthesising data into a constructed story may be a better way of describing what is involved than the term analysis. The TLC directors were more closely aligned with this interpretivist approach than with the alternative of coding etc. Our decision to use Bourdieu’s thinking to structure the research project signalled this standpoint. Implicit in that decision was the understanding that the research was framed within a particular way of viewing the world, even if, as we have always asserted, we wanted to use the research to challenge our pre-assumptions. We return to this broader debate about methodology at the end of this paper. Before doing so, we focus on the TLC’s approach to the two practical problems we faced.

Analysing or Synthesising the TLC Data

We begin this section hypothetically. Had the TLC team been wedded to a more conventional procedural analysis of data, the scale and complexity of the project would have offered amazing hope, underpinned by practical impossibility. The hope would come from the size and diversity of the research team. If we could get 14 people, including five leading academics, five professional researchers and four seconded FE practitioners to completely agree about every finding generated from such a large and diverse sampling base, then those agreed truths would have been robust indeed. The impossibility arose from the very things that would have demonstrated our success.

A diverse and dispersed research team is arguably like any other similarly sized diverse and dispersed team. Agreement is seldom total, and shared understandings have to be worked for. At different times, all of us have felt slightly out of step with what has been agreed, and occasionally one or more of us has felt significantly out of step. We had to work hard not to resemble the committee that was asked to design a horse but came up with a camel. Agreements were often compromises, frequently led by those with most power. In general, we found that we could agree most major, broad-brush findings more easily than the detail. Often, that detail had to be taken partly on trust because, in every case, those researchers who had collected the data, together with their two close geographical colleagues, could determine what the details meant in ways that the others lacked the evidence to challenge.