
Imaginary Epistemic Objects: Educational care planning in interagency work

Steven D. Brown

with

Harry Daniels, Anne Edwards, Jane Leadbetter, Deirdre Martin,

David Middleton, Paul Warmington

Paper presented at ‘Public proofs: Science, technology & democracy’, 4S/EASST meeting, Paris, August 2004

This paper is drawn from a project which analyses work-based professional learning amongst practitioners in UK Local Education Authorities as they shift towards an overarching framework of multi-agency practice. This shift is an attempt to engineer what has been referred to as ‘joined-up responses’ amongst the diverse range of practitioners involved in the provision of services for children and families who are deemed to be ‘at risk’. These practitioners may include – but are not limited to – social workers, child protection officers, police services, educational providers, housing officers, General Practitioners and Speech and Language Therapists. The strategy – as outlined in a government consultation document (i.e. Green Paper), Every Child Matters, and in legislation currently emerging in the UK – is that these previously distinct services should not merely be jointly co-ordinated but should routinely work together, with the provision of services cutting across established professional boundaries.

The impetus for this significant shift in service provision at local authority level was the report into the murder of the eight-year-old child Victoria Climbié, following the prosecution of her family carers for the abuse and neglect which resulted in her death. The significance of this case, beyond the obvious tragedy, is that rather than being hidden from view, Victoria was ‘known to’ – that is, registered as a ‘case’ by – twelve separate social, medical and juridical services, including specialist child protection teams. The report concluded that basic failings in information sharing amongst frontline staff meant that none of these service providers had established the full details of the case, and hence none had been aware of how ‘at risk’ the child was. In particular, the report points to the decentralised storage of data on children at local authority level, and the accompanying distribution of discretion around such cases amongst groups of professionals. Hence appropriate action was not taken in time to prevent Victoria’s murder by her supposed carers.

My focus today is not the viability of this shift towards interagency working that is currently taking place within the UK. What I want to analyse instead is the epistemic field in which the shift is occurring. That is, I want to understand something of how ‘the child’ who is ‘at risk’ becomes a conceptual object for this array of service providers. To do this I will first look at the Green Paper Every Child Matters, wherein the rationale for interagency working is mapped out, and second turn to one of the methods (Educational Care Plans) by which children within the ‘at risk’ spectrum are currently registered as cases.

The argument I want to make is that there are two separate but interlinked problems in how children become ‘known to’ service providers. There is initially the concern with constructing ‘children’ as an abstract, default category around which ‘universal services’ must be provided, by right. I argue that the construction of such a category is not achieved simply by the legislative implementation of a discourse of ‘children’s rights’ but also requires the deployment of a range of technics which make children knowable in principle. Following the theme of the symposium I will refer to these as ‘epistemic technologies’. However, the peculiar nature of these technologies in this instance is that what is produced is a category of beings so abstract, so emptied of meaning, and ultimately so unstable and self-contradictory, that a further set of technics is required to recode the category in such a way that individual, particularised ‘children’ can be created as ‘cases’.

Children as epistemic objects

Already we have some sense of what it means for a child to be an object of professional concern. The individual child who is ‘known to’ professional welfare services may be deemed to be ‘at risk’ of failure, neglect or abuse. In serious cases the child may become ‘looked after’ by local authorities. These categories constitute a basic set of transitions in the degree and scope of the interventions welfare authorities make in individual cases. They establish what might be seen as a series of progressive deviations from the normative, unmarked category of the child who is not (as yet) ‘known to’ welfare services. The Every Child Matters document depicts this as a pyramid of cases:

At the apex of the triangle is the set of cases of children whom welfare services ultimately fail to protect. This is the most serious, the most tragic deviation from the normative. Immediately below this is a significant body of cases (25,700) where children are registered as in need of protection, and whose welfare is the subject of continuous, ongoing intervention by a range of social, medical, educational and juridical services. Below this are children who are ‘looked after’ by local authorities, but whose needs may be less extensive (i.e. they may not be actively ‘at risk’). The following two tiers represent children who may have a variety of needs, including housing needs or special educational needs.

The least interesting category, apparently, is then the very bottom tier, which depicts the estimated 11 million children in the UK. If we understand the pyramid as depicting the proportion of children who are likely to be ‘known to’ welfare services to varying degrees, then we might say that the bottom tier constitutes the overall pool of cases which may be known in potentia. That is, a given child who would fall only into this tier would constitute a case that is ‘as yet unknown’ to local authorities. The idea of being a case in potentia is writ large in the Every Child Matters document. One of its principal calls is that:

We need to ensure we properly protect children at risk within a framework of universal services which support every child to develop their full potential and which aim to prevent negative outcomes. (ECM, 2003: 6)

In this sense, becoming ‘known to’ local authorities is a matter of degree. All children are potentially ‘knowable’ and possible objects of the ‘universal services’ which the document promotes. The justification for this is that intervention may be required should the child ‘fail’ to ‘develop their full potential’. So the overarching need which the services are based around is essentially defined as failure to attain predicted markers of development.

The concept of development is then essential to how the child becomes an epistemic object. Child development as a professional discourse may be simply understood as a progressive set of measurable markers of ability and attainment which constitute normative, temporally structured evaluative and/or statistical criteria. In recent years this discourse, understood in this fashion, has been subject to relentless deconstruction, notably by critical psychologists who typically view developmentalism as a set of strategies by which childhood becomes the manageable target of power relations (see Burman, 1994; Morss, 1995; Stainton Rogers & Stainton Rogers, 1992).

This is indeed transparently the case in Every Child Matters. However, rather than question the extent to which ‘development’ can be taken to be a meaningful concept for understanding the lives and experiences of children, I want instead to note that in terms of how children become ‘known to’ local authorities, development is negatively defined. That is, failure to develop is the grounds which warrant concern. Failure is itself established as a deviation from a normatively defined trajectory of attainment. This trajectory – what you might call the lifecourse of the ‘universal child’ (although universal is of course nationally defined) – is a set of formalisms, or categories, which refer to no child in particular, but rather to what might be seen as generally possible.

This gives rise to the following paradox. To the extent that a child does not deviate from normative developmental criteria, they remain to a degree ‘unknown’ to local authorities. Indeed, the more ‘successful’ a child is in matching these formal markers, the more progressively ‘unknowable’ they become, until they eventually exit the field of epistemic concern altogether at age 19. Unknowability here means not becoming a case, being unspecified, as yet unmarked. Failure to attain normative markers, by contrast, through a series of progressive deviations from the developmental trajectory, renders the child not merely a case, but also ‘known to’ more authorities. Thus ‘knowing’ is here the product of failure. Indeed one might say that in order for a child to be known at all it is essential that it be seen to fail in some sense – that is, to register a deviation from one or another formal marker in the developmental trajectory. And if the project is the provision of a framework of universal services in which all children may be potential cases, then it ultimately becomes critical to expand the range of markers such that every child is in some sense ‘failing’. Consider for example the following list of practitioners that Every Child Matters (2003: p.62) envisages may come to be involved in a given case:

  • Health visitors
  • GPs
  • Social workers
  • Education welfare officers
  • Youth and community workers
  • Connexions personal advisors
  • Education psychologists
  • Children’s mental health professionals
  • Speech and language therapists and other allied health professionals
  • Young people’s substance misuse workers
  • Learning mentors and school support staff
  • School nurses
  • Home visitors, volunteers and mentors
  • Statutory and voluntary homelessness agencies

The field of potential failure, for registering ‘need’, for ‘knowability’, is then extensive and potentially all-encompassing.

Epistemic Technologies

As Oleson (2004) defines the term, epistemic technologies are technics which make theories moveable in such a way that results obtained in any domain to which the technology can be applied can be certified without recourse to the original theoretical assumptions. Described in this way, epistemic technology sounds rather close to the well-established notion of ‘black-boxing’ (see Latour & Woolgar, 1986, amongst others): that is, where a set of assumptions and decisions becomes embedded in a self-contained technical matrix in such a way that the cost of unpacking those assumptions becomes prohibitive so long as the overall technical package is deemed to ‘work’.

I want to suggest some other characteristics which might render epistemic technology a subtly distinct analytic term. Take the notion of ‘Key Stage’ examinations, which pupils in the UK take at regular intervals during their school ‘career’. These standardised examinations, which cover the entire curriculum but concentrate in particular on ‘key skills’, do not contribute to final leaving qualifications. Their relevance is strictly in terms of performance monitoring. However, Every Child Matters hails improvement in Key Stage exams as a ‘marker’ of addressing children’s needs. Key Stage exams are certainly black boxes – if by that what is meant is a set of technics which embed certain assumptions about normative development and enable the evaluation of a given child. But the kinds of assumptions they embed are not naturalistic, in the sense of being claims about the proper order of things or the ‘true’ capacities of children. They are instead a mixture of aspirations (what we want children to be able to do), moral imperatives (what we think children ought to be doing for their and our benefit) and political-economic rationalisations (what kind of curriculum is teachable, assessable and manageable given existing resource constraints).

In this sense I would see Key Stage examinations as ‘open’ black boxes. It does not require too much effort to contest the assumptions which are embedded within them (and indeed these assumptions are routinely subjected to this kind of debate, not least by teachers and parents). Arguably, what holds Key Stage examinations together as a piece of technics is the moral force which they acquire when pupil evaluation comes to be seen as a way of ensuring that children ‘reach their potential’. Moreover, the generalised application of Key Stage exams becomes a way of reinforcing the notion of a ‘universal child’ – the formal, abstract set of developmental criteria. However, since this universal child is an abstraction, the progress that is measured can only be in some sense ‘aspirational’. Indeed, the Key Stage tests offer results that are comparative against average benchmarks.

If Key Stage tests are good exemplars, then we may say that epistemic technologies have the following properties. They are open black boxes. They embed moral as well as technical assumptions. They formalise aspirations. They produce results that can only be judged against abstract measures (such as normative statistical population criteria). They are inherently universalistic: that is, they promote and technically realise an idealised version of the object they purport to measure. And, finally, they establish generalised equivalence within a given epistemic field, such as educational attainment.

Universal services

Every Child Matters is haunted by the spectre of the false negative. The child who was denied the services and interventions they desperately needed. The child who was ‘known to’ welfare providers, but whose case was closed, or who otherwise failed to properly register. The child who was let down, who slipped through the net. In response to this the document establishes a universal model of the child as an epistemic object embedded within a range of universal services. What stands between the object and services is a web of potential failures, of deviations from an abstract normative trajectory.

For this assemblage to function coherently, it is then necessary to 1) establish the child as a series of universalised aspirations; 2) build these aspirations into a range of epistemic technologies which render any given child as a deviation from the abstract ideal; 3) co-ordinate the applications of these technologies in such a way that they produce a composite set of readings; and 4) translate backwards from the normative to render the set of deviations of an ‘at risk’ child as a particular case. In other words, Every Child Matters demands an assemblage that is able to move fluidly from the particular to the universal and back to the particular.

To get from the universal to the particular appears to be relatively unproblematic – at least in principle. Every Child Matters maps out two distinct terrains where this can be achieved. First, there is the physical plane of geographic mobility. Schools, for instance, will become ‘extended schools – acting as the hub for services for children, families and other members of the community’ (p.29). Since children are legally obliged to attend school, these become obvious sites for the co-ordination of other services. In classic Actor-Network Theory terms (cf. Callon, 1986) we might call schools well-entrenched ‘obligatory points of passage’. It is around these points of passage that services may be usefully co-located. Or as the document puts it:

Embedding targeted services within universal settings can ensure more rapid support without the delay of formal referral, and enable frontline professionals to seek help and advice. Developing networks across universal and specialist professionals can strengthen relationships and trust. (p.63)

Here we see the moral force of epistemic technology. It is assumed that, since the object of these services is a universal child who is moreover always ‘in need’, the problem is therefore one of co-ordinating those services with one another. ‘Formal referral’ is then constructed as an obstacle to be overcome rather than a negotiation with the potential service user. Tellingly, the ‘relationships and trust’ which require strengthening are between professionals, rather than with service users themselves, who have already been defined as ‘in need’ of the services on offer by way of the web of failure spun out from the application of multiple epistemic technologies.

Second, there is a ‘virtual’ plane of data co-ordination which is broadly related to the first. This takes the form of ‘information hubs’ such as the following:

An information hub is then a combined case record for every child in a given local authority. To this end every child will be given a standardised numeric coding at birth. The associated case notes will cover basic details, along with ‘flags’ and ‘links’ to specialist information collected by different professional groups (e.g. social services, Youth Offending Teams).
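By way of a purely illustrative sketch (the field names, agencies and identifier format below are my own hypothetical choices, not specifications drawn from Every Child Matters), such a combined record might be imagined along the following lines, in Python:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class HubRecord:
        """Hypothetical combined case record for one child in an information hub."""
        child_id: str                  # standardised numeric coding assigned at birth
        basic_details: Dict[str, str]  # e.g. name, date of birth, address, school
        flags: List[str] = field(default_factory=list)       # markers raised by individual services
        links: Dict[str, str] = field(default_factory=dict)  # service -> reference into that service's own data set

    # The hub would hold one such record per child; the specialist information
    # itself stays with the originating service, reachable only through the links.
    record = HubRecord(
        child_id="0000000001",
        basic_details={"name": "(withheld)", "date_of_birth": "2000-01-01"},
    )
    record.flags.append("known to social services")
    record.links["youth_offending_team"] = "YOT case reference"

The point of the sketch is simply that the hub itself holds very little: it is a point of co-ordination whose value depends entirely on the flags and links out to the data sets of other services, which is precisely where the compatibility problems discussed below arise.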

The barriers to establishing these hubs are mostly technical:

Such information systems will be based on national data standards to enable the exchange of information between local authorities and partner agencies, and capable of interaction with other data sets. (p.53)

It is worth dwelling a little on this statement. The ‘national data standards’ to which it refers could be legislative standards, technical standards or software protocols. In any case it is not at all clear that they exist in a form which could support the ‘rolling out’ (as it is known) of such hubs. The hubs must further be capable of ‘interaction with other data sets’. Since the diagram depicts at least 13 other ‘nodes’, we must assume that this means interaction with the range of data sets – past, present and under development – held within each node. The hub is then not merely a point of co-ordination; it is a universal exchanger of information that is able to recover all of the past and anticipate all likely future developments. It will be the ultimate gold standard around which all welfare service data will be managed.

The dream of seamless forwards and backwards compatibility which this implies seems unrealisable. As Bowker & Star’s (1999) instructive analysis of the International Classification of Diseases demonstrates, successive iterations of a classificatory scheme such as a database tend to make it difficult to recover the content of previous categories in such a way that given cases can be effectively recoded. Put simply, databases suffer from bad memory as they pass through successive iterations. The introduction of a further hub, through which these existing, evolving databases would be obliged to communicate, would then run the risk of subjecting the individual databases to a process of what Bowker & Star call ‘clearance’ – wholesale category revision rendering the past inflexible to future re-classification and analysis.