UACE Response to the RAE Consultation

The Association welcomes the opportunity to contribute to the sector’s thinking on research assessment and has organised this response through the chair and membership of the UACE Research Sub-Committee.

Our aim is to highlight those issues of particular relevance to Continuing Education research, which was represented in the last exercise through the membership of a sub-panel reporting to the main Education panel. However, a number of the points made below are also of significance sector-wide.

Quality in research and how judgements of quality are made

1.

We welcome the invitation to contribute to thinking about what is meant by quality in research. Our view is that quality cannot be measured against a single dimension of intellectual excellence. While the 2001 exercise did not ignore research impact, it may well have been viewed as an additional consideration, unrelated to the kind of research undertaken. If we accept that research quality ought to be judged in terms of characteristics such as its originality, advancement of knowledge, methodological strength and scholarly rigour, the question remains of what counts as evidence of each of these qualities. If we bring the issue of ‘fitness for purpose’ into the framework, we might review all these core elements of research quality in relation to the purpose and impact of the research concerned, not simply in relation to some abstract definition of what counts as quality in each of these areas.

2.

Quality should also be measured in terms of collaborative approaches, and the relationships developed between researchers and users of the research. Continuing Education researchers often work closely with practitioners, and research feeds back directly into practice. Practitioners, research users and, on occasion, the subjects of research become involved in research processes in ways that benefit the quality of their work and learning. These features of Continuing Education research should be recognised as indicators of quality, and given greater weight than they were perceived to carry in the last exercise. The valuation of research capacity building in ESRC/TLRP Phase III is both welcome and an approach from which the RAE should learn.

3.

We would therefore welcome greater emphasis on collaboration and impact on practice as important measures of quality in the next RAE. In some cases, the practice benefiting from the research is the work of professionals in areas such as nursing, guidance, training and related contexts. There is also, for many Continuing Education researchers, an obvious and positive impact on teaching quality arising from their research, whether in the Further Education, Adult or HE sector. Public sector funding of research should surely take a positive stance with regard to evidence of impact on teaching quality as a direct result of the research submitted. We ought to encourage researchers by including a criterion for quality that specifically recognises positive impact on teaching quality, through collaboration and research capacity building among the community that uses the research. Some in these communities also benefit from developing their own research capacity, where they are moving towards a research career from a base in practice.

4.

Our primary concern is that the assumptions we make about what constitutes quality in research should recognise the different and valid traditions across the range of disciplinary approaches to research. Continuing Education brings together both disciplinary and interdisciplinary research. Within the field, there are researchers who use either quantitative or qualitative methods, and a larger number prepared to use elements of both. On the whole, the approach is most often that of applied research, drawing on the disciplines of the social sciences and the arts.

5.

While some researchers may not need large project funding, many work within departments that are continually engaged in trying to bring in external funding. This is expensive in staff time, and often disappointing in terms of grants achieved. Competition for ESRC funding, for example, is fierce, with many alpha-rated projects going unfunded year on year. Inevitably, quantitative measures based on the scale of external funds will always fit the research context of the sciences rather better than that of Continuing Education.

6.

While we accept the limited use of quantitative indicators, we strongly support the need for continued peer assessment, based on judgements about the intrinsic value of the research undertaken. Such judgements of quality should, in our view, give value to applied research and to research which generates knowledge in practice contexts, as well as to research that generates discipline-based knowledge. Gibbons' Mode 2 knowledge generation process still seems to be accorded lower status than Mode 1, which is regrettable given the closer link between Mode 2 and practice improvement.

7.

Taking these considerations of impact on teaching further, the consultation might take teaching innovation and development into account, rather than exclude it from consideration as appears to be the case from the consultation document. In this context, we recommend that the review look again at the Dearing Report's recommendation that a proportion of research funding in future be allocated to institutions which specialise in innovative teaching and development, particularly in the Widening Participation area. The rationale for this would be that, to ensure excellence in teaching quality, it is essential to provide resources for scholarly research directly supporting that teaching. This point is relevant to the whole sector, and is therefore an issue for all units of assessment, not only Continuing Education.

8.

Summing up our stance with regard to the two issues of expert review and algorithm approaches:

a) We strongly support continued use of expert review, involving peers and research users in combination. We view the inputs of the Continuing Education Sub-Panel as essential to retain in any future exercise.

b) Assessment using quantitative indicators and algorithmic methods alone is not acceptable. Quantitative indicators are, however, an important balancing feature, and they enable broad comparisons across the units being assessed, which can help as part of a wider process.

c) We favour assessment based on retrospective judgement, including evidence of a strong research community and of training for new researchers as part of the judgement. In that sense, retrospective review provides evidence of the likely future performance of the unit, and we see no other feasible way of making judgements of prospective research quality.

The frequency and process of research assessment

9.

We strongly support efforts to produce a research assessment system that combines transparency and fairness to individuals and institutions with processes that are less rather than more burdensome. The need to protect research time for doing research, rather than presenting it for assessment, leads us, for example, to the view that self-assessment would be a retrograde step. The burden of a system based wholly on self-assessment would be greater than that of the existing system, and the likelihood of the outcome achieving credibility also seems remote.

10.

There is already a degree of self-evaluation built into the RAE documentation through the RA5 narrative. The value of such self-review is, of course, its encouragement to reflect on strengths and weaknesses, with a view to developing change and improvement in the future. However, if self-assessment were directly linked to resourcing, huge efforts would shift into the self-assessment process and away from research itself. There is room for game playing in any system, even where the process puts in place independent measures of research quality. The evidence of its taking place in every exercise, including the last one, does not provide a positive starting point for moving towards greater emphasis on self-assessment.

11.

What might be considered is the ordering of the sections of the Unit submission. The RA5 and RA6 statements might precede the quantitative indicators, with a view to providing the context within which the indicators are read and interpreted. The current sequencing may prioritise the quantitative, and give unintended signals about the lesser importance of the qualitative commentary.

12.

We have a concern that the present process does not do enough to foster the improvement of those receiving ratings lower than a 4. More could be done to provide in-depth feedback that would help units where there is great promise, and valuable research is being achieved on relatively modest funding. Indeed, we would welcome greater attention to value for money in reaching quality judgements. We should surely weigh the scale of the funding received over the period reviewed, as well as the quantity and quality of the research outcomes, rather than judge those outcomes with no regard for whether they represent a good or reasonable return on funding by comparison with other units submitting.

13.

The frequency of assessment might be changed, with benefit in terms of the administrative burden. There might be a rolling programme of reviews of grouped areas, instead of the whole-system review as at present. Whether or not this would indeed reduce the burden on central administrative support within universities would need exploration. However, a cyclic approach would spread the load on support services and allow different sections to specialise: those handling the RAE would have a continuing function, while others could focus on other aspects in support of overall activity. This would avoid the diversionary peaking of activity in the current system and might enable greater efficiency in the use of resources. The impact of funding decisions on institutional resources overall would also be staggered over several years rather than concentrated in one as now, which might be beneficial at institutional level.

14.

The consultation document raises the issue of subjects being grouped for assessment purposes. There might be a benefit in greater support for interdisciplinarity if CE research were grouped with a cluster of cognate disciplines, and we would certainly wish to see CE grouped within a broadly Social Science-related cluster. The attendant risk is that one particular view of research quality comes to dominate quality judgements, with the result that different research paradigms and emphases are accommodated even less.

15.

In summary, therefore, there are improvements that can be made to the present system, notwithstanding that any system will fall short of perfection and have some vulnerabilities. We welcome the opportunity to contribute to HEFCE's thinking, and hope to take forward the debate through the various meetings and fora planned for the Consultation.

Members of the UACE Research Sub-Committee

Chair: Professor Mary Thorpe

The Institute of Educational Technology

The Open University

Email: