Response to the joint funding bodies’ review of research assessment

School of Psychology, Birkbeck College

Priorities:

We feel that the next RAE should be (among other things):

  • Fair to individuals and institutions
  • Transparent
  • Administratively efficient

Approaches to assessment:

We are against two of the four approaches:

Self-assessment:

  • is bound to provoke a large amount of hard-to-evaluate narrative
  • will presumably lead to differing non-commensurable criteria being applied by different institutions
  • evaluating only a subset of all assessments seems unfair

Historical ratings:

  • Appear inherently conservative and are bound to cement the status quo
  • Do not encourage institutions to ‘fast track’ research quality improvements

We outline here two approaches we think worthy of consideration:

1. Expert panels:

This would run much as in the last exercise. However, we would argue that panels for each discipline should be sufficiently broad to encompass mainstream as well as other approaches.

2. Algorithms:

The RAE process itself could be based exclusively on specific algorithms devised by expert panels for each discipline (see below). Input would consist of quantifiable and externally verifiable data, and the research assessment outcome would be derived in a transparent way. The criteria used, weighting factors, etc. would be determined through interaction between the expert panels and the disciplines (see below), and all criteria and procedures would be made fully public prior to the assessment. This approach would maximize the transparency and efficiency of the evaluation process proper, since it would consist principally of the application of agreed algorithms. The focus of the RAE process would therefore shift from the assessment phase to the phase in which assessment criteria are agreed (see below).

In this model the expert panels would be mainly responsible for devising the algorithms on which all evaluation is based. Panels would convene before the RAE proper, and should also include professional programme evaluation experts. Specific criteria and algorithms suggested by the panels should be made public, and developed and/or changed in interaction with the disciplines to be evaluated. Panels for each discipline should be sufficiently broad to encompass mainstream as well as other approaches, and different sets of criteria should be applied to different sub-disciplines (e.g., experimental vs. qualitative research) where appropriate. Discussing and agreeing on these more pluralistic and discriminating criteria, and on the weighting to be given to the different elements contributing to the evaluation, is likely to be a major aspect of the work of the expert panel and of its interactions with stakeholders. A minimal sketch of how such an agreed algorithm might operate is given below.
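To make the idea concrete, the sketch below illustrates how an agreed algorithm could combine quantifiable, externally verifiable inputs into a rating using published weights and cut-off points. It is a hypothetical illustration only, not a proposed specification: the criteria names, weighting factors, and cut-off points are invented for the example, and in practice would be agreed between the expert panel and the discipline and published in full in advance.

```python
# Hypothetical illustration of an algorithm-based assessment.
# All criteria, weights, and cut-off points are invented examples; in practice
# they would be agreed by the expert panel in consultation with the discipline
# and made fully public before the assessment.

# Published weighting factors for quantifiable, externally verifiable inputs.
WEIGHTS = {
    "refereed_outputs_per_fte": 0.4,
    "external_income_per_fte": 0.3,
    "research_students_per_fte": 0.2,
    "esteem_indicators": 0.1,
}

# Published cut-off points separating ratings (score thresholds, descending).
RATING_CUTOFFS = [
    (0.80, "5*"),
    (0.65, "5"),
    (0.50, "4"),
    (0.35, "3a"),
    (0.20, "3b"),
    (0.10, "2"),
]


def score(inputs: dict[str, float]) -> float:
    """Weighted sum of normalised inputs (each expected in the range 0-1)."""
    return sum(WEIGHTS[criterion] * inputs.get(criterion, 0.0) for criterion in WEIGHTS)


def rating(inputs: dict[str, float]) -> str:
    """Map a unit's score onto a published rating band."""
    s = score(inputs)
    for cutoff, band in RATING_CUTOFFS:
        if s >= cutoff:
            return band
    return "1"


if __name__ == "__main__":
    # Example submission for one unit of assessment (invented figures).
    unit = {
        "refereed_outputs_per_fte": 0.7,
        "external_income_per_fte": 0.5,
        "research_students_per_fte": 0.6,
        "esteem_indicators": 0.4,
    }
    print(f"score = {score(unit):.2f}, rating = {rating(unit)}")
```

Because both the weights and the cut-off points are fixed and published beforehand, any institution could reproduce its own rating from its submitted data, which is the transparency gain the approach is intended to deliver.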

Other issues:

  • In the interest of transparency and fairness, and to prevent ‘games playing’, we suggest that all academic staff members should generally be required to be entered in the next RAE.
  • Serious consideration should be given to making the unit of analysis research groups rather than individuals. This would probably reduce the tendency to engage in a ‘transfer market’ of research-active staff and is also likely to be a more stable measure of the continuing research performance of a Department. (This was a controversial point in our own discussion: obvious drawbacks are that collaborations across departments and universities may be discouraged, and that not all individual members of a Department will necessarily be members of a genuine research group.)
  • The criteria set for the award of specific ratings, including the cut-off points separating ratings, should be made public comprehensively and in advance, especially if an expert panel approach similar to the previous exercise is adopted.

Issues specific to Psychology:

Psychology is a particularly pluralistic academic discipline, with one dominant epistemological and methodological base (positivism and experimental methods) but with many others making significant contributions. In addition, the range of reputable work in psychology is very varied, from ‘classical’ experimental studies to clinical case investigations, ethnographic and other qualitative work, and major theoretical studies. Outputs are not restricted to peer-refereed journals, so it is important that the measurement criteria enable adequate evaluation of (and appropriate weighting for) high-quality books. If the assessment process relies more heavily on a previously agreed algorithm, as suggested above, this issue needs addressing.

Even if a primarily expert-panel-based system is maintained, more representatives from outside the experimental tradition need to be included on the psychology panel, and a more discriminating evaluation of a wider range of outputs is necessary. We think the psychology panel should be opened up to include two or three members representing alternative research perspectives, e.g. qualitative and counselling research.