REVIEW OF RESEARCH ASSESSMENT – LEEDS METROPOLITAN UNIVERSITY
Group 1: Expert Review

The University agrees with the proposed use of expert review, provided that panels are representative of all the types of institution participating in the exercise and that one of their evaluation criteria is quantitative and qualitative improvement in an institution's submission.

a) Assessment

An assessment which reflects both past achievements, and current and future development plans would help to ensure that funds are allocated to units which are developing and continuing to make progress. It would also ensure that enterprising institutions could invest in developing their research profiles in the expectation of some financial support.

b) Data to be considered

Assessors should consider much the same objective data as in the current scheme, i.e. outputs in the form of publications, patents, etc.; externally generated research funding; and research award completions. Thought should be given to the weighting of established academic journals relative to professional journals and new media such as e-journals, which might make research results available to a wider audience.

c) Level of assessment

Assessment should be made at no higher level than the broad but coherent subject areas used in the present system. Making the assessment across the whole of an institution, or a significant part of it, militates against universities which are not primarily research led but have focused on attaining research excellence in very specific areas. Some consideration should be given to the possibility of an institution submitting single individuals or very small groups in areas where it can be demonstrated that excellence in research is not dependent on critical mass.

d) Organisation of assessment

It is difficult to see any alternative to organising assessment around subjects or thematic areas, despite the difficulties this creates for some multidisciplinary research. The current breakdown appears to offer the right degree of granularity, although there is some scope for rationalisation and a reduction in the number of categories, e.g. in engineering. Retaining (broadly) the current set of subjects has the advantage of allowing comparison over time. It also recognises that planning and investment will have taken place based on these subject areas.

e) Strengths

The use of subject experts gives confidence in the panels' awareness of the relative significance of strengths within submissions. Any other type of panel would need to call in expert advice in order to ensure confidence within the community that submissions are being assessed by peers.

Weaknesses

Weaknesses associated with this approach include the perception that panel members may be biased towards particular subject or sector areas, and the possible difficulty of introducing new thinking.

Group 2: Algorithm

a) Should research assessment be based entirely on metrics?

The University has concerns about using metrics alone to assess research. It would be difficult to assess intangible benefits if research assessment were reduced to numerical measures of outputs.

b) Available metrics

Metrics should include external research income, publications in academic journals, a wider set of publications, patents, research student numbers (including overseas research students) and completions. Citations could also prove useful in identifying seminal work provided some precautions were taken to eliminate self-citation or mutual citation within a small group.
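By way of illustration only (the data structures and figures below are hypothetical and form no part of this response), one simple way of screening out self-citation and mutual citation within a small group before counting citations might be sketched as follows:

```python
# Illustrative sketch only: count citations of each paper while excluding
# any citation where the citing and cited papers share an author
# (self-citation or mutual citation). All data here is hypothetical.

def filtered_citation_count(paper_authors, citations):
    """paper_authors: dict mapping paper id -> set of author names.
    citations: list of (citing_paper, cited_paper) pairs.
    Returns a dict of cited_paper -> count of independent citations."""
    counts = {paper: 0 for paper in paper_authors}
    for citing, cited in citations:
        shared = paper_authors.get(citing, set()) & paper_authors.get(cited, set())
        if not shared:  # keep only citations with no shared authorship
            counts[cited] = counts.get(cited, 0) + 1
    return counts


if __name__ == "__main__":
    authors = {"p1": {"A", "B"}, "p2": {"B", "C"}, "p3": {"D"}}
    cites = [("p2", "p1"),   # shared author B -> excluded
             ("p3", "p1")]   # independent -> counted
    print(filtered_citation_count(authors, cites))  # {'p1': 1, 'p2': 0, 'p3': 0}
```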

c) Relating available metrics to research strength

It is unlikely that a single formula could be provided to relate the available metrics to research strength. The metrics quoted above have different significance in different areas of research and, even within a single area, there are likely to be different profiles. A good example of this is in some areas of science and technology where one or two groups, who are doing work which is highly regarded by their peers, show low scores on some metrics around publication because of commercial confidentiality issues.

d) Effect on behaviour

Inevitably, when funding is tied to any means of assessment, behaviour is affected, and over time this would reduce the reliability of the metrics.

e) Strengths

The strength of an approach purely tied to quantitative measures is its obvious transparency.

Weaknesses

The weaknesses, as indicated above, are that there is no way of assessing intangible benefits and that behaviour is likely to be distorted.

Group 3: Self-assessment

The University sees this as a positive approach with a lot to commend it. Institutions would be able to highlight their own strengths and would have to justify their assessment.

a) Data

This should include the metrics discussed above.

b) Scope of assessment

Assessments should combine retrospective and prospective analyses.

c) Criteria

Self-assessment should be set within the context of an institution's own research strategy and should demonstrate an ability to reflect on past achievements and use them as a basis for future development. A key criterion must be that the work (past, present and future) contributes additional value to the subject area. This can be addressed through considerations of benefit to society, return on investment and student success.

A rolling assessment system in which groups could choose when they were ready for assessment (within some minimum and maximum time frames) would be advantageous. This would reduce short-termism and would allow newer research groups to be assessed when they were ready rather than at an arbitrary point in time. Measures of progress through an assessment period and of "added value" would be beneficial.

d) Validation

This assessment system could be validated by audit or by referees, and it potentially offers more control and flexibility.

e) Burden of assessment

There is potentially a greater burden, but this could be minimised by ensuring that internal processes followed the same pattern, so that the self-assessment was produced automatically as part of the institution's own monitoring and evaluation cycle.

f) Strengths

Such a system would be more flexible and, if properly structured, less burdensome to individual institutions. Overall there would be a considerable saving compared with the present data gathering exercise.

Weaknesses

If self-assessment also involved moving to the rolling assessment described above and to greater flexibility in submission, there may be issues of consistency.

Group 4: Historical ratings

a) Acceptability

A totally historic approach is not acceptable, as in practice the distribution of research strengths can change very quickly, even in apparently well-established departments. There are many examples of a senior academic figure leaving one institution for another and taking a substantial part of his or her research group. New areas of enquiry are also subject to more rapid change.

b) Measures

If a system partially based on historic data were to be used, a baseline could be established from data from the last three research assessment exercises (i.e. those which covered the whole span of university activity rather than just the pre-1992 institutions). If this approach were taken, the key is to look at the trend as well as the absolute value for an area: a unit of assessment which had moved from a 2 in 1992 to a 3b in 1996 and a 3a in 2001 might be considered to be showing more potential than one which had remained at 3 throughout, or indeed one which had moved from a 4 to a 3.
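Purely as an illustration (the numerical mapping of RAE grades and the weighting below are assumptions, not part of this response), a trend-sensitive reading of the three historic grades might be sketched as follows:

```python
# Illustrative sketch only: combine the most recent grade with the trend
# across the 1992, 1996 and 2001 exercises. The numeric grade mapping and
# the trend weight are hypothetical assumptions.

GRADE_SCORE = {"1": 1, "2": 2, "3b": 3, "3a": 3.5, "4": 4, "5": 5, "5*": 5.5}

def trend_adjusted_score(grades_1992_1996_2001, trend_weight=0.5):
    scores = [GRADE_SCORE[g] for g in grades_1992_1996_2001]
    trend = scores[-1] - scores[0]          # improvement over the period
    return scores[-1] + trend_weight * trend

print(trend_adjusted_score(["2", "3b", "3a"]))   # improving unit: 4.25
print(trend_adjusted_score(["3b", "3b", "3b"]))  # static unit: 3.0
print(trend_adjusted_score(["4", "3a", "3a"]))   # declining unit: 3.25
```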

c) Identifying failure or exceptional performance

Comparing data from the annual research activity survey across a number of years could be used to identify institutions which are failing or those outperforming expectations. Again, where the amount of research activity is small, it is essential that such an analysis is made by looking at the trend across a number of years rather than at a single set of figures. Comparing some of the output measures from the research activity survey with the input in terms of research funding from the funding council could provide an indication of value for money, although this would need to be done on a subject-by-subject basis.
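Again as a sketch only (all figures and field names are hypothetical), a crude output-per-unit-of-funding ratio tracked across years on a subject-by-subject basis might look like this:

```python
# Illustrative sketch only: compute an outputs/funding ratio per subject
# per year, so that the trend rather than a single figure is inspected.
# All figures and field names are hypothetical.

def value_for_money_trend(records):
    """records: list of dicts with 'subject', 'year', 'outputs', 'funding'."""
    ratios = {}
    for r in records:
        ratios.setdefault(r["subject"], []).append(
            (r["year"], r["outputs"] / r["funding"]))
    return {subject: sorted(series) for subject, series in ratios.items()}


data = [
    {"subject": "Engineering", "year": 2001, "outputs": 40, "funding": 200.0},
    {"subject": "Engineering", "year": 2002, "outputs": 55, "funding": 210.0},
    {"subject": "Engineering", "year": 2003, "outputs": 70, "funding": 220.0},
]
print(value_for_money_trend(data))
# {'Engineering': [(2001, 0.2), (2002, 0.2619...), (2003, 0.3181...)]}
```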

d) Effect upon behaviour

This approach offers the opportunity for long-term planning and development to take place in a relatively stable environment. However, if there is no clear way of identifying improvement or decline in an institution, there is a danger that the approach leads to stagnation, with the well funded growing complacent and the less well funded becoming demoralised, with a negative impact on research as a whole.

e) Major strengths

A substantial decrease in the bureaucracy associated with the exercise. As noted above, this approach also offers the opportunity for long-term planning and development to take place in a relatively stable environment.

Weaknesses

If the approach is purely historic, it militates against those working to improve the level of their research or to create research strengths in emerging areas.

Group 5: Crosscutting themes

a) What should research assessment be used for and by whom?

We feel that the results of the current exercise are used in the provision of management information.

b) How often should research be assessed?

We believe that some flexibility in timing of submission should be allowed and negotiated with the institutions themselves. There is no compelling reason for all subjects to be assessed simultaneously. Some control over frequency might be retained by the Funding Councils by using “time since last submission” as a form of confidence interval.

c) What is excellence in research?

We believe that there are different aspects of research activity, each of which demands recognition, and that excellence can be defined in a number of different ways, including innovation, creativity and applicability…

d) Proportion of available funding per subject

We believe the assessment exercise should play some part in determining the proportion of available funding directed towards each subject. In particular a strategic judgement should be made on the importance of an area to the UK and the volume of research in the subject that meets a given quality threshold.

e) Should institutions be assessed in the same way?

We support the notion that, within a set of broadly similar subject areas, assessment methods should be as similar as possible.

g) Institutional discretion

We believe that the large degree of control which institutions have over the content of their submissions under the current system is essential in enabling them to reflect diversity of mission and in allowing the nature of each institution's research to be appropriately assessed.

h) Support for equality for all groups of HE staff?

Concerns have been expressed with respect to gender equality issues. These were at least partly addressed in RAE 2001 by allowing special circumstances to be described.

i) Important features of an assessment process?

It should be rigorous, fair, efficient and transparent.