Improving Student Assessment and Feedback in Biosciences (ISAFB)

Mike O’Driscoll

Dr Pete Sanders

Faculty of Health and Medical Sciences

Supported by the Fund for the Strategic Development of Learning and Teaching at the University of Surrey

Table of contents

1. Introduction

1.1 The importance of assessment and feedback in learning

1.2 Terminology regarding programmes used in this report

1.3 Satisfaction with assessment and feedback across all biosciences and related programmes at the University of Surrey; National Student Survey (2007/8) and Student Course Experience Questionnaire (2008)

1.4 Satisfaction with assessment and feedback within biosciences and related programmes at the University of Surrey; NSS (2007/8)

2. ISAFB project

2.1 Methodology

2.2 Findings from ISAFB project - student and staff perspectives of assessment and feedback

2.2.1 Formative assessment

2.2.2 Summative assessment

2.2.3 Marking criteria being made clear in advance

2.2.4 Peer assessment / review

2.2.5 Awareness of assessment and feedback in relation to learning style

2.2.6 Feedback - low satisfaction with timeliness and level of detail

2.2.7 Standardised feedback forms

2.2.8 Online assessment and feedback

3. Conclusion

3.1 Assessment

3.2 Feedback

4. Recommendations for improving the student experience of assessment and feedback in biosciences

References / Bibliography

1. Introduction

1.1 The importance of assessment and feedback in learning

There is considerable evidence in the literature that assessment and feedback are amongst the most important contributory factors in an effective and enjoyable learning experience for students, yet there seems to be relatively little evidence about how teachers understand or manage the relationship between assessment and learning.

The central importance of assessment in understanding how students (including those in higher education) learn has been recognised for some time (Snyder, 1971; Miller & Parlett, 1974). Miller & Parlett examined the sensitivity of students to ‘cues’ about the assessment system and constructed a student typology which identified ‘cue seekers’, who proactively sought to use lectures to predict exam content; the ‘cue conscious’, who were less proactive but nonetheless sensitive to implicit or explicit tips given by teaching staff about what was likely to be in exams or what would be given most value in them; and the ‘cue deaf’, who were unable or unwilling to ‘hear’ useful information about assessment.

Thomas and Bain (1984) found that the most influential feature of the learning environment was the nature of the assessment procedures. The results of their study showed clearly how a change from multiple-choice to essay-based examinations had shifted the overall tendency of the students from a surface approach towards a deep approach. Entwistle & Entwistle (1991) confirmed that multiple-choice formats, or an emphasis on detailed factual answers, pushed students towards a surface approach, while open, essay-type questions encouraged a deep approach. The deep approach has been associated with quality learning outcomes and better grades, while the surface approach has been related to low quality learning outcomes (Ramsden, 2003).

Marton and Säljö (1997) also found that the form of assessment expected by students had a profound influence on their learning styles. Their research also demonstrated that inducing a deep approach can be challenging and, unless the intervention is carefully designed, may inadvertently induce a surface approach instead. Eizenberg (1988) stressed that any component within the learning environment which contradicts the direction of influence of the other components might prevent the intended effect from being achieved. This implies that maximising the effectiveness of teaching depends on conveying a consistent message to students regarding what type of content or structure will be rewarded in assignments and examinations (Entwistle & Tait, 1995). More recent qualitative studies (Sambell et al., 1997) have described students’ negative perceptions of traditional assessment procedures (such as exams) and more positive attitudes to innovative assessment.

Gibbs and Simpson (2005) outline the key conditions under which assessment can be seen as supporting student learning. These include assessment that captures the time and effort put in by students; assessment that engages the student in productive and appropriate learning activity (such as deep learning); and constructive, timely and detailed feedback which is acted upon by the students. Gibbs and Simpson give examples from their research where students drew a distinction between learning that gives them a good understanding of the subject and learning which is orientated to succeeding in assessment; students commonly feel that, when under pressure, they must adopt the latter type of learning.

The Formative Assessment in Science Teaching (FAST) project (Open University and Sheffield Hallam University, 2008) co-ordinated a series of interventions in higher education science teaching settings to apply the ‘principles’ outlined by Gibbs and Simpson (2005). The project has generated a wealth of case studies and a library of resources related to assessment and feedback, but it seems that the material produced is not yet available as a single report or paper.

Struyven et al. (2002), writing from a constructivist perspective on learning, also argue that students’ perceptions of assessment have considerable influence on their approach to learning. Students’ preference for particular forms of assessment reflects their preferred learning style, e.g. ‘deep’ learners tend to prefer forms of assessment which allow them to demonstrate such learning (Entwistle and Tait, 1990).

With regard to feedback, Gibbs and Simpson (2005) suggest that it is most effective when it is frequent; timely; sufficient; detailed; linked to the purpose of the assessment task and criteria; appropriate to the student’s level of understanding; and focused on learning rather than marks, by relating explicitly to future work and tasks. Evidence from the FAST project (Glover and Brown, 2006) found that tutors believed they were providing plenty of good-quality feedback, and students reported that they did pay attention to this feedback but were often unable to use it because it was untimely or insufficient in length or detail. The most significant barrier to students’ use of feedback seemed to be that the feedback was not relevant, i.e. it was topic-based and did not help them to improve their performance on future topics. Glover and Brown (2006) distinguish this kind of unhelpful feedback from what they term ‘feed forward’; the latter facilitates future learning and in that sense may be used by the student in a formative way.

1.2 Terminology regarding programmes used in this report

Biosciences = All programmes

Nutrition = Nutrition, Nutrition / Dietetics, Food Science* & Nutrition programmes

Biological Sciences = Biochemistry, Microbiology (inc. Food Science* variant), Biomedical Science programmes

*Food Science programmes are joint degrees and students are split in the National Student Survey (NSS) according to the associated programme i.e. Nutrition or Microbiology. The majority of Food Science students are included in Nutrition.

1.3 Satisfaction with assessment and feedback across all biosciences and related programmes at the University of Surrey; National Student Survey (2007/8) and Student Course Experience Questionnaire (2008)

A majority are satisfied with their programme overall, and with the fairness of assessment, but only a minority are satisfied with feedback. Improving satisfaction with feedback could therefore be seen as a priority.

Chart 1: Percentage satisfaction with assessment and feedback across all biosciences programmes at the University of Surrey, 2006/7 and 2007/8.
  • (Q.6) Evidence from the National Student Survey (NSS 2007/8) suggests that student satisfaction with assessment in biosciences is fairly high in absolute terms, with just under two-thirds being very or fairly satisfied that ‘assessment and marking arrangements have been fair’.
  • (Q.7 + Q.8) Satisfaction with feedback is much lower than satisfaction with assessment within biosciences; just over a third of respondents (37%) agreed that ‘I have received detailed comments on my work’ and just 42% agreed that ‘feedback on my work has been prompt’.
  • (Q.9) A similarly low proportion (40%) agreed that ‘feedback on my work helped me clarify things I did not understand’.
  • (all questions) There is little change from 2006/7 to 2007/8 in satisfaction with assessment and feedback in biosciences.
  • (all questions) Satisfaction with most aspects of feedback and assessment amongst biosciences students is considerably below the University of Surrey average (although overall satisfaction amongst biosciences students is extremely high at 91% – see Chart 2), with Biological Sciences at the University of Surrey ranked second nationally in terms of satisfaction in the 2007/8 NSS (not shown on chart).

The SCEQ (2008), although based on a fairly small sample and restricted to level 2 undergraduate students and those on taught postgraduate programmes (whose responses are aggregated in reporting), indicates (Table 1, below) that students feel that memory and factual data are emphasised at the expense of understanding. For example, mean agreement in biosciences programmes that ‘the emphasis in assessment is on memorising rather than understanding’ is fairly high but is below (i.e. better than) the University of Surrey and FHMS averages. The SCEQ (2008) data also shows that, within FHMS, nearly half of students consider that the only feedback they receive is in the form of marks or grades. The data per programme on this question is not given in the SCEQ report.

Table 1: SCEQ (2008) Mean scores on appropriate assessment scale – biosciences and related programmes against faculty and university averages (scale runs from 1 ‘strongly disagree’ to 5 ‘strongly agree’). Adapted from SCEQ (2008: 44-46); table 15 and table 16.

Mean score on these three items (Appropriate Assessment Scale). N.B. because these questions are negatively phrased, lower scores are ‘better’:
  • This programme involves more assessment of what I have memorised than what I have understood
  • Too many teachers ask me questions just about the facts
  • To do well in this programme all you really need is a good memory

Mean scores: Biomedical Science 3.04; Biochemistry (MSc programme only) 3.00; Microbiology 3.11; Nutrition 3.15; FHMS average 3.36; University of Surrey average 3.31

Taking the NSS (2007/8) and SCEQ (2008) together it would be reasonable to conclude that satisfaction with assessment and feedback amongst students of biosciences and related programmes is at or below the university average, depending on the particular measure of assessment used.

There are notable differences in satisfaction levels by programme within the biosciences ‘group’ of programmes according to the NSS (2007/8) data, with nutrition tending to have the lowest satisfaction with assessment and feedback and biological sciences tending to have the highest. Interestingly, the wide variations in satisfaction with assessment and feedback within biosciences programmes are not reflected in the SCEQ (2008) data.

Interestingly, the relatively low satisfaction with feedback and some aspects of assessment in biosciences and related programmes appears to have had a limited impact on overall satisfaction with the programme, which is extremely high (91% in NSS 2007/8).

1.4 Satisfaction with assessment and feedback within biosciences and related programmes at the University of Surrey; NSS (2007/8)

NSS 2007/8 data shows that, as in 2006/7, overall satisfaction amongst students of biosciences and related programmes is extremely high (and above the university average), as is satisfaction with assessment, but satisfaction with feedback is moderate to low (and often below the university average).

Chart 2: Percentage satisfaction with assessment and feedback within biosciences and related programmes at the University of Surrey, 2007/8. N.B. Overall average satisfaction on assessment and feedback differs slightly from that in Chart 1 (see note at end of section 2.1).
  • Overall student satisfaction with biosciences and related programmes is extremely high, ranging from 100% in microbiology to 76% in nutrition
  • Average student satisfaction across all biosciences and related programmes is 91% but the equivalent figure for satisfaction with assessment and feedback is less than half this figure at just 49%.
  • Satisfaction with assessment and feedback is relatively low and in many cases is considerably lower than the university average.
  • There is significant variation within the biosciences and related programmes with regard to satisfaction with assessment and feedback. Typically, biological sciences achieves the highest satisfaction on assessment and feedback, followed by microbiology, with nutrition some way behind.
  • For nutrition students, satisfaction with all aspects of assessment and feedback is low, ranging from just 20% satisfied with the promptness of feedback to 44% satisfied with the fairness of assessment.

2. ISAFB project

While the data from the NSS and the SCEQ are extremely useful, such quantitative survey data does not constitute an adequate explanation of why students are satisfied or dissatisfied with various aspects of assessment and feedback. SBMS teaching staff, now integrated into the new Faculty of Health and Medical Sciences (FHMS), have attempted to strengthen feedback and assessment. Innovations have included increasing the time spent providing feedback on both formative and, more importantly, marked summative assessments, such as end-of-module examinations which were previously unseen by the student. The measures taken to improve assessment and feedback vary from programme to programme, ranging from large group tutorials reviewing students’ performance in module examinations to one-to-one sessions reviewing an individual examination paper. Teaching staff in SBMS also introduced elements of peer review and peer assessment, with some online elements. It was hoped that this would provide information to aid students in understanding how marks were gained (or lost) in a more efficient and timely way, furthering students’ knowledge of both the subject and the assessment criteria being used to measure individual achievement.

The ISAFB project aimed to provide qualitative data to understand whether innovations in assessment and feedback had affected the student experience, but also to facilitate interpretation of the NSS data and, more generally, to understand the student experience of assessment and feedback. The project aimed to assess staff and student perspectives on assessment and feedback on biosciences programmes at the University of Surrey and to see what learning might be derived from these, with a view to suggesting (albeit tentatively) some recommendations for improving the student experience of assessment and feedback in biosciences and related programmes. This pilot / exploratory project helped to identify areas for further research/consultation and suggests research questions which more systematic research might investigate at a later date.

2.1 Methodology

All students on biomedical sciences, microbiology, biochemistry, food sciences, nutrition, and dietetics undergraduate programmes were invited to attend a focus group at a campus location. The focus groups were organised by programme subject area. A total of eight students attended three focus groups.

As many staff teach on more than one of the programme areas listed above, it was felt that two focus groups would be appropriate: one for staff who mainly teach on biomedical sciences, microbiology, or biochemistry, and the other for staff who mainly teach on nutrition, food sciences, or dietetics. A list of staff supplied by the Director of Undergraduate Studies was used to invite people to attend focus groups on campus. A total of four staff attended two focus groups.

The focus groups were transcribed and analysed thematically using ATLAS.ti software.

Please note that NSS findings regarding biosciences and related programmes are prepared according to a variety of definitions, so that small internal inconsistencies may be apparent within the data.

2.2 Findings from ISAFB project - student and staff perspectives of assessment and feedback

Findings from the project are summarised below.

2.2.1 Formative assessment

A working definition of formative assessment which was used in the focus groups was ‘mock exams or coursework which are marked but which do not count towards the final degree mark’.

Evidence from staff focus groups suggested limited use of formative assessment, but some examples were described. These included a Level 2 virology module assessed by means of three multiple-choice questionnaires, the first of which did not count towards the module mark. In this case, formative assessment was carried out by self-assessment, with students marking the first of the three questionnaires themselves. The students who had completed the assessment were recorded (but not their actual marks, so as to encourage participation in the assessment). This seemed to have been useful for the students, although one did comment that she did not see the point of doing the work if it did not count. Staff had ‘managed’ student resistance to formative assessment in some instances by blurring the distinction between formative and summative assessment (i.e. giving a very light ‘loading’ to a summatively assessed piece of work specifically to give students the opportunity to learn from the assessment without dropping too many marks if they ‘failed’).

In summary, it could be said that the use of formative assessment in biosciences was limited partly because of student resistance (which in turn was partly because of an assessment workload which both students and staff perceived as being high) and also perhaps because of a lack of staff time to plan and carry out formative assessment. It also seemed that there was perhaps a lack of confidence in using formative assessment and a lack of clarity in some cases about what the purpose of formative assessment was (e.g. for predicting performance in summative assessment or for just encouraging learning in a general sense).

2.2.2 Summative assessment

The working definition of summative assessment which was used in the focus groups was ‘assessment such as exams, coursework, or dissertation which is marked and which counts towards degree mark or grade’. As previously mentioned it appeared that, for a variety of reasons, there was little formative assessment occurring in biosciences and related programmes. Most assessment was apparently of a summative nature in that it was marked and counted towards students’ module or end of year marks. Summative assessment for Level 1 students (which does not count towards the final degree mark) seemed to mostly consist of multiple-choice questions, short answer questions or supplying the labels to diagrams.