C. Beaumont et al.

Reconceptualising assessment feedback: a key to improving student learning?

Chris Beaumonta[*], Michelle O’Dohertya, Lee Shannonb

aEdge Hill University, UK; bLiverpool Hope University, UK

This article reports the findings of research into the student experience of assessment in school/college and Higher Education, and the impact of transition upon student perceptions of feedback quality. It involved a qualitative study of 23 staff and 145 students in six schools/colleges and three English universities across three disciplines. Results show that students experience a radically different culture of feedback in schools/colleges and Higher Education: the former providing extensive formative feedback and guidance; the latter focusing upon independent learning judged summatively. Students perceived quality feedback as part of a dialogic guidance process rather than a summative event. We propose a model, the Dialogic Feedback Cycle, to describe student experiences at school/college and suggest how it can be used as a tool to scaffold the development of independent learning throughout the first year of university study.

Keywords: assessment; feedback; transition.

Introduction

It is well known that assessment defines the higher education curriculum in students’ eyes (Ramsden, 2003) and has a major influence on their learning (Biggs, 2003), being viewed as a more powerful driver than teaching in determining what students do and how they do it (Boud, 2007). An established principle of good practice is that ‘action without feedback is completely unproductive for a learner’ (Laurillard, 2002, 55). Likewise, a compelling consensus emerges from research that high quality feedback is the most powerful single influence on student achievement (Hattie, 1987; Brown and Knight, 1994), and we know that students want and value quality feedback (Hyland, 2000; O’Donovan, Price and Rust, 2004). Therefore, assessment has long been viewed as the catalyst for improvement in teaching and learning: ‘If you want to change student learning then change the methods of assessment’ (Brown, Bull and Pendlebury, 1997, 7). Today, the provision of quality feedback is widely perceived as a key benchmark of effective teaching (Ramsden, 2003) and as a vital requirement in meeting students’ expectations (Higgins, Hartley and Skelton, 2001, 2002).

But how to change assessment in practice to meet these expectations has proved problematic, and feedback quality remains a major concern for higher education institutions. Feedback quality has received the lowest satisfaction scores in the National Student Survey for five consecutive years (National Student Survey, 2005-2009). In 2009 fewer than 55% of respondents in England agreed that feedback had been detailed, prompt or helped clarify understanding, in marked contrast to an overall course satisfaction rate exceeding 80% (National Student Survey, 2009). Whilst this relatively low level of student satisfaction with feedback raises concerns, it is all the more significant because feedback quality has featured as a frequent cause for concern in Quality Assurance Agency subject reviews (Quality Assurance Agency, 2003). In response, tutors will often echo the observation that ‘it is not inevitable that students will read and pay attention to feedback even when that feedback is lovingly crafted and provided promptly’ (Gibbs and Simpson, 2004, 20); on the other hand, research shows that lecturers often believe their feedback to be more useful than students do (Carless, 2006; Maclellan, 2001), and Williams and Kane (2009) suggest students need dialogue with tutors to help them interpret comments. More recently, the NUS Student Experience Report (2008) stated that 71% of students wanted verbal feedback on coursework in an individual meeting but only 25% were given such an opportunity, and that although 62% of students responded that the timing of feedback met their expectations, of those responding ‘no’, 54% wanted feedback to be returned within one or two weeks. Given the significant implications of these differing perspectives, this study addressed one question that needed to be asked: what concepts of quality feedback are informing such an apparent mismatch in perceptions?

Frameworks for good practice in feedback have been developed, but it is noteworthy that attempts to conceptualise the nature of quality feedback within higher education have been positioned within a process of formative rather than summative feedback (Gibbs and Simpson, 2004; Nicol and Macfarlane-Dick, 2004, 2006). Furthermore, resource constraints coupled with mass expansion in higher education have reduced opportunities for formative assessment to be practised (Yorke, 2003; Gibbs, 2006). If we take Sadler’s suggestion that summative assessment is largely for the purpose of summarising the achievement of a student, an essentially passive process that does not have any immediate impact upon learning (Sadler, 1989), then today, with increasing numbers of first-year undergraduates finding themselves in large classes that ‘end load’ assessment (Hounsell, 2007), summative feedback remains the dominant discourse (Boud, 2007). But summative judgement is the problem (Burgess, 2007). This article reports staff and first-year undergraduate student perceptions of ‘quality feedback’, as they experience and attempt to negotiate the impact of these changes.

At the same time, within the school sector a concerted attempt has been made to embed an assessment for learning culture (Assessment Reform Group, 1999; Sutton, 1995) within the curriculum, and assessment for learning is now a central part of government policy (Department for Children, Schools and Families, 2008), leading to ‘a marked divergence in assessment practices and in the assumptions which tend to drive them’ between sectors (Murphy, 2006, 39). However, whilst seminal research has been conducted on the assessment experience of students in schools (Black and Wiliam, 1998; Black et al., 2003) and universities (Hounsell, 2003), there are few studies that investigate the impact of the former on the latter. This article summarises a cross-sector study funded by the Higher Education Academy that makes this connection and addresses this gap in the research literature, positioning first-year undergraduate expectations of quality feedback within the context of their prior experience of a culture of formative assessment. In this way, the article attempts to meet Haggis’s call to develop our understanding of our students’ learning by a ‘step into the unknown’ (Haggis, 2009, 389).

Methodology

The main aims of the research were to:

  1. explore tutors’ and students’ perceptions of what is considered quality feedback;
  2. investigate the impact of prior experiences of assessment on students’ expectations of feedback practices in higher education;
  3. identify barriers to providing quality feedback.

Qualitative methods were used to provide research findings with a ‘deep’ narrative that can usefully inform what is actually taught (Gibbs, 2002).

We used semi-structured focus groups to explore the perceptions of students (n=37) who were applying for university at three schools and three colleges in the north-west of England; their teachers were also interviewed (n=13). First-year undergraduates were surveyed in focus groups at a university in the north of England (N) across three disciplines: Psychology (students n=24, tutors n=4); Education Studies (students n=24, tutors n=3); and Performing Arts (students n=17, tutors n=3). A cross-institutional perspective was also obtained by repeating the focus groups with Psychology students at two other universities, in London (L, n=29) and the Midlands (M, n=14). Data collection took place at the universities at three points in the year, from October to May, to investigate changes in student perceptions. Since focus groups were self-selecting, we also employed a questionnaire using Likert-scale questions to check the validity of our findings, in October (n=176) and May (n=64).

Focus group and interview data were recorded and transcribed. Following the approach suggested by Braun & Clarke (2006) for Thematic Analysis, data were examined by searching for commonly occurring patterns of views, experiences and underpinning concepts. This is a theoretical framework in which codes are created by interpreting the data to identify important recurring themes that are of interest, and iteratively refining them for internal coherence and mutual exclusivity (as far as possible). Three researchers independently carried out a thematic analysis of the raw data and subsequently collaborated through an iterative process to reach consensus. Member validation was used to verify interpretations whenever possible.

The sampling procedure involving nine institutions and three disciplines was designed to increase reliability. It is our contention that this approach, which yielded consistent findings, enables us to propose recommendations of interest to the wider academic community.

Analysis of Results

In this section we summarise the results, providing illustrative quotations that were judged typical.

Students’ perceptions of what constitutes quality feedback

We identified two explicit definitions from students’ responses in all focus groups: a judgement of the standard reached (How well we’ve done) and instructions for learning improvement (How you could do better). Beyond these interpretations, students’ descriptions of what constituted quality feedback were intertwined with descriptions of assessment tasks, tutors’ behaviour and the general guidance environment that they experienced. We identified two themes which were present in all focus group conversations: feedback as a system of guidance which gave reassurance, and the importance of student-tutor dialogue with available or approachable tutors in this process. Quality feedback was also viewed as both written and verbal, provided within the context of a personal relationship and framed by classroom interactions. This view was most prevalent in school pupils’ responses, but was also recounted by undergraduates in examples of good practice. The following voices illustrate these key themes:

… personal feedback and being there with the person makes such a difference

… we got like a five-minute meeting with him after the drafts, … a week later, he’d go through everyone’s with them personally to say what you can do, it’s better than having it written down because you don't always understand what he’s written.

From the student perspective, our analysis shows that quality feedback is perceived as a system of guidance that provides not only a summative judgement of performance but also support, through opportunities for discussion that identify areas for improvement and scaffold the student towards higher grades.

In the next section we discuss and elaborate a model of this system, mapped from students’ descriptions of their pre-university experiences, with selected representative quotes relating to emergent themes to provide illustrative evidence.

School students’ perceptions of their pre-university experiences

Student responses frequently revealed that in school the student experience starts with preparatory guidance for an assessed task, progresses through an in-task guidance phase, and ends with post-submission performance feedback. A model describing this process, which we call the Dialogic Feedback Cycle, is shown in Figure 1. Each of the three stages includes typical activities that students frequently referred to, and each stage is represented as a cycle to emphasise the iterative dialogue that students often highlighted.

(Figure 1)

In the initial preparatory guidance phase of the cycle, common activities were the provision of explicit marking schemes and criteria, which were discussed in class with the use of exemplars and/or model answers. The opportunity for discussion in class was emphasised. In four of the six schools/colleges we surveyed, students indicated that information from their previous performance was also used to set target grades for each individual. One student described it as follows:

It’s taken from your GCSEs[English public examination at age 16] and it’s… like the lowest you’re expected to get, so like if you do a piece of work and it’s below, you often have to do it again until it’s either above the target grade or on your target grade.

The importance of grades was emphasised at the start of the assessment process. Assessment criteria are inextricably related to grades, and students in all schools/colleges surveyed displayed a strong awareness of the criteria/marking schemes as demonstrated by these extracts:

... when I did my A-levels [English public examination prior to university entrance] we knew exactly what they wanted from us.

You're encouraged to use it a lot in class, like if you're doing coursework they’ll give you a sheet or like assessment criteria, then they’ll teach you in class how you can do this and help you in your coursework,

The last comment demonstrates two points: first, the systematic opportunities provided for active engagement with the use of criteria as students move into the in-task guidance phase of the cycle; second, efforts to promote self-assessment, although very few students reported independently using self-assessment.

Two approaches, identified by both students and teachers, for promoting engagement with the criteria were peer marking and marking of exemplar material. Students in all focus groups reported having experience of peer marking, although it met with a mixed reaction. Some regarded it as constructive and motivational:

It’s good because it gives you somebody else’s perspective on your work that you might not be able to see … it makes you try harder because …, you don't want to look stupid in front of everyone else.

However, a much greater proportion reported negative experiences relating to trust, competency, and plagiarism:

I’d feel like my essay hasn’t been marked properly.

I’ve had my work copied twice.

I didn’t like it at all … lack of trust in other people I suppose

Exemplar material was seen as an essential means of modelling what was required:

… they do it more when you are planning an essay … examples of what you have to put in … but you have to do it though.

Teachers and students cited a high level of discussion and interaction at the in-task guidance phase: the assignments given to students were often broken down into smaller tasks, and students in all focus groups related that they could submit (often multiple) drafts to the teacher; almost all students reported receiving written and verbal feedback within one week of submission. The feedback was frequently reported to be specific and detailed, and face-to-face support was offered both formally (in lessons and timetabled support classes) and informally (for example, at lunch breaks). Students in all the schools/colleges also acknowledged the ease of access to, and frequency of, teacher support.

At this stage, the role of drafts was identified by both teachers and students as particularly important; some school departments had rules about the number of drafts a student could submit, although students suggested that these were not rigorously applied:

… they had the option of five drafts … then the final.

It’s normally only two but it depends.

… we could hand coursework in as many times as we wanted.

However, this type of support can also be ‘misused’ as both students and teachers identified gaming behaviour associated with the use of drafts:

If you do it too many times, it ends up with the teacher kinda writing it for you

When school students and teachers were told that it was not common practice for students to submit drafts at university, they considered that it would be problematic:

… we’ve learned to rely on drafts and rely on feedback, so if you're not getting that at university, it’s going to be a big shock.

In the final stage, performance feedback was usually delivered in both written and verbal form, again providing opportunity for dialogue. Teachers also emphasised the process of consistently and systematically using the criteria laid down by the examination boards:

... they are getting marked according to the exam scheme all the way through and eventually it sinks in … we absolutely hammer, the main thing ...

The students interviewed expressed a strong desire to receive grades/marks together with feedback comments. Both teachers and students perceived the school system as being focused on improving grades. A further theme of reassurance and motivation also permeated the study in responses of students and teachers, demonstrating a strong, shared awareness of the power and impact on self-esteem that assessment and feedback can have.

Feedback on drafts was often reported to be attended to by students and seen as critical; however, students mentioned action planning as a result of post-assessment feedback on just two occasions.

When students in school did cite poor feedback examples, they focused upon feedback that could not be understood or provided insufficient detail as to how to improve:

We used to get like question marks next to things and you’d go “what does that mean?”.

We don’t get told where we’ve gone wrong, so we don’t know how to improve …

In general, students self-reported being highly satisfied with their experience of feedback in school/college. In response to our questionnaire about their previous institution, 80% agreed that feedback was clearly related to the assessment criteria and useful; 75% that feedback was frequently encouraging; 62% that feedback was provided in enough detail; and 65% that they were able to receive feedback on drafts.

University students’ perceptions of first-year experiences

Our survey results show that dissatisfaction with feedback is apparent within the first three months of entering higher education and that perceptions did not significantly alter throughout the first year. Both qualitative and quantitative data confirmed that students in the study experienced a transition from high to lower satisfaction ratings for feedback.

Respondents were drawn from two of the three institutions, L (n=61, all Psychology) and N (total n=115: 12 Psychology, 68 Education Studies and 35 Performing Arts). When surveyed about their expectations early in their first term, 91% expected feedback to be given in enough time for it to be useful to them; when surveyed again at the end of their first year (n=64), only 49% agreed that this had been their experience. 92% expected that feedback would help them to improve their work; only 60% felt that they had actually been able to improve as a result of feedback. 89% expected to understand the feedback they were given, but only 65% agreed that they understood the feedback they actually received.

Such results raise the immediate question – by what standard is the quality of feedback in higher education being judged? We suggest that the answer lies in the Dialogic Feedback Cycle model outlined above and we use this model as an analytical framework.