Enhancing Learning Through Technology
Research Project Report 08/09
Technology Enhanced Feed-Forward for Learning
Dr Sue Rodway-Dyer and Elisabeth Dunne, December 2009

Abstract

This study explored feedback via i) audio feedback on a first-year undergraduate written assignment; ii) video feedback from ongoing laboratory sessions with first-year Biosciences students; and iii) audio feedback within clinical settings for medical students. Student and staff views were gained via surveys, focus groups, individual interviews and ‘stimulated recall’ sessions. Students have high expectations of feedback: they want the individual, face-to-face interaction they experienced at school and are not easily satisfied by other ways of working. Offering audio or video feedback that supports learning in both affective and cognitive terms is not necessarily easy. Clear practices need to be established for the dissemination of, and engagement with, audio feedback, covering tone of voice, register of language, optimum length and style of feedback.

Laboratory video feedback proved useful for training purposes in order to enable student demonstrators to be more effective and knowledgeable when offering feedback to students. The stimulated recall process proved to be a highly effective professional development tool.

Background

Rationale:

Exeter is now regarded as a ‘top-20’ university and has ambitions to be within the ‘top-10’ by 2012. In order to achieve this, it is strongly recognised that the student learning experience has a major part to play, and that Assessment and Feedback are key to student satisfaction. The National Student Survey places Exeter fourth overall in relation to traditional institutions but, as is the case nationally, responses to Feedback generally fall below scores on other aspects, and this has been reiterated by internal institutional and subject-based evaluations. Hence it could be said that improving feedback is mission-critical for the institution. It should also be considered mission-critical for every individual student, since feedback has been demonstrated as the most powerful influence on student achievement (Hattie, 1987; Black & Wiliam, 1998).

We proposed to explore ways of improving feedback for students. This was achieved specifically through the use of digital audio and screen-visual feedback within our e-learning environment. We used readily available technology and known modes of delivery, including our institutional VLE as well as portable devices, thereby making feedback more accessible and better attuned to today’s student.

Textually annotating assessments by analogue means can be time-consuming and tedious for academics, and written comments invariably lack emphasis and intonation. Being able to capture the tutor’s voice, or using visual highlighting of material, can provide a less stylised and more genuine means of feedback. These methods can be used for both individual and group feedback, with the potential to be less time-consuming and more direct for the tutor, and more immediate and satisfying for the student. Through providing feedback in a more engaging, digital form, it is anticipated that students’ attention will be gained more readily – as a first step towards their engaging with learning outcomes, reviewing feedback and acting upon it.

The technological aspirations outlined above are not new but the recent improvements in technology, together with the national urgency to improve student experiences allied to assessment and feedback, have given this agenda an added momentum. However, technology cannot, of itself, promote enhanced learning. The concept of academics offering feed-forward for learning is crucial, with students being given explicit points for action and improvement that support them in becoming self-regulated and reflective learners (Zimmerman, 2002).

Links to existing literature:

In 1992, Ramsden suggested that technology is changing the nature of university teaching. However, George (2002) considers it ‘an enabler, not a solution’ and McGettrick et al (2004) believe that e-learning remains one of the ‘grand challenges’ for education. Laurillard (2002) helpfully argues that any study of new approaches to technology should fit firmly within sound pedagogic principles and practices, and Stiles (undated) suggests that no sustainable change will happen unless traditional pedagogy is adapted for more active approaches to learning.

Literature on feedback is diverse; nonetheless, common themes emerge. For example, students need to:

  1. be given feedback that is timely (Cowan, 2003; Nicol & Milligan, 2006), and legible;
  2. recognize when they are being given feedback;
  3. understand their feedback and perceive it as useful, not confusing or contradictory (Irons, 2008);
  4. recognize that the specific addressing of assessment criteria can support understanding and development (Black et al, 2002; Sadler, 1989);
  5. use information gained from feedback (Gomez 2005); learning requires that students be actively involved in monitoring their own performance (Falchikov, 2005); hence ‘feedback’ and ‘feed-forward’ are complementary elements of the learning process and should focus on what needs to be done to improve.

These, and other national challenges relating to feedback and feed-forward, informed the project conceptualization and data collection. The project will complement and build on work published by the Scottish QAA enhancement project, several FDTL projects (notably the FAST project - Phase 4 FDTL at the Open University and Sheffield Hallam) and the work of the Higher Education Academy Subject Centres.

In the context of technology-enhanced feedback, including aural feedback, rhetoric abounds: how much better, for example, to have ‘the tone of voice, emphasis on particular words, the effective use of pauses, and the warmth of an encouraging tone when critical comments need to be made’ (Race, 2008). The new interest in aural feedback has led to a number of small-scale practitioner studies, and they do suggest that intonation counts; also that digital feedback suits today’s student (McCormack and Taylor, 2006; Denton et al, 2007; Bridge and Appleyard, 2007); that video feedback is preferred (Stannard, 2008); that immediate spoken observations on students’ practical sessions can serve as useful feedback (Epstein et al, 2002); that aural feedback tends to be more extensive, easier to access and understand, and with more depth (Merry et al, 2007; Gomez, 2008; Rotherham, 2008); and that it enables students to address their overall learning development (Ribchester et al, 2007).

However, there are warnings. Nortcliffe and Middleton (2007) describe an analogue-recorded feedback study wherein audio impacted on self-reflection and action, was preferred by students, and was less stressful and time consuming for staff; yet their most recent digital work suggests that audio feedback does not necessarily support achievement; and Irons (2008) argues that using technology for formative feedback ‘is not a cheap or easy option’.

Aims

  • To refine understanding of the impact of technology-enhanced feedback methods (e.g. audio feedback) on staff and students, in order to inform future feedback practice and strategy development (feed-forward interaction).
  • To encourage academics to consider key factors in effective feedback in order to promote a culture of positive engagement in feedback by students.
  • To ascertain whether the WrAssE technique of analysis (designed by researchers at the University of Plymouth for content analysis of written feedback) can be applied to audio feedback, or whether it needs to be adapted or further developed to provide appropriate descriptions for audio feedback.
  • To test the ‘stimulated recall’ discussion methodology to ascertain its usefulness both as a research tool, and in engaging staff with addressing their practice.
  • To provide resources and items for dissemination that can inform research and practice within The University of Exeter (for example, a guide to audio feedback), and within the sector more broadly (for example, publications in peer-reviewed journals).

Objectives: The project was designed to explore the provision to students of audio versions of their current face-to-face aural feedback sessions, on the assumption that this would further support student learning.

Methods

This study addressed audio feedback in three subject contexts: Geography, Biosciences and Medicine. Academics were involved within each subject area, each of them working in a different way dependent on their normal working context. Detailed data was gathered from the three subject areas, and from the practice of six academics (1 from Geography, 2 from Biosciences and 3 from Peninsula College of Medicine and Dentistry). Three academics were involved in ‘stimulated recall’ interviews.

The three contexts in which we have been working were:

  1. Geography – detailed audio feedback was given by a lecturer in the form of podcasts (MP3 files) to first-year undergraduate students (N=73) after their first 1500-word written assignment, towards the end of their first semester, for an Introduction to Earth System Science module. Data was gathered by the lecturer via a brief questionnaire on student perceptions of the value of such feedback, after both feedback and assignments had been returned to students. A small number of students participated in two focus groups, each lasting about an hour, selected to represent a) those who had opted for human geography and b) those who were taking physical geography. This was followed by short (5-minute) individual interviews. Students were emailed and asked to reflect on their views of audio feedback after 6 months, and a follow-up questionnaire was issued after 1 year to ascertain whether time had allowed the students to learn and feed forward.

Additionally, the written and audio feedback from the tutor was compared for similarities and differences in the style of feedback. A functional-qualitative framework based upon WrAssE (a LearnHigher project on analysing text-based feedback at the University of Plymouth: plymouth.ac.uk/wrasse/) was developed for the analysis of audio feedback. The tutor participated in a one-hour stimulated-recall interview, which reflected on the quality of the feedback he had given, and highlighted potential areas for improvement in future practice.

  2. Biosciences – in this context, the study addressed ongoing feedback within the laboratory setting with 180 first-year students. Support is offered to students throughout the 3-hour session by a lecturer and also via graduate teaching assistants and student demonstrators. These sessions were videoed on five occasions for this project, so as to capture the variety of ongoing feedback to students, parts of which were made available to students in video form via WebCT. A ‘Dragon’s Den’ activity provided further feedback for students and this, too, was videoed and made available to students for future reference. Additionally, for third-year undergraduate Environmental Microbiology workshops, which consisted of two sessions, the video footage was turned into a movie clip, loaded onto the Web and shown in the laboratory for students to use as a revision aid.

A total of 141 first- and second-year students completed a survey on feedback in laboratory sessions and beyond, in order to provide background data on, for example, the types of feedback valued by students, what helps them learn, the forms of audio and video already used by them, and whether these should be used for feedback purposes.

Additionally, two lecturers participated in one-hour stimulated recall interviews, which reflected on the quality of feedback given within the laboratory sessions, the environment and potential areas for improvement.

  3. Peninsula College of Medicine and Dentistry (PCMD) – three academics made audio recordings of naturally-occurring student activities involving individual tutee portfolio discussions or SSUs (student selected units) and academic clinical feedback sessions for 5 students. Following the audio feedback, email questions were sent to the students to establish their views on its usefulness. Again, stimulated recall discussions with tutors who engaged in the project reflected upon the feedback given, and WrAssE analysis of the feedback was carried out.

Ethics approval was sought from the University to ensure that safe and appropriate procedures were followed. This was of particular importance for PCMD, where approval was a long process covering 12 months: a research plan was developed, submitted for approval and repeatedly adapted to take account of changes in staffing and in the student sessions available, which meant juggling staff responses and timetabling while securing approval for the data collection.

Results

Geography results (Rodway-Dyer et al, pending publication)

The student retrospective questionnaires showed that:

  • The majority of students listened to the audio recording at least once; most listened twice, and some up to 4 or 5 times
  • 80% thought both audio and written feedback to be useful
  • The advantages of audio feedback included greater detail, depth, clarity and ease of understanding
  • The disadvantages were difficulty in relating the feedback to a specific point in the essay, and some negative reactions (not all students warmed to the tone of voice)
  • Students mentioned problems with illegibility of handwriting (20%)
  • 10% thought it did not have an impact on their future performance
  • 76% wanted face-to-face feedback from a tutor

In the longer term, audio feedback was seen to have provided feed-forward. Students remembered detailed comments on essay structure, use of case studies, diagrams, wider reading, referencing and content.

Student focus groups and interviews highlighted that the students were:

  • not used to non-traditional marking
  • not expecting audio feedback (even though they had been told)
  • not sure what was expected for university standards
  • not settled enough into university life to take the criticism
  • not used to looking at marking criteria

However, they:

  • realised they needed criticism to improve
  • now understood marking criteria
  • acknowledged the amount of detailed feedback given to them
  • had since gained higher marks

Analysis of the two feedback types did show differences between audio and written feedback. Audio feedback allowed for:

  • justification of the grade referring to the marking criteria
  • emphasis on specific words and essay sections
  • specific examples of what to expand on and how
  • references to lecture notes
  • comments on structure

Written feedback allowed for:

  • corrections on spelling and grammar
  • single comments on labels, dates etc as needed

The WrAssE analysis of the audio feedback allowed a detailed study of the feedback types and qualities in Geography. Percentage figures relate to utterances, an utterance being classified as a short sentence or phrase on a continuous topic:

  • Feedback function - analysis of essay content: 45%
  • Feedback function - description of intended feedback: 23%
  • Feedback function – evaluation: 15%
  • Quality functions of structure, authority and voice: 17%

Negative comments (mean = 16) outweighed positive comments (mean = 5), reflecting the critical nature of the feedback and its emphasis on areas for improvement. 6% of the feedback involved asking questions.

Biosciences results

The video context in Biosciences was far more complex. There was an ever-changing mixture of whole-class teaching, juxtaposed with periods spent talking with small groups or individuals, and with students participating in conversations. Different students or groups would have had different experiences in relation to feedback. It was decided that the WrAssE analysis was not appropriate as a tool in this context (indeed, it was not designed for anything other than textual analysis). As a consequence, the following kinds of clips were produced for future practice, student revision and teaching aids:

  • Introductory briefings and de-briefs for the demonstrators and GTA were created from the footage of each session, for use in briefings or revision.
  • Examples of good practice between the lecturer and students or demonstrators and students, such as questioning, getting students to explain work back and giving praise, were compiled into a resource bank to help in the training of demonstrators/GTAs.
  • Video clips were also created of the lecturer–student introductory briefings, in which the topic is introduced, related to previous work and set in the context of what will happen next, for students to look at.
  • Other important instructions were turned into a series of ‘Top Tips’ clips for each session, or into detailed explanations of practical problems such as ‘Possible Problems with Samples’ and ‘Hyperladder explained’. On the whole, clips followed a single theme and were kept to less than 10 minutes, because of download times and to maintain interest. Splitting the topics also enabled students who were less sure of their understanding to concentrate on key facts or skills.

The student questionnaire showed that:

  • All students expected to receive feedback and wanted verbal feedback that consisted of positive constructive criticism
  • Feedback was thought to be clear and understandable, immediate and timely
  • The majority felt that feedback is crucial to degree performance
  • The majority (84%) wanted some kind of feedback within every practical
  • 20% felt that they did not receive class feedback from the lecturer on a regular basis
  • Individual feedback from the lecturer – only 5% believed they received this every session, 32% on occasion and 39% never
  • Demonstrator feedback – just over half believed they received feedback every practical or every other practical
  • The majority of students felt demonstrator feedback was positive/useful but sometimes showed a lack of understanding and ability in answering student questions
  • 70% felt they received feedback from their peers on a regular basis
  • 20% wanted regular individual face-to-face feedback with the tutor in lab sessions

Interviews with the GTA and demonstrators highlighted that the administrative role of the GTA (which involved taking registers, creating teams for student tasks and helping the lecturer) meant that students only asked for help over protocol or hand-in dates. The GTA expressed worries regarding the training of demonstrators, with briefing sessions often held only just prior to the practical:

“I would like to perhaps see demonstrator briefing sessions prior to the practical. Sometimes briefings are carried out half an hour before the practical session begins, which can cause problems if the subject area is very different to a demonstrator’s background, so a bit more time to gain some understanding with the help of the practical co-ordinator would help, rather than in some cases learning new techniques, etc, just before we're supposed to teach them to undergraduate students.”