Contents

Introduction

Methodology

Scope and focus of the literature review

Assessment

E-assessment

Assessment feedback

E-feedback (assessment feedback enhanced by technology)

Benefits of e-feedback

ICT tools for e-assessment

Diversity of ICT tools

GradeMark and Electronic Feedback: Exeter University

Intelligent Assessment Technology (IAT): Open University

WebPA: Loughborough University

Formative assessment and e-feedback

MCQ and EVS

Peer and self assessment and feedback

Web-based systems for self and peer assessment feedback

Peer feedback and performance

Digital feedback

HEA project

JISC Sounds Good Project, Bob Rotheram (2009)

E-Portfolios

Dissertation supervision

Discussion and Further research

Reference list

Introduction

When used effectively, Information and Communication Technology (ICT) can provide a unique learning environment that enhances the different aspects of teaching, learning and assessment. The application of technology in education is especially significant in the current environment, where the Higher Education sector must respond to the challenges of increased student numbers, limited public funding and demands from students who often regard themselves as customers requiring a high-quality learning experience.

Therefore, it is important to review the current literature on the use of technology in teaching and learning in order to share best practice and guide future research. This paper reports on selected aspects of the use of electronic tools in e-feedback (Assessment Feedback Enhanced by Technology).

This report extends the previous literature reviews by providing references to 109 sources, and covers the following themes: benefits of e-feedback, ICT tools for e-feedback, formative e-feedback (peer and self-directed learning), digital feedback, feedback in e-portfolios, feedback in dissertation supervision, and areas for further research.

Methodology

The literature review used a rapid evidence assessment approach (Slavin, 2003). This involves establishing criteria for the selection and inclusion of studies to be reviewed, followed by analysis and comparison of the included studies.

Several combinations of search terms (web-based + assessment + feedback; internet + assessment + feedback; feedback + assessment + technology; computer assisted + feedback; e-feedback + assessment) in major educational databases (ERIC, SCOPUS, British Library Integrated Catalogue, Dissertations & Theses) resulted in over 6000 titles. Searches were further limited by date (2000–2010). Using the abstracts as guides, the potential number of articles for initial review was reduced to 200, and then, following further reading and analysis, to 130 papers in total.

The main groups of articles excluded were studies relating to school education, general e-assessment, and technical discussions of technologies. Excluding these studies was felt to be appropriate given that the aim of the review was to focus on HE, technology and feedback, rather than on e-assessment in general.

The final sources were selected based on journal impact factor, reputation and relevance to the study. Detailed reading of the articles led to a further reduction to the 109 sources referenced in this paper, as the most relevant to this analysis.

This report starts with a brief overview of the literature on e-assessment and assessment feedback, to place the literature review on e-feedback in a wider context.

Scope and focus of the literature review

Assessment

Recently, we have observed a proliferation of research projects and papers on assessment in response to the current challenges in the HE sector: increasing student numbers, reduced resources, and the consumerism of HE, where students are more vocal regarding their learning experience.

Assessment is an essential part of the teaching and learning process as it ‘defines the actual curriculum’ (Ramsden, 1992, p.187), frames student learning, and determines ‘what students regard as important’ (Brown et al, 1994, p.7). Despite its significance, educators often fail to recognise or apply methods to improve the assessment process. As a result, students’ perception of the assessment process in Higher Education (HE), as expressed in the National Student Survey (NSS), is rather negative and seems to be ‘the Achilles’ heel of quality’ (Knight, 2002, p.107).

E-assessment

The importance of e-assessment, e-feedback and related issues is reflected in the recent publication of special issues by selected journals.

The special issue on Computer-assisted Assessment (CAA) in Assessment & Evaluation in Higher Education (2009) covers a wide range of topics including: the rationale for making CAA more inclusive for students with special needs (Ball, 2009); sophisticated e-assessment tasks addressing summative and formative assessment purposes (Boyle and Hutchinson, 2009); peer assessment (Davis, 2009); formative feedback enabling students to develop self-directing skills (Nicol, 2009); and the Framework Reference Model for Assessment (FREMA) (Wills, 2009).

The British Journal of Educational Technology also devoted a special issue in 2009 to ‘E-assessment: Developing new dialogues for the digital age’. In the editorial, Denise Whitelock (2009) argues that it is important to ‘construct a pedagogically driven model for e-assessment that can incorporate e-assessment and e-feedback into a holistic dialogic learning framework, which recognises the importance of students reflecting and taking control of their own learning’ (p.199). A number of papers highlight the challenges of e-assessment, including: the extra stress imposed on students taking CAA (Sieber, 2009); the question of the extent to which e-assessment enhances student learning (Angus and Watson, 2009); electronic voting systems used to promote deep learning (Draper, 2009a, 2009b); the enhancement of feedback in dissertation supervision (Heinze and Heinze, 2009); and e-portfolios promoting active engagement in student-centred learning groups (Barbera, 2009; Chang and Tseng, 2009).

The International Journal of Technology Enhanced Learning also announced a call for papers for a 2010 special issue on ‘Technology enhanced learning: personalisation strategies, tools and context design’.

Assessment feedback

The Assessment Standards Knowledge exchange centre (ASKe, 2008) argues that one of the key reasons for assessment failing to support learning is a lack of engagement and ineffective feedback, as ‘action without feedback is completely unproductive for the learner’ (Laurillard, 1993, p.61). The main challenges of assessment feedback are identified in the literature as: student engagement; limited time and institutional resources; quality and frequency of feedback; understanding and interpretation of feedback; accessibility and legibility of feedback; and the purpose of feedback (Price and O’Donovan, 2006; Handley et al, 2007; Nicol, 2007; Nicol and Macfarlane-Dick, 2006; McDowell et al, 2005; Millar, 2005; Winter and Dye, 2004; Bloxham and Boyd, 2007; Higgins et al, 2002).

Millar (2005) provides an extensive literature review on assessment feedback, outlining: conceptual models (Sadler, 1989, 1998; Rust, 2000); student preferences and approaches to feedback; feedback content and communication; staff perspectives; and principles of good feedback (Rust et al, 2003, 2005; Juwah et al, 2004; Gibbs and Simpson, 2004). The most recent principles, not included in the above review, are those proposed by Nicol (2007) in Box 1.

Box 1: Ten Principles of Good Assessment and Feedback Practice

Good assessment and feedback practices should:
  1. Help clarify what good performance is (goals, criteria, standards). To what extent do students in your course have opportunities to engage actively with goals, criteria and standards, before, during and after an assessment task?
  2. Encourage ‘time and effort’ on challenging learning tasks. To what extent do your assessment tasks encourage regular study in and out of class and deep rather than surface learning?
  3. Deliver high quality feedback information that helps learners self-correct. What kind of teacher feedback do you provide – in what ways does it help students self-assess and self-correct?
  4. Encourage positive motivational beliefs and self-esteem. To what extent do your assessments and feedback processes activate your students’ motivation to learn and be successful?
  5. Encourage interaction and dialogue around learning (peer and teacher-student). What opportunities are there for feedback dialogue (peer and/or tutor-student) around assessment tasks in your course?
  6. Facilitate the development of self-assessment and reflection in learning. To what extent are there formal opportunities for reflection, self-assessment or peer assessment in your course?
  7. Give learners choice in assessment – content and processes. To what extent do students have choice in the topics, methods, criteria, weighting and/or timing of learning and assessment tasks in your course?
  8. Involve students in decision-making about assessment policy and practice. To what extent are students in your course kept informed or engaged in consultations regarding assessment decisions?
  9. Support the development of learning communities. To what extent do your assessments and feedback processes help support the development of learning communities?
  10. Help teachers adapt teaching to student needs. To what extent do your assessment and feedback processes help inform and shape your teaching?

Source: Nicol, 2007

The above literature review can also be extended by Draper’s (2009b) recent work exploring an important question: what are learners regulating when given feedback? He points to the multiple, alternative interpretations of feedback events. For example, rational explanations of why students fail are set out in Box 2 (p.308):

Box 2: Interpretation of possible reasons for student failure

  1. Technical knowledge or method: I did not use the best information or method for the task, but can improve it and do better next time.
  2. Effort: I did not leave myself enough time to do it well. (Almost everything we do in life is time limited. If it is important enough, then putting more effort in will get a better result. On the other hand, everyone including students has limited time and must save time from some activities to invest in other ones).
  3. Method of learning about the task: I did not seek the right information to make a good job application; I did not test my paper on the right audience; I should change my revision method for this course; I should have discussed what the criteria really meant before writing the essay.
  4. Ability, trait, aptitude: This result tells me about relatively unchangeable traits. I should apply for a different kind of job, or change the course I am studying.
  5. Random: I did the right thing but the process is not deterministic. Another time I will succeed without changing what I do. If it rains when I go for a picnic at a beauty spot, it does not mean either that picnics are bad or that that spot is ugly; not every lottery ticket is a winner; not everyone I ask to fill in a questionnaire for me will agree to.
  6. The judgement process was wrong; I was right: Appeal the mark the tutor gave me; find the bug in the compiler, not my program; re-educate my readers; find a different audience.

Source: Draper, 2009b

Draper suggests that interpreting feedback in terms of a single variable will cause frustration about learning. Therefore, when giving feedback, effort should be made to address all of these different variables.

In 2009, ASKe established the ‘Osney Grange Group’ (ASKe, 2009), proposing that current feedback practices in HE are often founded on myths, misconceptions and mistaken assumptions that undermine student learning (see Box 3).

Box 3: The Osney Grange Group proposes the following agenda for change:

  1. It needs to be acknowledged that high level and complex learning is best developed when feedback is seen as a relational process that takes place over time, is dialogic, and is integral to learning and teaching.
  2. There needs to be recognition that valuable and effective feedback can come from varied sources, but if students do not learn to evaluate their own work they will remain completely dependent upon others. The abilities to self and peer-review are essential graduate attributes.
  3. To facilitate and reinforce these changes there must be a fundamental review of policy and practice to move the focus to feedback as a process rather than a product. Catalysts for change would include revision of resourcing models, quality assurance processes and course structures, together with development of staff and student pedagogic literacies.
  4. Widespread reconceptualisation of the role and purpose of feedback is only possible when stakeholders at all levels in Higher Education take responsibility for bringing about integrated change. In support of this reconceptualisation, use must be made of robust, research-informed guiding principles, and supporting materials.
  5. The Agenda for Change calls on stakeholders to take steps towards bringing about necessary changes in policy and practice.

Source: ASKe, 2009

E-feedback (assessment feedback enhanced by technology)

Since ‘student feedback is indeed an important element of e-assessment in that it can offer new forms of teaching and learning dialogues in the digital age’ (Whitelock, 2009, p. 202), this report reviews current developments in the area of e-Feedback.

A substantial part of the research in the area of e-feedback explores the application of particular technological advances in an educational context and evaluates pedagogical aspects of using technology in education.

Benefits of e-feedback

The report on Technology-enabled feedback (HEA, 2008) summarises the benefits of e-feedback, such as: the legibility of electronic feedback (van den Boom et al, 2004; Guardado and Shi, 2007; Tuzi, 2004); reduction in assignment turnaround time; efficiency in administration; and reduction in paper used (Price and Petre, 1997; Jones and Behrens, 2003; Bridge and Appleyard, 2005).

Other benefits have been recognised in a case study of a web-based course in primary care (Russell, Elton, Swinglehurst and Greenhalgh, 2006). The specific advantages include: the use of hyperlinks and attachments in virtual communication, which enables tutors and students to easily suggest additional relevant resources; copying others into a communication; joint feedback in specific areas; and a ‘senior common room’ forum for staff where feedback can be discussed, which facilitates team teaching and contributes to the quality of assessment feedback.

Another desirable outcome of using ICT tools in e-assessment is improved efficiency with regard to time and resources. The question is to what extent technology can address the most pressing challenge of quality feedback: time (Linn and Miller, 2005; Heinrich et al, 2009). Arguably, the area where e-tools can make a real impact on efficiency is administration (Heinrich et al, 2009, p.472):

‘providing documents, easily accessible to all involved, anytime and anyplace; accepting assignment submissions, managing deadlines, recording submission details, dealing with safe and secure storage; managing the distribution of assignments to markers and facilitating the communication within the marking team; returning marking sheets, commented-on assignments and marks to students; storing and if necessary exporting class lists of marks.’

Using e-tools for these tasks frees up time that can be used to focus on quality feedback. Participants in the above study saw benefits in: (a) using stock comments from a large bank, which could then be individualised; (b) providing feedback online, as it eliminates the problem of students not being able to read a lecturer’s handwriting and allows references to resources to be provided as links to articles and books; and (c) using electronic marking sheets returned to students by email.

The evaluation of the WebPA system (Loddington, 2009) also suggests numerous benefits for: (a) the institution (quality assurance, records stored centrally, flexibility and accessibility); (b) academic tutors (saving time and reducing workload, transparency, confidence that the process is fair, and a reduced number of complaints); and (c) students (timely feedback, the opportunity to reflect, and enhanced skills such as communication, teamwork, monitoring, and rewarding/penalising).

ICT tools for e-assessment

This section provides an overview of a variety of software and technologies useful for e-feedback, followed by examples of the application of particular systems by practitioners at individual universities.

Diversity of ICT tools

Grover (2008a, 2008b) argues that effective observation and diagnosis of student learning can be greatly assisted by 21st century technologies and lists five practical tools to help tutors measure student progress: clickers, online quizzes, web-based surveys, digital logs, and spreadsheets.

Other examples of technological advancements are discussed by Fisher and Baird (2006) who provide an overview of mLearning applications used to promote student engagement in teaching and assessment including: Virtual Graffiti, BuddyBuzz, Flickr, and RAMBLE. Quantitative data support their hypothesis that mLearning technologies can provide a platform for active learning, collaboration, and innovation in higher education.

On the other hand, Roland (2006) emphasises the role of technology in easing the teacher's burden and discusses online technology assessment tools such as: Certiport's Internet and Computing Core Certification; Thomson Learning's Skills Assessment Manager (SAM) Computer Concepts; and Learning.com's TechLiteracy Assessment (TLA), etc.

Vendlinski et al. (2008) describe a web-based assessment design tool, the Assessment Design and Delivery System (ADDS), that provides teachers with both a structure and the resources required to develop and use quality assessments.

Northcote (2002) summarises various software programmes developed to create online environments, such as WebCT, BlackBoard and TopClass, BrainZone (Strassburger, 1997), Question Mark Designer (Pritchett and Zakrzewski, 1996), WebTest (Doughty, 2000), and PsyCall (Buchanan, 1998).

GradeMark and Electronic Feedback: Exeter University

Jones (2007) evaluated the following ICT tools designed to provide feedback: (a) Electronic Feedback 13; (b) M2AGIC™; (c) GradeMark (an extension of the Turnitin UK plagiarism detection software). The study addressed students’ concerns with feedback – both its formative value and the promptness of its return.

Student feedback was positive: the quantity and quality of feedback were seen as better, as was students’ understanding of the feedback and where the mark came from. Students particularly liked that comments had been personalised and did not appear to be computer-generated. Students also liked peer evaluation, as they could compare their performance with the rest of the class.

The study established that all three tools have the potential to enable tutors to provide students with better quality, personalised feedback. However, for successful application of these tools, staff need to: be computer literate; allow time for familiarisation and preparation; and link marks with assessment criteria.

Intelligent Assessment Technology (IAT): Open University

Another interesting ICT tool is the Intelligent Assessment Technology (IAT) engine developed by the Open University (Jordan and Mitchell, 2009). IAT has been used to author and mark short free-text assessment tasks. The system was designed to provide students with ‘instantaneous feedback on constructed response items, to help them to monitor their progress and to encourage dialogue with their tutors’ (p. 371). The feedback is specifically tailored and detailed to allow students to improve their incorrect and incomplete responses, and consequently ‘close the gap’ between their current and desired performance (Sadler, 1989).