First-Year Students’ Appraisal of Assessment Tasks: Implications for efficacy, engagement and performance.

Alf Lizzio and Keithia Wilson

Griffith University

Corresponding Author: Professor Alf Lizzio, Griffith Institute for Higher Education, Griffith University, Queensland, Australia. Email:

Acknowledgement: This study was funded by a grant from the Australian Learning and Teaching Council.

Assessment & Evaluation in Higher Education, Volume 38, Issue 4, 2013

Abstract

This study investigated students’ appraisals of assessment tasks and the impact of these appraisals on their task-related efficacy and engagement, and on their subsequent task performance. Two hundred and fifty-seven first-year students rated their experience of an assessment task (essay, oral presentation, laboratory report or exam) that they had previously completed. First-year students evaluated these assessment tasks in terms of two general factors: the motivational value of the task and its manageability. Students’ evaluations were consistent across a range of student characteristics and levels of academic achievement. Students’ evaluations of motivational value generally predicted their engagement, and their evaluations of task manageability generally predicted their sense of task efficacy. Engagement was a significant predictor of task performance (viz., actual mark) for exam and laboratory report tasks but not for essay-based tasks. Findings are discussed in terms of the implications for assessment design and management.

This study seeks to understand the factors that students use to appraise or evaluate an assessment task and the consequences of their perceptions for their subsequent motivation and performance. An evidence-based understanding of the processes that influence students’ engagement with assessment is particularly important for informing our educational practice with first-year or commencing students, who are relatively unfamiliar with the culture and context of university-level assessment.

While the way in which students approach learning may, to some extent, be a function of their personal disposition or abilities, the nature of the learning task itself and the environment in which it is undertaken also significantly mediate their learning strategy (Fox, McManus, & Winder, 2001). More accurately, it is students’ perceptions, rather than any objective features of tasks, that are crucial in shaping the depth of their engagement. In this sense, students’ learning approaches (process factors) and academic performance (product factors) are influenced by their appraisal of, and interaction with, the curriculum content, design and culture of their current ‘learning system’ (presage factors) (Biggs, 2003). Central to this process is a significant body of research indicating students’ perceptions of the methods, modes and quantity of assessment to be perhaps one of the most important influences on their approaches to learning (Entwistle & Entwistle, 1991; Trigwell & Prosser, 1991; Ramsden, 1992; Lizzio, Wilson & Simons, 2002). Indeed, it has been argued that students’ perceptions of assessment tasks ‘frame the curriculum’ and are more influential than any intended design elements (Entwistle, 1991), to the extent of potentially overpowering other features of the learning environment (Boud, 1990).

The appreciation that assessment functions not only to grade students but also, fundamentally, to facilitate their learning is central to the paradigm evolution from a traditional summative ‘testing culture’ to an integrated ‘assessment culture’ (Birenbaum, 1997). Understanding how students appraise or construct their learning is the foundation of design frameworks such as integrated teaching (Wehlburg, 2010) and constructive alignment (Biggs, 1996a), which view students’ engagement and performance with assessment as a complex interaction between learner, task and context. From this perspective, the consequential validity of an assessment task or mode (viz., its positive influence on students’ learning approaches and outcomes) is a key consideration (Dierick & Dochy, 2001). The importance of students perceiving course objectives and pedagogy to be congruent, besides satisfying the test of ‘common sense’, has also received empirical support. For example, a curriculum that emphasises the acquisition of knowledge combined with a concurrent assessment package that emphasises problem solving has been found to contribute to both sub-optimal learning and student resentment (Segers, Dierick, & Dochy, 2001). The implications of these findings for educational practice are quite fundamental. A foundational educational discipline would appear to be the need to distinguish between our educational intentions (however worthy) and their impact on students. This requires us to more closely examine the ‘hidden curriculum’ (Snyder, 1971) of our assessment practices. If we want to understand and evaluate our learning environments, we need to authentically understand how students experience them. Thus, if our aspiration is to influence students towards deeper learning and higher-order skill development, then a prerequisite task is to appreciate students’ perceptions of the assessment tasks with which we ask them to engage.

Students’ Perceptions of Assessment

Students’ preferences for different modes of assessment (Birenbaum, 2007) and the effects of different assessment methods (e.g., multiple choice formats, essays) on whether students adopt deep or surface approaches to study (Scouller, 1997; Trigwell & Prosser, 1991; Thompson & Falchikov, 1998) have been well established. It is interesting to note that while there may be clear differences in the ways that students with different study strategies approach assessment, there are also significant commonalities in their criticisms of the processes of some traditional assessment practices (Lindblom-Ylanne & Lonka, 2001). A series of qualitative studies have investigated students’ perceptions of the assessment characteristics that they report as positively influencing their learning and engagement. McDowell (1995) found that perceived fairness was particularly salient to students’ perceptions of both the substantive validity and procedural aspects of an assessment task. Sambell, McDowell and Brown (1997) extended this early work and identified the educational values (authentic tasks, perceived to have long-term benefits, applying knowledge), educational processes (reasonable demands, encourages independence by making expectations clear), and consequences (rewards breadth and depth in learning, rewards effort) of assessment that influence students’ perceptions of its validity. More recently, Savin-Baden (2004) examined students’ experience of assessment in problem-based learning programs, identified two meta-themes in students’ comments, and conceptualised these as forms of student disempowerment: unrewarded learning (including the relationship between the quantity of work and its percentage weighting) and disabling assessment mechanisms, including both processes (e.g., lack of information and inadequate feedback) and forms (assessment methods that did not fit with espoused forms of learning). Further empirical support for the centrality of notions of empowerment and fairness to students comes from studies of social justice processes in education. For example, Lizzio, Wilson and Hadaway (2007) found that students’ perceptions of the fairness of their learning environments were strongly influenced both by the extent to which they felt personally respected by academic staff and by the adequacy of the informational and support systems provided for them to ‘do their job’, of which assessment was a core component.

What these investigations share is an appreciation of the value of a situated and systemic investigation into the student experience. The focal question is not just ‘what type of assessment’ but ‘what type of assessment system’ students are experiencing, on both cognitive and affective levels. Thus, there is a balanced concern with the impact of both assessment content and assessment process (Gielen, Dochy & Dierick, 2003) on student learning and satisfaction. Hounsell, McCune and Hounsell (2008) have extended this guiding notion of ‘assessment context’ by operationalising the prototypical stages of an assessment lifecycle and identifying students’ needs and concerns as they engage with, and seek to perform, assessment tasks (viz., their prior experiences with similar assessment, their understanding of preliminary guidance, their need for ongoing clarification, feedback, supplementary support and feed-forward to subsequent tasks). Hounsell et al., in particular, demonstrated that the perceived adequacy of guidance and feedback as students attempted tasks was central to their success. Lizzio and Wilson (2010) utilised Hounsell et al.’s (2008) framework to investigate first-year students’ appraisal of a range of assessment tasks using focus groups and individual interviews. These students evaluated their early university assessment tasks in terms of seven dimensions (familiarity with the type of assessment, type and level of demand/required effort, academic stakes, level of interest or motivation, felt capacity or capability to perform the task, perceived fairness and the level of available support). The dimensions identified in this study confirm a number of the themes (e.g., demand, motivation, fairness and support) commonly identified in previous investigations of students’ perceptions of assessment.

Parallel to this line of inquiry has been the development of a number of good practice frameworks to guide the design of assessment protocols. For example, Gibbs and Simpson (2004) identified eleven conditions under which assessment supports students’ learning, and developed the Assessment Experience Questionnaire (AEQ) as a measure of the extent to which these conditions (viz., good distribution of time demands and student effort, engagement in learning, appropriateness of feedback, students’ use of feedback) were evident in a particular learning context. More recently, Boud and associates (2010) developed a set of seven propositions for assessment reform in higher education. The propositions address the centrality of assessment to the learning process (assessment for learning placed at the centre of subject and program design), questions of academic standards (the need for assessment to be an inclusive and trustworthy representation of student achievement), and the cultural (students are inducted into the assessment practices and cultures of higher education) and relational (students and teachers become responsible partners in learning and assessment) dimensions of assessment practice that can serve to help or hinder student engagement and learning. The conceptualisation of ‘assessment systems’ and the empirical validation of underlying assumptions from a student’s perspective provide a useful basis for guiding change processes in higher education. Thus, for example, Gibbs and Simpson’s framework has been employed to provide the evidence base in collaborative action research projects to improve assessment practices (McDowell et al., 2008). Similarly, Carless (2007) reports the use of a learning-oriented assessment framework to inform action learning at both the institutional and course level.

Broader Research Traditions

What can we learn about good assessment practice from broader research traditions? There is considerable convergent support from studies of psychological needs, cognitive load and well-being for the importance of both appreciating students’ key role in constructing the meaning of, or ‘making sense’ of, their experiences with assessment, and actively incorporating this understanding into design and management processes.

Students’ psychological needs may be particularly influential on their approach to assessment tasks. Cognitive evaluation theory proposes that people appraise events in terms of the extent to which their needs to feel competent and in control will be met (Deci & Ryan, 2002). From this perspective, events or tasks that positively influence students’ perceived sense of competence will enhance their motivation. Legault, Green-Demers and Pelletier (2006) identified four dimensions of academic motivation (students’ ability beliefs, effort beliefs, the characteristics of the task and the value placed on the task) and found that in a particular context these may interact to produce either high levels of motivation or feelings of general helplessness. Clearly, a student’s appraisal of an assessment task, and of what it may demand or require from them in terms of effort and ability, appears to be an important consideration in how they engage with (or detach from) the task and approach its performance. This of course raises the question: what are the characteristics of assessment design that optimally support a student’s sense of competence and autonomy? Certainly, how the assessment process is managed may be salient to students’ efficacy and approach, as constructive task-related interpersonal support has been found to encourage self-determined motivation (Hardre & Reeve, 2003). Given that students look to information provided by both the task and their teachers to affirm their academic capability, appropriately matching task demands to student capabilities may be a key design consideration.

Cognitive load theory also provides insights into the processes that are salient to students’ experience of assessment tasks, in terms of both their efficiency and effectiveness and their appropriateness to learners’ levels of expertise. From this perspective, it is not just students’ performance outcome on an assessment (viz., their grade) that is of interest, but also the degree of cognitive effort required. Students’ cognitive load typically derives from two sources: the intrinsic load of a learning task (usually the level of interactivity or complexity of its constituent elements) and any extraneous load placed on the student as a result of its management (e.g., poor timing and clarity of scaffolding information to learners, and mismatched instructional procedures) (Kalyuga, 2011). Importantly, since load is additive, assessment tasks with higher intrinsic load are more likely to be negatively affected by process factors which increase extraneous load (Paas, Renkl & Sweller, 2003). The question of ‘managing cognitive load’ may be particularly important for novice learners who lack the working schemas to integrate new knowledge. Tasks that require students to find or construct essential information for themselves (unguided or minimal guidance) have been found to result in less direct learning and knowledge transfer than tasks where scaffolded guidance is provided (Kirschner, Sweller & Clark, 2006).

The goal of protecting or enhancing student well-being may also be relevant to our management of the assessment process. University students generally report lower levels of well-being than the general population, with first year being a time of heightened anxiety (Cooke, Bewick, Barkham, Bradley & Audin, 2006). Given the identified vulnerabilities of this population, there may be significant ethical and mental health dimensions to good assessment practice. Work stress and well-being can be influenced by the psychosocial system within which a person functions, and effective person-system interaction is commonly conceptualised as a balance among reasonable job or task demands, necessary support, opportunities for control and a positive working environment (Kanji & Chopra, 2009). Whether high demands result in positive activation or strain will depend on the presence of appropriate support and felt control (Barnes & Van Dyne, 2009; Karasek, 1979). These findings can be readily generalised to the leadership of learning environments, with implications for both the design of assessment tasks and the support of students through the performance process.

First-Year Students

The present study is particularly concerned with first-year or commencing students’ experiences of assessment. Early academic experiences have been identified as critical to the formation of tentative learner identities and self-efficacy (Christie et al., 2008). Clearly, ‘assessment backwash’ (Biggs, 1996), whereby badly designed or poorly organised assessment can unintentionally impair students’ learning, is more likely with a commencing student population. Indeed, poorly matched and managed assessment is arguably a major contributor to the phenomenon of premature ‘student burnout’ (viz., feelings of exhaustion and incompetence and a sense of cynicism and detachment) (Schaufeli et al., 2002) and disengagement. From a positive perspective, first-year learning environments potentially provide our greatest opportunities not only to align our educational intentions with their impact, but also to work collaboratively with new students to develop an evidence-based culture of success. An empirically supported understanding of the design and process elements of our ‘assessment systems’ can potentially contribute to the important challenges of student engagement and retention.