Assessment as a tool for learning

Maureen Morris

University of Wollongong

Anne Porter

University of Wollongong

David Griffiths

University of Wollongong

Abstract: Assessment should provide a catalyst for student learning and for reflective teaching practices. Fundamental to the development of appropriate assessment is a direct link between what is being “taught” and what is being “learned”. Both teacher and student must be able to identify this link. The teacher needs to ensure the relevance and validity of the task in terms of both the instructional process and subject objectives. Student learning results in the construction of new knowledge through a process that assembles personally identified content and skills. Subsequent assessment may be facilitated by classification of learning outcomes.

The use of learning taxonomies for instructional design has long been explored in high schools, but there exists little documentation of their use as tools for devising assessment that aims to promote learning. This study is an exploration of such use in an introductory statistics course at the University of Wollongong.

Bloom’s Taxonomy of Educational Objectives aids in the definition and classification of expected learning in terms of observable student behaviours. Its categories describe these behaviours in terms of knowledge and a set of hierarchical skills. More recent taxonomies have separated knowledge from cognitive processing skills to generate a two-dimensional cross-classification of knowledge and skills. Through the application of one such taxonomy, task objectives and questions have been deconstructed in terms of the requisite knowledge and skills. They have also been aligned with instruction and used to design marking frameworks that organise students’ responses and provide patterns for recording student achievement.

Keywords: assessment, teaching, learning, objectives, alignment, instruction.

The aims of higher education are primarily to promote the growth and development of individuals through active engagement in the intellectual challenge of an academic discipline. Concurrently, it should encourage the acquisition of appropriate ethical, community and professional standards and an appreciation of global diversity (University of Wollongong 2003). If teaching and learning at such institutions are to reflect these lofty aims, subjects and their assessment processes should be developed to this end. An understanding of the diversity of our students needs to be built into these processes, in a cultural sense and also in terms of their ‘preparedness’ for our courses.

If we are to maximize their potential, our teaching and assessment should aim to offer ‘equal opportunity’ to all students, irrespective of their backgrounds or abilities. This can be achieved if the focus is upon providing subjects that aim at developing skills (in particular, higher order skills) through the medium of their content, and through assessment in a current and multicultural context. Once the focus shifts from content to skill there is a greater urgency to ensure alignment of instruction with the intended outcomes that we wish to observe in our students (Avery 1999; Davies and Wavering 1999; Evans 1999). Subsequent assessment of these desired outcomes requires definition of the types of behaviours in our students that demonstrate acquisition of these skills.

The purposes of assessment are manifold (Ruff 1996). From the teaching perspective, it returns feedback on the success or failure of our teaching objectives. It can provide a focus for teaching strategies by defining the learning ‘behaviours’ we wish to observe (Davies and Wavering 1999; Evans 1999). Appropriately designed assessment also affords accountability for recognition of student achievement (Black 1998). From a student’s perspective, assessment forms a focal point for learning. It can provide a clear definition of the content and skills they are required to demonstrate, and can supply ‘organisational models’ for their learning through a statement of objectives and/or provision of scoring rubrics (Montgomery 2002). It also supplies feedback to students on deficits in learning, and has the potential to redirect or extend students appropriately (Hattie and Jaeger 1998).

The key to ensuring the success of assessment in fulfilling all these purposes is alignment. There needs to be clear definition of the objectives of teaching and of the intended learning outcomes (Evans 1999). These objectives and outcomes must align with those assessed, and students need to be aware of them in order to focus on their achievement. Under regulations governing assessment in NSW high schools, such focus is currently provided to senior students. While our universities have often invoked similar regulatory guidelines, close matching of subject objectives, assessment task requirements and desired outcomes does not commonly occur. Nor is it common for students to be made aware of such detailed matching of their expected learning behaviours. Consideration therefore needs to be given to the mechanisms that can facilitate alignment of all aspects of our instruction.

Bloom’s Taxonomy of Educational Objectives aids in the definition and classification of expected learning in terms of observable student behaviours. Its categories describe these behaviours in terms of types of knowledge and a set of hierarchical cognitive processing skills (Bloom 1974). More recent taxonomies have separated the types of knowledge from the cognitive processing skills to generate a two-dimensional cross-classification (Anderson, Krathwohl et al. 2001; see Table 1). To ensure alignment, instructional objectives and those of assessment can then be analysed in terms of a knowledge/skills matrix, with numbered objectives or question numbers from the assessment recorded in the appropriate cells. The representation of the much desired higher order knowledge and skills in the matrix can then be identified at a glance.

Table 1. Two-Dimensional Cross-Classification of Types of Knowledge by Cognitive Processing Skill

                         Cognitive Process Dimension
Knowledge Dimension      Remember   Understand   Apply   Analyse   Evaluate   Create
Factual
Conceptual
Procedural
Meta-cognitive

(Anderson, Krathwohl et al. 2001)
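
As an illustration of how such a matrix might be populated, the following sketch records assessment items against knowledge type and cognitive process and prints the resulting grid so that empty or under-represented cells are visible at a glance. The question numbers and their classifications here are entirely hypothetical, not the tasks analysed in this study.

```python
from collections import defaultdict

KNOWLEDGE = ["Factual", "Conceptual", "Procedural", "Meta-cognitive"]
PROCESSES = ["Remember", "Understand", "Apply", "Analyse", "Evaluate", "Create"]

# Hypothetical classification of assessment items: item -> (knowledge, process).
items = {
    "Q1a": ("Factual", "Remember"),
    "Q1b": ("Conceptual", "Understand"),
    "Q2": ("Procedural", "Apply"),
    "Q3": ("Conceptual", "Analyse"),
    "Q4": ("Meta-cognitive", "Evaluate"),
}

# Record each item label in the appropriate cell of the matrix.
matrix = defaultdict(list)
for item, (knowledge, process) in items.items():
    matrix[(knowledge, process)].append(item)

# Print the matrix row by row; blank cells show where coverage is missing.
print("Knowledge".ljust(16) + "".join(p.ljust(12) for p in PROCESSES))
for k in KNOWLEDGE:
    row = k.ljust(16)
    for p in PROCESSES:
        row += ", ".join(matrix[(k, p)]).ljust(12)
    print(row)
```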

To facilitate analysis using the taxonomy, sub-types of knowledge and sub-classifications of skills can be defined in terms of behavioural vocabulary and examples (Anderson, Krathwohl et al. 2001). The process can be tedious, but careful alignment should enable near-optimal responses from students. To minimize misinterpretation, tasks should always be aligned with the language used in all instructional material available to students.
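
The behavioural vocabulary can also support a rough first pass at classification. The sketch below assumes a small, hypothetical verb list for each cognitive process (the published taxonomy provides far fuller definitions and examples) and merely flags candidate processes for a task’s wording; it is an aid to, not a substitute for, the judgement of the classifier.

```python
# Assumed, abbreviated verb lists; not the taxonomy's full definitions.
PROCESS_VERBS = {
    "Remember": {"define", "list", "recall", "state"},
    "Understand": {"explain", "summarise", "interpret", "classify"},
    "Apply": {"calculate", "use", "implement", "solve"},
    "Analyse": {"compare", "organise", "differentiate"},
    "Evaluate": {"critique", "judge", "justify"},
    "Create": {"design", "construct", "formulate"},
}

def suggest_processes(task_wording: str) -> list:
    """Return cognitive processes whose verbs appear in the task wording."""
    words = set(task_wording.lower().split())
    return [p for p, verbs in PROCESS_VERBS.items() if words & verbs]

# Flags candidates for human review rather than classifying automatically.
print(suggest_processes("Calculate the mean and explain what it tells you"))
# -> ['Understand', 'Apply']
```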

In common with other demanding ‘service’ subjects, pass rates and satisfaction ratings for introductory statistics subjects at university tend to be lower than desired. Students often cite complexity and perceived irrelevance as contributory causes. This study has been undertaken in an introductory course in statistics at the University of Wollongong to determine whether assessment can be used as a driver to improve both teaching and learning. The tasks were chosen to reflect the interests and areas of study of the enrolled students. The aim has been to enable students to focus on desired learning outcomes in their responses through the use of clearly defined objectives for each assessment task, alignment of the task questions with the task objectives and instruction (both strategies and discipline content), and the provision of a scoring rubric. Marks were not shown on the rubric, to prevent students from ascribing levels of importance to the knowledge/skills commensurate with their relative weightings; if all skills are considered important, displaying the weightings might have generated misconceptions and unequal effort across the task.

The marking criteria have been designed so that student scores can be recorded against skills. These scores can therefore also be aggregated in the appropriate cells of the matrix. This affords opportunities to compare the proportional representation of knowledge/skill cross-classifications across tasks. Such comparative analysis could be done either for cohorts of students or across semesters as an evaluative process for teaching. While an aggregate score has little value in this context, for the purposes of a final mark (and grade) an aggregate of each student’s scores was formed.
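
A brief sketch of this comparative use follows, again with hypothetical marks rather than data from the study: aggregated marks in each knowledge/skill cell are converted to proportions of the task total, so that the profile of one task, cohort or semester can be set against another.

```python
def cell_proportions(scores):
    """scores maps (knowledge, process) cells to total marks awarded in a task."""
    total = sum(scores.values())
    return {cell: marks / total for cell, marks in scores.items()} if total else {}

# Hypothetical aggregated marks per cell for two assessment tasks.
task1 = {("Procedural", "Apply"): 40, ("Conceptual", "Understand"): 35,
         ("Conceptual", "Analyse"): 25}
task2 = {("Procedural", "Apply"): 20, ("Conceptual", "Analyse"): 45,
         ("Conceptual", "Evaluate"): 35}

# Compare the proportional weight each cell receives in the two tasks.
p1, p2 = cell_proportions(task1), cell_proportions(task2)
for cell in sorted(set(p1) | set(p2)):
    print(cell, f"task 1: {p1.get(cell, 0):.0%}", f"task 2: {p2.get(cell, 0):.0%}")
```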

Students’ attitudes to all aspects have been surveyed through structured responses in an online evaluation survey for the subject. Students from all levels of achievement have also been canvassed for participation in a less structured individual interview concerning their perceptions of the subject and their learning. Markers and tutors are also being surveyed as to their perceptions of student learning.

The processes of task analysis have been undertaken throughout the semester, and this has proved a time-consuming exercise in a period where little time can be spared. However, if the assessment for the subject were designed in advance, it should promote greater alignment of instruction and task. It would also facilitate a clear re-statement of subject objectives (rather than cosmetic reconstruction of those inherited) and lessen the pressure to allow pedagogy to be dictated by content descriptions.

The use of the revised taxonomy (Anderson, Krathwohl et al. 2001) has proved not only time consuming but also less reliable than desired: classification of the desired knowledge/skills has been found to vary between individuals. A clearer definition of desired outcomes, in terms of the knowledge and skills of ‘statistical experts’, might facilitate the development of an equivalent taxonomy appropriate to that discipline. In turn, this should enhance the reliability of the classifications in the analyses. To achieve this end, it would be necessary to deconstruct the knowledge and skills of those experts.

Anderson, L. W., D. R. Krathwohl, et al., Eds. (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York, Addison Wesley Longman Inc.

Avery, P. G. (1999). "Authentic assessment and instruction." Social Education 63(6): 368-373.

Black, P. J. (1998). Testing: Friend or Foe? Theory and Practice of Assessment and Testing. London, Falmer Press.

Bloom, B. S., Ed. (1974). Taxonomy of Educational Objectives, Handbook 1: Cognitive Domain. New York, David McKay Company Inc.

Davies, M. A. and M. Wavering (1999). "Alternative Assessment: New directions in teaching and learning." Contemporary Education 71(1): 39-.

Evans, C. (1999). "Improving test practices to require and evaluate higher levels of thinking." Education 119(4): 616-618.

Hattie, J. and R. Jaeger (1998). "Assessment and classroom learning: A deductive approach." Assessment in Education 5(1): 111-122.

Montgomery, K. (2002). "Authentic tasks and rubrics: Going beyond traditional assessments in college teaching." College Teaching 50(1): 34-.

Ruff, D. J. (1996). "Thinking in terms of feedback." New Schools, New Communities 12(2): 21-.

University of Wollongong (2003). Strategic Plan 2002-2005 [online]. University of Wollongong. Available from: http://www.uow.edu.au/about/stratplan/2002/