14 Rubric ‘Design Elements’

Each design element is listed with its definition, relevant references, and the way it is realised in the sample rubric that follows.

Specificity: the particular object of assessment
References: Tierney and Simon (2004): generic rubrics vs task-specific; Dornisch and McLoughlin (2006): challenges of using non-task-specific rubrics from the web; Timmerman, Strickland, Johnson, and Payne (2010): example of a rubric to assess 'scientific writing' in general
Sample rubric: Task-specific

Secrecy: who the rubric is shared with, and when it is shared
References: Torrance (2007): challenges of sharing criteria and different interpretations (not rubric-specific)
Sample rubric: Shared with task description

Exemplars: work samples provided to illustrate quality
References: Tierney and Simon (2004): argues for providing exemplars with rubrics
Sample rubric: One example of high-quality work was provided with a completed rubric

Scoring strategy: procedures used to arrive at marks and grades
References: Sadler (2009a): different types of scoring logic; Johnson, Penny, and Gordon (2000): score resolution when assessors disagree; Popham (1997): rubric definition mentions scoring strategies; Dimopoulos, Petropoulou, and Retalis (2013): use of computers in a scoring strategy
Sample rubric: Analytic. Cumulative scoring logic to arrive at broad grade. Faculty policy required double-marking of fails.

Evaluative criteria: overall attributes required of the student
References: Popham (1997): rubric definition mentions evaluative criteria
Sample rubric: Absent

Quality levels: the number and type of levels of quality
References: Sadler (2009b): mentions quality levels, noting that they need not be uniform across criteria; Fluckiger (2010): provides rationale for using just one quality level; Biggs and Tang (2007, p. 210): levels aligned with SOLO
Sample rubric: Five levels corresponding to grade descriptors

Quality definitions: explanations of attributes of different levels of quality
References: Popham (1997): rubric definition mentions quality definitions; Sadler (2009b): notes terminology is not uniform around quality descriptors and criteria; Tierney and Simon (2004): encourages consistency across levels
Sample rubric: Present but inconsistent attributes across performance levels

Judgement complexity: the evaluative expertise required of users of the rubric
References: Sadler (2009b): 'qualitative judgements' vs 'analytic judgements'; Dimopoulos et al. (2013): computers making judgements in 'learning analytics enriched rubrics'
Sample rubric: Moderate: mixture of analytic and qualitative judgements

Users and uses: who makes use of the rubric, and to what end
References: Nordrum, Evans, and Gustafsson (2013): teachers using rubrics to communicate feedback information; Panadero and Romero (2014) and Andrade and Du (2005): particular student uses of rubrics; Dimopoulos et al. (2013): computers as users of rubrics
Sample rubric: Teachers use for summative assessment; students use for planning and self-assessment; students use for formative peer assessment

Creators: the designers of the rubric
References: Andrade and Du (2005) and Boud and Soler (2015): rubrics co-created by students and teachers; Timmerman et al. (2010): researchers creating a rubric
Sample rubric: Teacher

Quality processes: approaches to ensure the reliability and validity of the rubric
References: Johnson et al. (2000): inter-rater reliability; Timmerman et al. (2010): example of rubric that has undergone reliability and validity testing
Sample rubric: No formal quality processes. Informal refinement based on student feedback and performance.

Accompanying feedback information: comments, annotation, or other notes on student performance
References: Nordrum et al. (2013): compared rubric-articulated feedback with in-text commentary
Sample rubric: In-class: rubric acts as a stimulus for peer feedback discussion. Summative marking: rubric accompanied by narrative from marker, and in-text comments.

Presentation: how the information in the rubric is displayed
References: e.g. Sadler (2009a): usual presentation is a grid, table or matrix of text; Google Images (2015): a range of examples of how rubrics are presented
Sample rubric: Paper-based table of text

Explanation: instructions or other additional information provided to users
References: Hafner and Hafner (2003): provided minimal instruction; Panadero and Romero (2014): more detailed instructions
Sample rubric: Minimal: 'Use this to self- and peer-assess. Submit a highlighted self-assessed copy.'
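
Documenting a rubric against all 14 elements lends itself to a structured, machine-readable record, in the spirit of Dimopoulos et al.'s computer-used rubrics. The Python sketch below is illustrative only: the RubricDescription class and its field names are hypothetical, not an established schema or tool, and the values simply restate the 'Sample rubric' column above.

from dataclasses import dataclass
from typing import List

@dataclass
class RubricDescription:
    """Hypothetical record of the 14 design elements for one rubric."""
    specificity: str
    secrecy: str
    exemplars: str
    scoring_strategy: str
    evaluative_criteria: str
    quality_levels: str
    quality_definitions: str
    judgement_complexity: str
    users_and_uses: List[str]
    creators: str
    quality_processes: str
    accompanying_feedback: str
    presentation: str
    explanation: str

# The sample rubric from the table above, described element by element.
sample_rubric = RubricDescription(
    specificity="Task-specific",
    secrecy="Shared with task description",
    exemplars="One example of high-quality work with a completed rubric",
    scoring_strategy="Analytic; cumulative scoring logic; double-marking of fails",
    evaluative_criteria="Absent",
    quality_levels="Five levels corresponding to grade descriptors",
    quality_definitions="Present but inconsistent across performance levels",
    judgement_complexity="Moderate: mixture of analytic and qualitative judgements",
    users_and_uses=[
        "Teachers: summative assessment",
        "Students: planning and self-assessment",
        "Students: formative peer assessment",
    ],
    creators="Teacher",
    quality_processes="No formal processes; informal refinement from student feedback",
    accompanying_feedback="Peer discussion stimulus in class; marker narrative and in-text comments",
    presentation="Paper-based table of text",
    explanation="Minimal instructions for self- and peer assessment",
)

A dictionary or a plain configuration file would serve equally well; the point is that each of the 14 elements is given an explicit value rather than left implicit.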

Sample rubric from a postgraduate education subject

Fail
- Insufficient data about teaching context to understand the rest of the assignment
- Parts of the assignment are missing
- Did not collect or source any feedback from students
- Major ethical problems with data collection
- Any evidence of academic integrity problems
- Very little use of research literature
- Substantial problems with clarity or referencing

Pass (none of the Fail items, and all of the following)
- Collected feedback from students
- Clear documentation of feedback collection process
- Unambiguous connection between feedback and plan for action
- Targeted use of >=4 scholarly sources

Credit (none of the Fail items, all of the Pass items, and most of the following)
- Evidence of peer review by a colleague
- Plan for action is well supported by scholarly literature
- Substantial use of scholarly literature, including >=6 sources
- No ethics problems
- Based on the weight of evidence (feedback, literature, argument), self/peer marker expects action plan will work
- If research from other education sectors (school/VET) is used, its applicability to higher education is considered carefully

Distinction (none of the Fail items, all of the Pass items, most of the Credit items, evidence of peer review of teaching, and most of the following)
- Three or more sources of feedback
- Mixture of qualitative and quantitative feedback sources
- Consideration of the ethics of feedback about teaching
- Connection to research literature about feedback
- Synthesis of literature and feedback into connected themes
- Some critique of feedback instruments or the role of feedback in evaluating teaching and learning
- Use of existing feedback instruments, or clear justification for the development of new instruments

High Distinction (none of the Fail items, all of the Pass items, all of the Credit items, most of the Distinction items, and most of the following)
- Four or more sources of feedback
- Feedback provides input on learning, teaching, and the student experience
- Particularly rigorous analysis methods
- Connection to the research literature about ethics in feedback about teaching
- Writing, methods and scholarship of similar quality to a higher education refereed conference paper

*Note that this isn’t necessarily a good rubric, but it’s based on a real rubric I designed years ago.
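
To make the 'cumulative scoring logic' concrete, the following minimal Python sketch shows one way the grade decision described in the level headings could be computed. It assumes that 'most' means strictly more than half of the items at a level, that any Fail item caps the grade at Fail, and that the Distinction requirement for 'evidence of peer review of teaching' is met by the peer-review item at the Credit level; these readings, and the function itself, are illustrative rather than the marking procedure actually used.

def most(satisfied, items):
    """One reading of 'most': strictly more than half of the items are satisfied."""
    return len(satisfied & items) > len(items) / 2


def grade(satisfied, fail_items, pass_items, credit_items, distinction_items, hd_items,
          peer_review_item="Evidence of peer review by a colleague"):
    """Hypothetical sketch of the cumulative scoring logic in the sample rubric."""
    satisfied = set(satisfied)
    fail_items, pass_items = set(fail_items), set(pass_items)
    credit_items, distinction_items, hd_items = (
        set(credit_items), set(distinction_items), set(hd_items))

    if satisfied & fail_items:                 # any Fail item caps the grade
        return "Fail"
    if not pass_items <= satisfied:            # Pass requires every Pass item
        return "Fail"
    if not most(satisfied, credit_items):      # Credit: most Credit items
        return "Pass"
    if peer_review_item not in satisfied or not most(satisfied, distinction_items):
        return "Credit"                        # Distinction: peer review + most Distinction items
    if not (credit_items <= satisfied and most(satisfied, hd_items)):
        return "Distinction"                   # High Distinction: all Credit + most HD items
    return "High Distinction"

In practice, the judgement about whether each item is satisfied remains qualitative; the sketch only captures how those judgements are combined into a grade.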

References and further reading

Andrade, H., & Du, Y. (2005). Student perspectives on rubric-referenced assessment. Practical Assessment, Research & Evaluation, 10(3), 1-11.

Biggs, J., & Tang, C. (2007). Teaching for quality learning at university. Maidenhead, Berkshire, England: Open University Press.

Boud, D., & Soler, R. (2015). Sustainable assessment revisited. Assessment & Evaluation in Higher Education. doi:10.1080/02602938.2015.1018133

Dimopoulos, I., Petropoulou, O., & Retalis, S. (2013). Assessing students' performance using the learning analytics enriched rubrics. Paper presented at the Third International Conference on Learning Analytics and Knowledge, Leuven, Belgium.

Dornisch, M. M., & McLoughlin, A. S. (2006). Limitations of web-based rubric resources: Addressing the challenges. Practical Assessment, Research & Evaluation, 11(3), 1-8.

Fluckiger, J. (2010). Single point rubric: a tool for responsible student self-assessment. Delta Kappa Gamma Bulletin, 76(4), 18-25.

Google Images. (2015). assessment rubric - Google Search. Retrieved from https://www.google.com.au/search?q=assessment+rubric&espv=2&biw=1280&bih=1255&source=lnms&tbm=isch&sa=X&ved=0CAYQ_AUoAWoVChMI__mRkJKMxwIVxWGmCh010ARN

Hafner, J., & Hafner, P. (2003). Quantitative analysis of the rubric as an assessment tool: an empirical study of student peer‐group rating. International Journal of Science Education, 25(12), 1509-1528. doi:10.1080/0950069022000038268

Johnson, R. L., Penny, J., & Gordon, B. (2000). The relation between score resolution methods and interrater reliability: An empirical study of an analytic scoring rubric. Applied Measurement in Education, 13(2), 121-138. doi:10.1207/S15324818AME1302_1

Nordrum, L., Evans, K., & Gustafsson, M. (2013). Comparing student learning experiences of in-text commentary and rubric-articulated feedback: strategies for formative assessment. Assessment & Evaluation in Higher Education, 38(8), 919-940. doi:10.1080/02602938.2012.758229

Panadero, E., & Romero, M. (2014). To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. Assessment in Education: Principles, Policy & Practice, 21(2), 133-148. doi:10.1080/0969594X.2013.877872

Popham, W. J. (1997). What's wrong - and what's right - with rubrics. Educational Leadership(2), 72-75.

Sadler, D. R. (2009a). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159-179. doi:10.1080/02602930801956059

Sadler, D. R. (2009b). Transforming holistic assessment and grading into a vehicle for complex learning. In G. Joughin (Ed.), Assessment, learning and judgement in higher education (pp. 1-19). Springer Netherlands.

Tierney, R., & Simon, M. (2004). What's still wrong with rubrics: focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2), 1-10.

Timmerman, B. E. C., Strickland, D. C., Johnson, R. L., & Payne, J. R. (2010). Development of a ‘universal’ rubric for assessing undergraduates' scientific reasoning skills using scientific writing. Assessment & Evaluation in Higher Education, 36(5), 509-547. doi:10.1080/02602930903540991

Torrance, H. (2007). Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post‐secondary education and training can come to dominate learning. Assessment in Education: Principles, Policy & Practice, 14(3), 281-294. doi:10.1080/09695940701591867

This is an extract from an Author's Original Manuscript of an article whose final and definitive form, the Version of Record, has been published in Assessment & Evaluation in Higher Education, 2015 copyright Taylor & Francis:

Dawson, P. (2015). Assessment rubrics: towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education. doi:10.1080/02602938.2015.1111294. http://www.tandfonline.com/doi/full/10.1080/02602938.2015.1111294

Contact

Phillip Dawson, Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University

Email: ; Twitter: @phillipdawson; Web: http://philldawson.com