Assess and give feedback to learners
Appropriate methods for teaching, learning and assessing in the subject area and at the level of the academic programme
Peer marking is a form of assessment in which students are asked to evaluate the work of their peers. It can be used as either a formative or summative assessment tool in higher education, depending on the goals of the instructor.
As a formative assessment tool, peer marking can help students identify their strengths and weaknesses and receive feedback on their work. This can be particularly helpful for students who are struggling with a concept or skill, as it gives them feedback from peers who may have a different perspective on the material. By reviewing the work of their peers, students can also learn from others' successes and mistakes and apply this knowledge to their own work.
As a summative assessment tool, peer marking can be used to evaluate the final product of a student's work. This can be helpful for instructors who want to assess the ability of their students to apply their knowledge and skills to a particular task. By having students review and evaluate the work of their peers, instructors can get a sense of how well students are able to critically analyze and evaluate the quality of others' work.
Peer assessment involves students evaluating their peers, as well as being evaluated by their peers, on formative and/or summative tasks. Students who participate in peer assessment tend to perform better than those who do not: peer marking has a small but significant positive effect on student learning. Importantly, peers' and teachers' ratings of students are comparable, and training students in how to mark is central to the success of this approach.
There are a number of ways to structure peer marking to ensure it is successful.
We have mixed confidence in some of the findings, but overall there is 'good' quality evidence on this topic. Three meta-analyses inform this evidence summary. The two by Li and colleagues (2016; 2020) are not high quality: both fail to report risk of bias, publication bias, or the reliability of the findings. However, the Double et al. (2020) meta-analysis is very well constructed. The only quality concern for this paper is its high heterogeneity, which suggests that other, as yet unexplored, variables may be contributing to the main effect.
Double, K. S., McGrane, J. A., & Hopfenbeck, T. N. (2020). The impact of peer assessment on academic performance: A meta-analysis of control group studies. Educational Psychology Review, 32, 481-509. doi: 10.1007/s10648-019-09510-3
Li, H., Xiong, Y., Hunter, C. V., Guo, X., & Tywoniw, R. (2020). Does peer assessment promote student learning? A meta-analysis. Assessment & Evaluation in Higher Education, 45(2), 193-211. doi: 10.1080/02602938.2019.1620679
Li, H., Xiong, Y., Zang, X., Kornhaber, M. L., Lyu, Y., Chung, K. S., & Suen, H. K. (2016). Peer assessment in the digital age: A meta-analysis comparing peer and teacher ratings. Assessment & Evaluation in Higher Education, 41(2), 245-264. doi: 10.1080/02602938.2014.999746
https://www.teaching.unsw.edu.au/peer-assessment
https://isit.arts.ubc.ca/ideas-and-strategies-for-peer-assessments/