Judging behaviour and rater errors: an application of the Many-facet Rasch Model

Bibliographic Details
Main Author: Abu Kassim, Noor Lide
Format: Article
Language: English
Published: Universiti Kebangsaan Malaysia (UKM) 2011
Online Access: http://irep.iium.edu.my/13789/
http://irep.iium.edu.my/13789/7/pp179_197.pdf
Description
Summary: Of the potential sources of construct-irrelevant variance or unwanted variability in performance assessment, those associated with raters have been found to be extensive, difficult to control, and impossible to eliminate. Because rater-related errors are non-trivial and threaten the validity of test results, they must be accounted for and controlled in some way. This paper explains the different types of rater errors and illustrates how they can be identified using the Many-facet Rasch Model, as implemented by FACETS. It also demonstrates what these errors mean in terms of actual judging or rating behaviour and elucidates how they may affect the accuracy with which performance is estimated. The rater errors explicated in this paper are those related to rater severity, restriction of range, central tendency, and internal consistency. As assessment and its procedures are central to student learning, matters related to valid and fair testing need to be taken seriously. It is hoped that with greater awareness of how we judge and a better understanding of how rater-related errors are introduced into the assessment process, we can be better raters and better teachers.
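
For context, the Many-facet Rasch Model named in the abstract is conventionally stated (following Linacre's standard formulation; this equation is supplied here for reference and is not quoted from the article itself) as:

\log \left( \frac{P_{nijk}}{P_{nij(k-1)}} \right) = B_n - D_i - C_j - F_k

where P_{nijk} is the probability of examinee n receiving a rating in category k rather than k-1 from rater j on item i, B_n is the examinee's ability, D_i the item's difficulty, C_j the rater's severity, and F_k the difficulty of the step from category k-1 to k. In broad terms, rater severity corresponds directly to the estimated C_j parameters, central tendency and restriction of range surface as distortions in category use, and internal consistency is examined through rater fit statistics.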