Inter-rater Reliability

Inter-rater reliability is the degree to which an assessment tool produces stable and consistent results: the extent to which two or more raters agree. Reliability depends on the raters being consistent in how they evaluate behaviors or skills. Raters must measure each student's competency without bias, and the evaluation system must be applied consistently by everyone using it.
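
As a rough illustration, the sketch below computes Cohen's kappa, one common inter-rater reliability statistic for two raters. The rater names and scores are made up for the example, and kappa here stands in for whichever agreement measure an evaluation system actually uses.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    # Guard against the degenerate case where chance agreement is perfect.
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical data: two raters scoring ten students pass/fail.
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # ~0.52; 1.0 is perfect agreement, 0 is chance
```

A kappa near 1 indicates the raters are applying the evaluation criteria consistently; a value near 0 suggests their agreement is no better than chance and the rubric or rater training needs attention.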
