Rater Variability

"I have a question about the limits of variability in the difficulty or challenge posed by different elements of the facets analyzed in a performance assessment. Let us say that we have a data set, derived from a large-scale performance assessment, with the following characteristics:

1. 5,000+ examinees.

2. each of whom performs two tasks drawn at random from a pool of 20 speeches.

3. each examinee is rated on both tasks by a random pair of raters from a pool of 50.

4. each task is rated on 4 assessment items with a common 6-point scale.

"According to the results of a Rasch analysis:

the examinee ability measures span -8 to +8 logits;

the raters vary in harshness from -2 to +2 logits, with Infit Mean-Squares between 0.7 and 1.5;

the tasks range in difficulty from -0.5 to +0.5 logits, with Infit MnSq between 0.9 and 1.2;

the assessment items range in challenge from -1 to +1 logits, with Infit MnSq between 0.8 and 1.2.

"The biggest problem seems to be rater variability. Can Rasch analysis produce fair ability estimates despite such large variation in measures and fit?"

Tom Lumley, Hong Kong Poly University

In this example, variability in rater severity could be an asset. The range of task and item difficulties is small relative to the examinee range. The wide range of rater severity would cause candidates of the same ability to be evaluated against different levels of the rating scale, producing both better examinee measures and better validation of rating-scale functioning. As long as the raters are self-consistent (across time and across examinees), I can't imagine how variability in severity would ever be a problem.
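The point that severity differences spread examinees across the rating scale can be illustrated with the rating-scale form of the many-facet Rasch model, in which the log-odds of adjacent categories is ability minus task difficulty, rater severity, and the category threshold. A minimal sketch follows; the threshold values and function names are purely illustrative, not estimates from any real data set:

```python
import math

def category_probs(ability, severity, difficulty, thresholds):
    # Rating-scale (Andrich) form of the many-facet Rasch model:
    #   log(P_k / P_{k-1}) = ability - difficulty - severity - F_k
    # Accumulate the numerator logits for categories 0..m, then normalize.
    logits, total = [0.0], 0.0
    for f in thresholds:
        total += ability - difficulty - severity - f
        logits.append(total)
    exps = [math.exp(x) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def expected_score(ability, severity, difficulty, thresholds):
    # Model-expected rating: probability-weighted mean category.
    probs = category_probs(ability, severity, difficulty, thresholds)
    return sum(k * p for k, p in enumerate(probs))

# Illustrative thresholds for a 6-category (0-5) scale.
F = [-2.0, -1.0, 0.0, 1.0, 2.0]
lenient = expected_score(1.0, -2.0, 0.0, F)  # severity -2 logits
harsh = expected_score(1.0, +2.0, 0.0, F)    # severity +2 logits: same examinee
```

With severity spanning -2 to +2 logits, the same +1-logit examinee draws expected ratings near the top of the scale from the lenient rater and near the middle from the harsh one, so the pair of raters jointly probes several category thresholds rather than one.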

The variation in your rater fit statistics indicates that some part of your data may be of doubtful quality. This could be due to raters with different rating styles (e.g., halo effect, extremism). If so, you can discover the amount of mis-measurement this causes by allowing each rater to define their own rating scale. The person measures from this model can then be compared with those from the shared-rating-scale model. I have a paper using these methods, "Unmodelled Rater Discrimination Error", given at IOMW, 1998.

Peter Congdon
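The comparison Congdon describes - person measures under a shared rating scale versus rater-specific scales - can be sketched with a small maximum-likelihood routine. This is a simplified illustration, not the method of the cited paper: the threshold sets, rater severities, and ratings below are invented for demonstration:

```python
import math

def expected_and_variance(theta, severity, thresholds):
    # Rating-scale Rasch model: log(P_k / P_{k-1}) = theta - severity - F_k.
    # Returns the model-expected rating and its variance at ability theta.
    logits, total = [0.0], 0.0
    for f in thresholds:
        total += theta - severity - f
        logits.append(total)
    exps = [math.exp(x) for x in logits]
    s = sum(exps)
    probs = [e / s for e in exps]
    mean = sum(k * p for k, p in enumerate(probs))
    var = sum((k - mean) ** 2 * p for k, p in enumerate(probs))
    return mean, var

def estimate_ability(ratings, severities, thresholds_per_rater):
    # Newton-Raphson maximum-likelihood ability estimate; each rater may
    # carry its own threshold set (rater-specific rating scales).
    theta = 0.0
    for _ in range(100):
        resid, info = 0.0, 0.0
        for x, sev, F in zip(ratings, severities, thresholds_per_rater):
            m, v = expected_and_variance(theta, sev, F)
            resid += x - m  # observed minus expected
            info += v       # Fisher information contribution
        step = resid / info
        theta += step
        if abs(step) < 1e-8:
            break
    return theta

F_shared = [-2.0, -1.0, 0.0, 1.0, 2.0]   # illustrative shared thresholds
F_rater2 = [-3.0, -1.5, 0.0, 1.5, 3.0]   # illustrative: rater 2 spreads categories
shared = estimate_ability([3, 4], [0.0, 0.0], [F_shared, F_shared])
own = estimate_ability([3, 4], [0.0, 0.0], [F_shared, F_rater2])
```

The difference between `shared` and `own` for each examinee indicates how much the shared-scale assumption moves that person's measure - the "amount of mis-measurement" Congdon refers to.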

Looking at your quality-control fit statistics, your tasks and items are performing well. Raters with noisy fit statistics near 1.5 are problematic. Perhaps the misfitting raters encountered idiosyncratic examinees. Temporarily drop the idiosyncratic examinees and judges from the data. Analyze the remaining examinees, items, tasks and raters. Verify that the 6-category rating scale is working as intended for all items, tasks and raters by allowing each of these in turn to have its own rating-scale definition. Finally, anchor all measures at their most defensible values and reintroduce the dropped examinees and judges for the final measurement report.

John Michael Linacre
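The Infit and Outfit mean-squares that this quality-control screening relies on are simple functions of observed ratings, model-expected ratings, and model variances. A minimal sketch, with a hypothetical function name:

```python
def fit_mean_squares(observed, expected, variance):
    # Parallel lists for one rater: observed ratings, model-expected
    # ratings, and model variances of those ratings.
    # Infit: information-weighted mean-square, sum((x-E)^2) / sum(Var).
    # Outfit: unweighted mean of squared standardized residuals.
    resid_sq = [(x - e) ** 2 for x, e in zip(observed, expected)]
    infit = sum(resid_sq) / sum(variance)
    outfit = sum(r / v for r, v in zip(resid_sq, variance)) / len(observed)
    return infit, outfit
```

Values near 1.0 indicate ratings about as stochastic as the model predicts; values near 1.5, as for the noisiest raters here, mean roughly 50% more unmodelled variation in the residuals than the model expects.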

Rater Variability. Lumley T., Congdon P., Linacre J. … Rasch Measurement Transactions, 1999, 12:4 p.





The URL of this page is www.rasch.org/rmt/rmt124f.htm
