A continuing practical problem in rating performances is eliminating ambiguity introduced by deficient judging plans. A recent conference aired data in this form:
| Task | Person | Judge A: P | Judge A: Q | Judge A: R | Judge B: P | Judge B: Q | Judge B: R |
|---|---|---|---|---|---|---|---|
| X | 101 | 2 | 2 | 1 | 3 | 1 | 2 |
| X | 102 | 3 | 2 | 2 |   |   |   |
| X | 103 | 2 | 3 | 2 | 2 | 3 | 2 |
| X | 104 | 4 | 3 | 3 | 3 | 3 | 3 |
| Y | 201 |   |   |   | 3 | 2 | 3 |
| Y | 202 | 3 | 3 | 3 | 2 | 2 | 2 |
| Y | 203 |   |   |   | 4 | 3 | 3 |
| Y | 204 | 4 | 4 | 3 | 3 | 3 | 4 |
At first glance, all seems well. The three items, P, Q, R, can be in one frame of reference, because they share the same judge-person-task combinations. The two judges, A, B, can be in the same frame of reference, because they rate every second person together. Now comes the problem. The persons seem to share the same frame of reference because so many of them are rated on the same tasks. But there are two tasks. Why are the four 100-group people rated lower on Task X than the four 200-group people on Task Y? Are the 100-group people less able than the 200-group? Is Task X harder than Task Y? These data cannot say which!
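The deadlock can be made concrete with a small sketch. Assuming a Rasch-type model in which each rating depends on the person measure minus the task difficulty (judge and item terms are omitted because they are identical under both scenarios), two quite different explanations, built here from made-up values of ±0.5 logits, predict exactly the same probabilities for every person-task combination that was actually observed:

```python
# A minimal sketch of the identifiability problem. The model is a Rasch-type one in
# which every rating depends on the data only through person measure minus task
# difficulty; the numbers are illustrative, not estimates from the table above.

# Observed combinations: 100-group persons meet only Task X, 200-group only Task Y.
observed_cells = [("100-group", "Task X"), ("200-group", "Task Y")]

# Scenario 1: the groups differ in ability, the tasks are equally difficult.
ability_1 = {"100-group": -0.5, "200-group": +0.5}
difficulty_1 = {"Task X": 0.0, "Task Y": 0.0}

# Scenario 2: the groups are equally able, Task X is harder than Task Y.
ability_2 = {"100-group": 0.0, "200-group": 0.0}
difficulty_2 = {"Task X": +0.5, "Task Y": -0.5}

for group, task in observed_cells:
    logit_1 = ability_1[group] - difficulty_1[task]
    logit_2 = ability_2[group] - difficulty_2[task]
    # The modeled logit -- and hence every predicted rating probability -- is the
    # same, so the observed data cannot choose between the two explanations.
    assert logit_1 == logit_2
    print(f"{group} on {task}: logit {logit_1:+.1f} under both scenarios")
```

Because no person meets both tasks, any constant can be traded between the 200-group measures and the Task Y difficulty without changing a single predicted rating; the data are silent about how to split it.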
Resolving this ambiguity requires perception and decision. The first step is to notice the problem. If you detect it during data collection, a slight change to the judging plan can remedy the situation. For instance, some people could be asked to perform both tasks. Nevertheless, continue to be on the lookout for this ambiguity during analysis. Many statistical procedures fail to report it, and so may produce misleading results.
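One way to catch the problem early is a rough connectivity check on the judging plan itself. The sketch below is only illustrative, not the check performed by any particular program: it treats persons and tasks as nodes joined by the performances that were rated, and if they fall into more than one connected component, the measures in one subset cannot be placed unambiguously on the same scale as those in the other, no matter how thoroughly the judges and items are crossed. It also shows how a single cross-over performance, as suggested above, repairs the plan.

```python
# An illustrative connectivity check on a judging plan: persons and tasks are nodes,
# and each rated performance joins a person to a task. Judges and items are left out
# because crossing them does not, by itself, link person measures to task difficulties.
from collections import defaultdict

def components(pairs):
    """Group persons and tasks into connected components with a small union-find."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for person, task in pairs:
        union(("person", person), ("task", task))
    groups = defaultdict(set)
    for node in parent:
        groups[find(node)].add(node)
    return list(groups.values())

# The judging plan from the table above: every person attempts exactly one task.
plan = [(p, "Task X") for p in (101, 102, 103, 104)] + \
       [(p, "Task Y") for p in (201, 202, 203, 204)]

print(len(components(plan)))                       # 2: the plan splits into two subsets
print(len(components(plan + [(101, "Task Y")])))   # 1: one cross-over performance links them
```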
There are only two choices for resolving this issue: either the tasks are said to be alike or the people are said to be alike. If Task X and Task Y were intended to have the same difficulty, and that still seems a reasonable assertion, then anchor them together at the same calibration. This resolves the ambiguity, and interprets the overall score difference between the 100-group and the 200-group of persons as a difference in ability levels.
On the other hand, you may have intended that the tasks be different by an amount as yet unknown. Then the only solution is to treat the two groups of persons as though they share the same mean ability. Specify the analysis to set the mean ability level of the 100-group at the same value as the mean ability level of the 200-group. Now the overall score difference between the 100-group and the 200-group will express a difference in difficulty between Task X and Task Y.
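Either choice can be pictured with a deliberately crude raw-score decomposition. The sketch below is only a stand-in for a real analysis, which would work in logits with judge and item facets, but the bookkeeping is the same: the observed gap between the two cells is assigned either to the person groups or to the tasks, never to both.

```python
# A rough sketch of the two resolutions, using an additive decomposition of average
# raw ratings as a stand-in for a many-facet Rasch calibration.
# Model for a cell mean: overall + group ability - task difficulty.
# The rating lists are read straight from the table above
# (100-group on Task X, 200-group on Task Y).

task_x = [2, 2, 1, 3, 1, 2,  3, 2, 2,  2, 3, 2, 2, 3, 2,  4, 3, 3, 3, 3, 3]
task_y = [3, 2, 3,  3, 3, 3, 2, 2, 2,  4, 3, 3,  4, 4, 3, 3, 3, 4]

mean_x = sum(task_x) / len(task_x)     # average rating of the 100-group on Task X
mean_y = sum(task_y) / len(task_y)     # average rating of the 200-group on Task Y
overall = (mean_x + mean_y) / 2

# Option 1: anchor the tasks at the same calibration -> the gap is read as ability.
option_1 = {
    "difficulty": {"Task X": 0.0, "Task Y": 0.0},
    "ability":    {"100-group": mean_x - overall, "200-group": mean_y - overall},
}

# Option 2: anchor the group means together -> the gap is read as task difficulty.
option_2 = {
    "ability":    {"100-group": 0.0, "200-group": 0.0},
    "difficulty": {"Task X": overall - mean_x, "Task Y": overall - mean_y},
}

for label, option in (("tasks anchored equal", option_1),
                      ("group means anchored equal", option_2)):
    print(label, option)

# Both options reproduce the observed cell means exactly; choosing between them is a
# matter of intention, not of fit -- which is exactly the ambiguity described above.
```

Running the analysis both ways and comparing the results, as the next paragraph recommends for supervisors and trainees, costs little and makes the chosen interpretation explicit.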
This type of ambiguity is common. It is seen in data where supervisors rate their own trainees. Do good ratings signify good trainees or a lenient rater? It is wise to try both options. First analyze the data as though the supervisors rated equally strictly, then analyze the data as though the groups of trainees were of the same average ability. A comparison of the two analyses frequently implies that the trainees are rather similar (particularly if they are at the end of regular training), but that the supervisors' rating styles are rather different. Who your supervisor is may turn out to be more important than what you accomplish.
Juggling Judging Ambiguity. J. Linacre. Rasch Measurement Transactions, 1991, 5:3, p. 167