Differential Rater Functioning

Monitoring the quality of ratings obtained within the context of rater-mediated assessments is of major importance (Engelhard, 2002). One area of concern is differential rater functioning (DRF). DRF focuses on whether raters show evidence of exercising differential severity/leniency when rating students within different subgroups. For example, a rater may rate male students' essays (or female students' essays) more severely or leniently than expected. Ideally, each rater's level of severity/leniency should be invariant across gender subgroups. Residual analyses of raters flagged with DRF can provide a detailed exploration of potential rater biases, and they can also form the basis for conducting a mixed-methods study (Creswell & Plano-Clark, 2007).
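The residual logic described above can be sketched in a few lines. This is a minimal illustration only, not the FACETS implementation; the data, rater label, and function name are hypothetical.

```python
from statistics import mean

# Hypothetical records: (rater, student_gender, observed_rating, expected_rating),
# where the expected rating would come from a fitted measurement model.
ratings = [
    ("R01", "M", 6, 4.9), ("R01", "M", 5, 4.4),
    ("R01", "F", 4, 5.0), ("R01", "F", 5, 5.3),
]

def mean_residual(records, rater, gender):
    """Average observed-minus-expected rating for one rater/subgroup cell."""
    res = [obs - exp for r, g, obs, exp in records if r == rater and g == gender]
    return mean(res) if res else 0.0

# A rater whose severity is invariant across gender should have roughly
# equal mean residuals in both subgroups; a large gap is a DRF signal.
gap = mean_residual(ratings, "R01", "M") - mean_residual(ratings, "R01", "F")
```

In practice the expected ratings are produced by the measurement model (here, the FACETS model), and the gap would be evaluated against its standard error rather than eyeballed.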

In order to illustrate the use of residual analyses to examine DRF, data from Engelhard and Myford (2003) are used. The purpose of the original study was to examine the rating behavior of raters who scored essays written for the Advanced Placement® English Literature and Composition (AP ELC) exam. Data from the 1999 AP ELC exam were analyzed using the FACETS model. One section of this report focused on DRF among raters scoring the AP ELC exam.

A rater x student gender bias analysis was conducted to determine whether raters were rating essays composed by male and female students in a similar fashion. Were there raters who were more prone to gender bias than other raters? The FACETS analyses identified 18 raters who, based on statistical criteria, may have exhibited DRF related to student gender.


Table 1. Summary of Differential Rater Functioning Statistics (Student-Gender Interactions) for Rater 108

Subgroup   N    Observed Average   Expected Average
Male       9    5.33               4.56
Female     23   4.83               5.13
z = 2.39*

* |Z| ≥ 2.00


Based on the overall fit statistics (INFIT MNSQ = 1.1, OUTFIT MNSQ = 1.1), Rater 108 did not appear to be rating in an unusual fashion. However, when the interaction between rater and student gender is specifically examined (Table 1), a different story emerges. Rater 108 tended to rate the male students' essays higher on average (5.33) than expected (4.56). For females, the observed average (4.83) is less than the expected average (5.13). In summary, there is a statistically significant gender difference in this rater's severity (z = 2.39).
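One common way to standardize such a contrast is to divide the difference between the two subgroups' mean residuals by its standard error. The sketch below illustrates that logic with hypothetical residuals; it mirrors, but is not identical to, the bias z statistic that FACETS computes in the logit metric.

```python
from math import sqrt
from statistics import mean, stdev

def drf_z(res_a, res_b):
    """Approximate z for the difference in mean residuals of two subgroups,
    using a two-sample standard error of the difference in means."""
    se = sqrt(stdev(res_a) ** 2 / len(res_a) + stdev(res_b) ** 2 / len(res_b))
    return (mean(res_a) - mean(res_b)) / se

# Hypothetical observed-minus-expected residuals for one rater:
male_res = [0.9, 0.6, 1.1, 0.5, 0.8]
female_res = [-0.4, -0.2, -0.5, -0.1, -0.3]

z = drf_z(male_res, female_res)  # |z| >= 2.00 would flag the rater for review
```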



Figure 1. Rater 108's rating profile


Figure 1 shows that Rater 108 assigned higher-than-expected ratings to 8 of the 9 male students' essays, but lower-than-expected ratings to 13 of the 23 female students' essays. This highlights the importance of exploring not only mean differences between observed and expected ratings within each subgroup but also the variability and spread of residuals within subgroups. Ultimately, DRF involves examining discrepancies between observed and expected ratings at the individual level. As Wright (1984, p. 285) pointed out many years ago,

"bias found for groups is never uniformly present among members of the groups or uniformly absent among those not in the group. For the analysis of item bias to do individuals any good, say, by removing the bias from their measures, it will have to be done on the individual level."
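Individual-level counts like those reported for Figure 1 amount to tallying residual signs within each subgroup. The residual values below are hypothetical, chosen only to reproduce the "8 of 9 male essays rated higher than expected" pattern described above.

```python
def residual_sign_counts(residuals):
    """Count ratings above and below model expectation (residual sign)."""
    higher = sum(1 for r in residuals if r > 0)
    lower = sum(1 for r in residuals if r < 0)
    return higher, lower

# Hypothetical observed-minus-expected residuals for nine male essays:
male_res = [0.7, 0.4, 1.1, -0.2, 0.9, 0.3, 0.6, 0.5, 0.8]

counts = residual_sign_counts(male_res)  # (8, 1): 8 higher, 1 lower
```

Inspecting the full profile of signed residuals, rather than only the subgroup means, is what makes the individual-level diagnosis Wright calls for possible.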

In rater-mediated assessments, it is very important to conduct group-level analyses of DRF, but caution is warranted if routine statistical adjustments are made for rater severity. The full interpretation of these effects requires a detailed examination of residuals for each rater. Within a mixed-methods framework, raters flagged as suspect can then be investigated in more detail using case studies and other qualitative analyses.

George Engelhard, Jr., Emory University

Creswell, J. W., & Plano-Clark, V. L. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.

Engelhard, G. (2002). Monitoring raters in performance assessments. In G. Tindal & T. Haladyna (Eds.), Large-scale assessment programs for all students: Development, implementation, and analysis (pp. 261-287). Mahwah, NJ: Erlbaum.

Engelhard, G., & Myford, C. M. (2003). Monitoring rater performance in the Advanced Placement English Literature and Composition Program with a many-faceted Rasch model. New York: College Entrance Examination Board. http://professionals.collegeboard.com/research/pdf/cbresearchreport20031_22204.pdf

Wright, B. D. (1984). Despair and hope for educational measurement. Contemporary Education Review, 3(1), 281-285. www.rasch.org/memo41.htm



Differential Rater Functioning. … George Engelhard, Jr., Rasch Measurement Transactions, 2008, 21:3 p. 1124

The URL of this page is www.rasch.org/rmt/rmt213f.htm