"My question has to do with the Rasch person separation reliability.
(1) Can you tell me how it is calculated?
I've noticed that sometimes the Rasch-based reliability is essentially identical to Cronbach's alpha and sometimes it isn't.
(2) Are there limitations on how Rasch separation reliability is to be interpreted?
This arises because with alpha it is necessary that the measures be independent. For example, if two raters rate a group of examinees on five tasks (so that I have ten data points for each examinee, two per task), I will need to sum or average the ratings within task. If I use all ten data points to calculate alpha, it is likely to be substantially inflated."
Cronbach's alpha, KR-20, and the separation reliability coefficients reported in a Rasch context are all estimates of the ratio of "true measure variance" to "observed measure variance".
For all these methods, the basic underlying relationship is specified to be:

Reliability = True variance / Observed variance = (Observed variance − Error variance) / Observed variance
For Cronbach's alpha, computed from non-linear raw scores, an estimating equation is:

α = [k / (k−1)] × [1 − (Σ σi²) / σ²]
where k is the number of observations per examinee, σ² is the variance of the total raw scores across examinees, and σi² is the variance of the raw scores on observation i across examinees. Generalizability Theory addresses the situation in which not every rater rates every examinee on every item and task. Extreme scores are usually included; since extreme scores have no score error variance, their effect is to increase the reported reliability.
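The alpha estimating equation above can be sketched in Python. The rating matrix here is hypothetical, purely for illustration; rows are examinees and columns are the k observations:

```python
# Cronbach's alpha: a sketch of the estimating equation above.
# Rows = examinees, columns = the k observations per examinee.
# Sample variance (n-1 denominator) is assumed throughout.

def cronbach_alpha(data):
    k = len(data[0])                      # observations per examinee
    n = len(data)                         # number of examinees
    # variance of the total raw scores across examinees
    totals = [sum(row) for row in data]
    mean_t = sum(totals) / n
    var_total = sum((t - mean_t) ** 2 for t in totals) / (n - 1)
    # sum of per-observation variances across examinees
    var_items = 0.0
    for i in range(k):
        col = [row[i] for row in data]
        m = sum(col) / n
        var_items += sum((x - m) ** 2 for x in col) / (n - 1)
    return (k / (k - 1)) * (1 - var_items / var_total)

# Hypothetical ratings: 4 examinees, 5 observations each
ratings = [
    [3, 4, 3, 5, 4],
    [2, 2, 3, 2, 3],
    [5, 4, 5, 5, 5],
    [1, 2, 1, 2, 2],
]
print(round(cronbach_alpha(ratings), 2))
```

Because this toy matrix is highly consistent across observations, the resulting alpha is high.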
For Rasch separation reliability, computed from linear measures, an estimating equation for N examinees is:

R = (σM² − MSE) / σM²,  where MSE = (Σ SEn²) / N

where σM² is the observed variance of the examinee measures, and SEn is the measure standard error of examinee n, so that MSE is the mean-square measurement error.
Extreme scores are usually excluded, because their measure standard errors are infinite.
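The Rasch estimating equation above can also be sketched in Python. The measures and standard errors (in logits) are hypothetical, and extreme persons, whose standard errors are infinite, are assumed already removed:

```python
# Rasch person separation reliability: a sketch of the estimating
# equation above. Measures and standard errors are in logits and
# hypothetical; extreme persons (infinite SE) are assumed excluded.
# Sample variance (n-1 denominator) is assumed for the observed variance.

def separation_reliability(measures, std_errors):
    n = len(measures)
    mean = sum(measures) / n
    observed_var = sum((m - mean) ** 2 for m in measures) / (n - 1)
    # mean-square measurement error across examinees
    mean_sq_error = sum(se ** 2 for se in std_errors) / n
    true_var = observed_var - mean_sq_error
    return true_var / observed_var

measures = [-1.2, -0.4, 0.1, 0.7, 1.5]
std_errors = [0.35, 0.30, 0.30, 0.32, 0.40]
print(round(separation_reliability(measures, std_errors), 2))
```

When the measure spread is large relative to the standard errors, as here, the reliability approaches 1.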
There is much more on this in RMT 11:3, "KR-20 / Cronbach Alpha or Rasch Reliability: Which Tells the 'Truth'?"
Both of these estimation methods disregard covariance between raters, items, tasks, etc. Some covariance always exists, but usually not enough to merit special attention. Suppose, however, that your raters are not acting as independent experts, but rather as "rating machines". Then using two or three raters would be no different from running an MCQ form through two or three optical scanners: there would be near-perfect covariance between the raters. Under these conditions, adding raters, just like adding optical scanners, would not increase test reliability.
If you suspect rater covariance, you could obtain a lower bound for the separation reliability by estimating the reliability as if there were only one rater per examinee:

R₁ = R / (N − (N−1)R)
where R is the reported reliability and N is the number of raters rating each examinee.
For instance, if the reported separation reliability with 5 raters is 0.83, and you suspect that raters are being forced into agreement, then a more reasonable separation reliability is that with one rater:

R₁ = 0.83 / (5 − 4 × 0.83) = 0.83 / 1.68 ≈ 0.49
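This one-rater adjustment is the Spearman-Brown prophecy formula run in reverse; a minimal sketch reproducing the worked example:

```python
# Reverse Spearman-Brown: reliability with one rater, given the
# reported reliability R obtained with N (possibly covarying) raters.

def one_rater_reliability(reported_r, n_raters):
    return reported_r / (n_raters - (n_raters - 1) * reported_r)

# The example above: reported reliability 0.83 with 5 raters.
print(round(one_rater_reliability(0.83, 5), 2))  # about 0.49
```

Note that a reliability of 1.0 is unchanged by the adjustment, as it should be: perfect reliability with N raters implies perfect reliability with one.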
Relating Cronbach and Rasch Reliabilities Clauser B., Linacre J.M. Rasch Measurement Transactions, 1999, 13:2 p. 696
The URL of this page is www.rasch.org/rmt/rmt132i.htm