Though rating scales are usually preferred for sensory measurement, rank ordering is often cheaper, faster and easier, particularly when the number of objects ranked is between 5 and 10, and the raters are not highly trained. Ranking also has the advantage of removing the effect of judge severity, while permitting judge ranking patterns to be compared for quality control.
Ranking has its disadvantages. It is difficult to combine data from different rankings, and the information contained in the data is limited. It is also awkward to reanalyze rankings in order to investigate a different hypothesis.
Since few human judges can compare 5 objects simultaneously, the analysis of rank-order data has been burdened with the need to build a selection mechanism (e.g., a series of paired comparisons) into the observation model. In practice, when results depend on the intricacies of an often unconscious ranking procedure, ranking can become too fragile a basis for substantive conclusions.
To check the robustness of ranked results, we compared the location statistics produced by the Kruskal-Wallis (K-W) procedure and by a many-facet Rasch procedure. K-W uses the sum of the ranks given to an object as the basic statistic, and tests for global differences with a chi-square statistic. The many-facet Rasch procedure models the ranks as qualitatively-ordered categories which have one observation per category per ranking. A simple Rasch model for complete rankings without ties is
loge(Pnj/Pn(j+1)) = Bn - Fj,   j = 1, ..., m-1
where Bn is the measure of object n, and Fj is the step measure up from a rank of j+1 to a rank of j. A sufficient statistic for Bn is the K-W sum of the ranks given to object n.
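The adjacent-rank model above can be sketched numerically. The measure B and step measures F below are invented for illustration, not estimates from the experiment; the function builds the rank probabilities implied by the log-odds equation, so adjacent ranks satisfy log(Pj/Pj+1) = B - Fj.

```python
import math

def rank_probs(B, F):
    """Probabilities of ranks 1..m for an object with measure B,
    given step measures F[0..m-2], where F[j-1] is the step up
    from rank j+1 to rank j (hypothetical values for illustration)."""
    m = len(F) + 1
    u = [0.0] * m
    u[m - 1] = 1.0                      # anchor unnormalized odds at the worst rank
    for j in range(m - 1, 0, -1):       # apply log(P_j / P_(j+1)) = B - F_j
        u[j - 1] = u[j] * math.exp(B - F[j - 1])
    total = sum(u)
    return [x / total for x in u]

B, F = 1.2, [0.8, 0.3, -0.3, -0.8]      # hypothetical measure and steps (5 ranks)
p = rank_probs(B, F)
assert abs(sum(p) - 1.0) < 1e-9
# adjacent-rank log-odds reproduce the model equation
assert math.isclose(math.log(p[0] / p[1]), B - F[0])
# a larger measure shifts probability toward rank 1 (the top rank)
assert rank_probs(2.0, F)[0] > p[0]
```

Because only B and the fixed steps enter each adjacent log-odds, an object's summed ranks carries all the information about B, which is the sufficiency property noted above.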
An experiment was conducted in which 5 test materials, A-E, and a standard reference material, REF, were ranked 16 times. The K-W and many-facet results are compared in the plot.
Both methods locate the Reference material above the test materials. The Rasch method also provides standard errors, from which we can infer that the Reference material is significantly better than the best test material. The relationship between the Rasch measures and the summed ranks is close to linear, with the curvature of the logistic ogive in evidence only for the highly ranked REF material. Nevertheless, this curvature raises the measure of the REF material noticeably - an important consideration when these measures are used in a cost vs. quality analysis.
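That end-of-scale curvature can be seen in the logistic ogive itself. As a rough sketch (not the many-facet estimation procedure), rescale a mean rank on 1..m to a proportion p = (m - mean rank)/(m - 1) and map it through the logit: mid-range proportions map almost linearly, while proportions near 1 - an object ranked at or near the top by almost every judge - are stretched upward in logits. The mean ranks below are hypothetical.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

m = 6                                   # six materials: REF plus A-E
for mean_rank in [3.4, 3.0, 2.6, 1.2]:  # hypothetical mean ranks
    p = (m - mean_rank) / (m - 1)       # 0 = always ranked last, 1 = always first
    print(f"mean rank {mean_rank}: p = {p:.2f}, logit = {logit(p):+.2f}")

# equal steps in proportion give roughly equal logit steps mid-scale,
# but the same-sized step near the extreme is much larger:
mid_step = logit(0.6) - logit(0.5)
end_step = logit(0.96) - logit(0.86)
assert end_step > 2 * mid_step
```

This is why a near-unanimously top-ranked object earns a measure noticeably above what a straight-line fit to the summed ranks would suggest.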
Unlike K-W, the Rasch analysis also provides quality-control fit statistics for the rankings, and consistency statistics for the objects being ordered. Test material A was the most consistently ordered, showing that its placement as the best test material is generally agreed upon. Test material D was the least consistently ordered. Further investigation may discover something about material D that appeals to certain judges.
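A simple stand-in for the consistency idea (not Rasch's fit statistics, just the spread of the ranks each object received) can be computed directly: a consistently ordered object has low dispersion in its ranks across judges. The rankings below are invented for illustration, not the experiment's data.

```python
from statistics import pstdev

# hypothetical complete rankings by four judges of five materials (1 = best);
# each column of ranks is one judge's full ranking
ranks = {
    "REF": [1, 1, 1, 3],
    "A":   [2, 2, 2, 2],   # always ranked 2nd: perfectly consistent
    "B":   [3, 4, 3, 4],
    "C":   [4, 3, 5, 5],
    "D":   [5, 5, 4, 1],   # one judge ranks D first: inconsistent
}

spread = {mat: pstdev(r) for mat, r in ranks.items()}
most_consistent = min(spread, key=spread.get)
least_consistent = max(spread, key=spread.get)
print(most_consistent, least_consistent)   # prints: A D
```

The same summary also points at individual surprising observations, such as a single judge placing an otherwise bottom-ranked object first.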
Rasch analysis also identifies quirks in the data. The most unexpected observation is a ranking of 4th for the Reference material by one judge. What motivated this idiosyncratic ranking? Does it indicate an opportunity for further improvement?
The similarity of the meaning of the location estimates for K-W and Rasch is reassuring to the practitioner. But Rasch provides additional valuable insight easily overlooked by the harried analyst.
Ranks in sensory measurement. Rehfeldt TK. Rasch Measurement Transactions, 1994, 8:2 p.368
The URL of this page is www.rasch.org/rmt/rmt82p.htm