In many assessments, there are examinees who misbehave, and items that are poorly constructed. Nevertheless, everyone must be measured, and every item must be included except those that are obviously, blatantly faulty.
Blatantly faulty items are those that we can show to a content expert (who knows nothing about statistics) and say: "Do you see this ... (typographical error, ambiguity, scoring problem, irrelevant content, ...)? This item is obviously wrong or off-topic!"
Items with conspicuous DIF (differential item functioning) are more awkward to handle, and how to treat them depends on the policy of the testing agency. It is easiest to treat them as blatantly faulty and omit them, but they can also be split into separate items for the separate DIF groups.
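Splitting a DIF item can be done as a data-recoding step before estimation. A minimal sketch, assuming responses are stored as per-person lists of 0/1/None; the function name and data layout are illustrative, not from the article:

```python
def split_dif_item(responses, groups, item):
    """Resolve a DIF item: replace its column with one virtual item per
    group, scored None ("not administered") for persons in other groups.
    responses: per-person lists of 0/1/None; groups: parallel group labels."""
    labels = sorted(set(groups))
    out = []
    for resp, g in zip(responses, groups):
        row = resp[:item] + resp[item + 1:]  # drop the original DIF column
        row += [resp[item] if g == lab else None for lab in labels]
        out.append(row)
    return out
```

Each group then gets its own difficulty estimate for the split item, while all other items remain common to everyone.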
But what about random guessing, doubtful items and other problematic data? A three-stage estimation process provides a solution:
i) Analyze all the data. Identify problems.
ii) Reanalyze the data, deleting items and persons with misfit problems and omitting obviously errant or off-target responses. This is the "good" dataset. Save the estimates of the item difficulties and of the Rasch-Andrich thresholds (for polytomies).
iii) Analyze all the data. Delete only obviously, blatantly faulty items. Anchor (fix) the "good" items at their "good" difficulties, and the polytomies at their "good" thresholds. Output the final set of person measures and item difficulties.
The measure for each person is now estimated in the frame-of-reference of the "good" data with the minimum of distortion of that measure by irrelevant (to that person) "bad" data.
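The third stage reduces, for each person, to maximum-likelihood estimation of that person's measure with the item difficulties held fixed at their "good" values. A minimal sketch for dichotomous items (function names are illustrative, not from the article):

```python
import math

def p_correct(theta, b):
    """Rasch probability that a person of measure theta succeeds on an
    item of difficulty b (both in logits)."""
    return 1.0 / (1.0 + math.exp(b - theta))

def person_measure(responses, anchored_b, iters=50):
    """Stage-three MLE of one person's measure, with item difficulties
    anchored at their "good" values. responses: list of 0/1/None
    (None = not administered). Extreme scores (all 0 or all 1) have no
    finite MLE; the damped steps keep this sketch from diverging, but
    real software substitutes an adjusted estimate for such scores."""
    pairs = [(x, b) for x, b in zip(responses, anchored_b) if x is not None]
    theta = 0.0
    for _ in range(iters):
        ps = [p_correct(theta, b) for _, b in pairs]
        grad = sum(x - p for (x, _), p in zip(pairs, ps))  # score residual
        info = sum(p * (1.0 - p) for p in ps)              # Fisher information
        if info < 1e-10:
            break
        step = grad / info
        theta += max(-1.0, min(1.0, step))                 # damped Newton step
        if abs(step) < 1e-6:
            break
    return theta
```

Because the difficulties are anchored, each person's measure depends only on that person's own responses, in the frame-of-reference fixed by the "good" data.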
If we have a timed test, and we score all incorrect answers and all not-reached items as "0", then the final items receive few correct answers ("1"), even if the very last item is conceptually the easiest item on the test.
To get around this problem we do the three-stage analysis. In the second stage, we use only data from examinees who have definitely reached an item (right or wrong). All unreached responses are coded "not administered" (e.g., M for missing) and excluded from the analysis. This analysis gives us the best estimates of the difficulties of the items. We save these "good" item difficulties.
In the third stage, we score all the data 0-1, but use the "good" item difficulties, so that the measures of students who responded to most of the items are not distorted by the performances of students who responded to fewer items.
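The two scorings of a timed test can be sketched as a recoding step. A minimal sketch, assuming each response string is in test order with '1'/'0'/' ' entries; treating embedded blanks as reached-but-skipped and scoring them 0 is an assumption, not from the article:

```python
def recode_for_stage2(raw):
    """Stage two: keep 0/1 scores only up to the last item the examinee
    definitely reached; trailing blanks are unreached -> None
    ("not administered"). Embedded blanks (reached but skipped) are
    scored 0 -- an assumption, not from the article."""
    last = max((i for i, r in enumerate(raw) if r in "01"), default=-1)
    return [int(r) if r in "01" else (0 if i <= last else None)
            for i, r in enumerate(raw)]

def recode_for_stage3(raw):
    """Stage three: score every item 0-1; unreached counts as incorrect."""
    return [1 if r == "1" else 0 for r in raw]
```

Stage two's recoded data yield the "good" item difficulties; stage three's fully scored data, analyzed with those difficulties anchored, yield the reported person measures.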
John M. Linacre
Good Measures from Bad Data, J.M. Linacre ... Rasch Measurement Transactions, 2011, 24:4, 1313
The URL of this page is www.rasch.org/rmt/rmt244m.htm