Between last year's test and this year's test, or the pre-test and the post-test, or any pair of tests, there have always been changes. Examinees have changed. Item difficulties have drifted or been modified by instructional effects. The raters have changed, even if minutely. The empirical definitions of the rating scale categories have altered. How can changes across time in examinee performance be investigated when everything else is simultaneously in flux?
Comparisons require a stable frame of reference. In order to compare performance across time, all other changes across time must be eliminated or controlled. There are several methods:
a) Assertion of constancy.
Most test items retain approximately stable difficulties over the testing period. For these items, the pre-test, post-test and pre-post joint calibrations are statistically stable. Their calibrations may be anchored (fixed) at whichever set of calibrations makes the most sense to the test consumers. Constancy may also be expressed in terms of groups of items or raters (group means) or demographic groups of examinees, for which individual fluctuation, but no overall shift, is asserted.
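Anchoring can be sketched as a simple screening step. The function below is a hypothetical illustration (the names `stable_anchors`, `tolerance`, and the 0.5-logit cutoff are assumptions, not part of the original): items whose pre-post difficulty shift is small are fixed at their joint calibration, while drifting items are left free.

```python
# Hypothetical sketch: flag items whose pre- and post-test Rasch
# difficulties (in logits) are statistically stable, then fix them
# at the joint pre-post calibration. The 0.5-logit tolerance is an
# illustrative choice, not a recommended standard.

def stable_anchors(pre, post, joint, tolerance=0.5):
    """Return {item: anchor_logit} for items whose pre-post
    difficulty shift is within `tolerance` logits."""
    anchors = {}
    for item in pre:
        if abs(pre[item] - post[item]) <= tolerance:
            anchors[item] = joint[item]  # fix at the joint calibration
    return anchors

pre   = {"A": -1.2, "B": 0.3, "C": 1.1}
post  = {"A": -1.1, "B": 0.4, "C": 0.2}   # item C has drifted
joint = {"A": -1.15, "B": 0.35, "C": 0.65}
print(stable_anchors(pre, post, joint))   # items A and B anchored; C excluded
```

In practice the anchor values would be supplied to the estimation software (e.g. as an item anchor file), so that examinee measures from both occasions share one frame of reference.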
b) Assertion of difference.
Some items change difficulty noticeably from pre-test to post-test. Each of these items can be asserted to be acting like two different items, a pre-test item and a post-test item. The data set can be reformatted so that each of these items is split into two items. Responses to the pre-test item are missing for the post-test. Responses to the post-test item are missing for the pre-test.
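The reformatting step can be sketched as follows. This is an assumed illustration (the record layout, the `None`-for-missing convention, and the names `split_item`, `Q7pre`, `Q7post` are mine): a drifting item is replaced by two virtual items, each with missing responses on the occasion it does not belong to.

```python
# Hypothetical sketch: split a drifting item into a pre-test item and
# a post-test item. Each virtual item's responses are missing (None)
# on the occasion the item does not belong to.

def split_item(records, item, pre_name, post_name):
    """records: list of dicts, each with an 'occasion' key
    ('pre' or 'post') and an `item` -> response entry.
    Returns new records with `item` replaced by two virtual items."""
    out = []
    for r in records:
        r = dict(r)                 # copy; leave the input untouched
        resp = r.pop(item)
        r[pre_name]  = resp if r["occasion"] == "pre"  else None
        r[post_name] = resp if r["occasion"] == "post" else None
        out.append(r)
    return out

data = [{"occasion": "pre",  "Q7": 1},
        {"occasion": "post", "Q7": 0}]
print(split_item(data, "Q7", "Q7pre", "Q7post"))
```

After this split, the two virtual items are calibrated independently in the joint analysis, so the drift no longer distorts the common frame of reference.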
c) Assertion of compromise.
Rating scales present a further challenge: their structure often changes from pre-test to post-test. Indeed, the highest categories of the scale may not be observed on the pre-test, while the lowest categories may be missing on the post-test. Measures based on two different rating scales, a pre-test version and a post-test version, cannot be coherently conceptualized, compared or communicated. Consequently, however much the scale structure may have changed, measures must be based on a compromise set of rating scale step calibrations obtained from a joint analysis of the combined pre-test and post-test data.
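The compromise can be sketched numerically. The code below is an assumed illustration, not the author's procedure: it pools the pre-test and post-test category counts and approximates each step calibration by the log-ratio of adjacent category frequencies (a rough, PROX-like shortcut), centered so the steps sum to zero.

```python
# Hypothetical sketch: approximate compromise rating scale step
# calibrations from pooled pre- and post-test category counts.
# The log-ratio of adjacent category frequencies is a rough
# approximation, used here only to make the idea concrete.

import math

def compromise_thresholds(pre_counts, post_counts):
    """Counts are per category, lowest to highest; returns one
    step calibration (in logits) per adjacent category pair."""
    pooled = [a + b for a, b in zip(pre_counts, post_counts)]
    steps = [math.log(pooled[k - 1] / pooled[k])
             for k in range(1, len(pooled))]
    mean = sum(steps) / len(steps)
    return [s - mean for s in steps]  # center the steps at zero

# Pre-test rarely uses the top category; post-test rarely the bottom.
pre  = [40, 30, 20, 5]
post = [5, 20, 30, 40]
print(compromise_thresholds(pre, post))
```

Because the counts are pooled, categories sparse on one occasion are supported by the other, and both occasions are measured against the same scale structure.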
d) Assertion of meaning.
To an examinee unfamiliar with the Greek alphabet, all Greek words are exceedingly difficult. After learning the Greek alphabet, however, some words become easy, while others remain difficult. If the purpose of the test is to measure improvement in Greek reading comprehension, then the post-test item difficulties (which differentiate between easy and hard words) are more useful than the pre-test difficulties (in which the distinction between easy and hard words is muted). On the other hand, if instruction in, say, safety procedures, is intended to give examinees complete mastery of all tested material, then all items on post-test will be very easy for most examinees. Then it will be the set of pre-test calibrations that distinguish between new and familiar material.
In practice, there will be different, but equally-reasonable, assertions for establishing a stable frame of reference. There will also be different sets of assertions for examining performance improvement and for examining item difficulty drift. The criteria for choosing the definitive assertions are meaningfulness and ease of communication to the user of the results.
Benjamin D. Wright
Wright B.D. (1996) Comparisons require stability. Rasch Measurement Transactions 10:2 p. 506.