Testing literature is rife with incomprehensible papers and reports whose aim seems not communication but obscurantism. Contributing to this sorry spectacle are the terms "score", "reliability", and "assumption".
Call a Measure a "Measure", Not a "Score":
Even in our best writing we sometimes confuse ourselves and our readers by using the term "score" to convey two seriously different meanings.
The first meaning, sometimes specialized as "raw score", is useful. This "score" refers to a count of observed right answers, rating scale categories or partial credit steps. We count concrete events - however different in qualitative detail - as exchangeable replications of a single idea.
The second meaning, as a measure, is misleading. This version of "score" is often misused for awkward concoctions of raw counts which are then taken for genuine measures. When such "scores" are mistaken for measures and subjected to linear statistical analysis, the results are always wrong to some unknown extent.
We began moving away from nonlinear, test-dependent raw scores to their transformations into linear, test-free measures long ago. Our work is distinguished by the care we take to avoid the error of mistaking scores for measures. Why not be equally careful to use the noble term "measure" when we write and talk about the product of our analyses? Let's not hide our lovely light under that old decrepit barrel, the misleading term "score"!
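The nonlinearity of raw scores is easy to demonstrate. As a minimal sketch (a bare logit transform of proportion-correct, not a full Rasch calibration; the 20-item test is hypothetical):

```python
import math

def raw_to_logit(right: int, length: int) -> float:
    """Convert a raw count of right answers into a logit measure.

    This is the simplest score-to-measure transformation, ln(p / (1 - p));
    a sketch only -- an operational Rasch analysis also uses the item
    difficulties.
    """
    p = right / length
    return math.log(p / (1 - p))

# Equal raw-score gains are not equal measure gains: on a 20-item test,
# moving from 10 right to 12 right is a much smaller step in logits than
# moving from 16 right to 18 right.
central_gain = raw_to_logit(12, 20) - raw_to_logit(10, 20)
extreme_gain = raw_to_logit(18, 20) - raw_to_logit(16, 20)
print(round(central_gain, 2), round(extreme_gain, 2))
```

The same two-count raw-score difference thus stretches or shrinks in measure units depending on where it sits on the test, which is exactly why linear statistics applied to raw scores mislead.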
Think of "Measurement Error", Not "Reliability":
Test reliabilities are not useful indicators of the precision, accuracy or reproducibility of test measures. Reliabilities are sample-specific and therefore limited as general characterizations of tests. They only tell how well a test worked on some particular past occasion with some particular past sample. Reliabilities are no more than bits of local history about "once upon a time" applications to long-vanished samples.
The standard error of measurement (SEM), however, is sample-free and hence test-specific. When sample and test mismatch, the SEMs for that sample are larger than the SEMs for a sample which matches the test. But this variation of the SEM with test-score extremeness is a fixed, sample-free property of the test and can be deduced precisely for any anticipated application. The test-specific pattern of SEMs specifies exactly how well the test can be expected to perform on any application to any sample - past, present or future.
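A sketch of this fixed pattern, using the standard Rasch result that the model standard error of a measure is the reciprocal square root of the test information (the 20-item test below, with difficulties spread uniformly from -2 to +2 logits, is hypothetical):

```python
import math

def sem(theta: float, difficulties: list[float]) -> float:
    """Model standard error of a Rasch measure theta on a test with the
    given item difficulties (in logits): 1 / sqrt(test information)."""
    info = 0.0
    for d in difficulties:
        p = 1.0 / (1.0 + math.exp(-(theta - d)))  # Rasch success probability
        info += p * (1.0 - p)                      # item information at theta
    return 1.0 / math.sqrt(info)

# A hypothetical 20-item test centered at 0 logits:
items = [-2 + 4 * i / 19 for i in range(20)]

# SEM is smallest where person and test match, and larger at the extremes --
# a property of the test itself, known before any sample is measured.
for theta in (-3.0, 0.0, 3.0):
    print(theta, round(sem(theta, items), 2))
```

Reliability, by contrast, mixes this fixed SEM pattern with the measure variance of a particular sample, so the very same test yields a low reliability on a narrow sample and a high one on a dispersed sample.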
"Specifications", Not "Assumptions":
Poor, weak, speculative "assumptions" have no useful place in discussions of models. "Assumptions" give a profoundly wrong impression about models and their use. "Assumptions", and the ever popular "violations" they lead to, make a model seem a helpless maid on a reckless blind date with dangerous data.
The purpose of a model is to enforce the discipline of a strong theory by applying the demanding and precisely expressed "specifications" the theory calls for. The scientific questions are:
NOT: "Does the model fit the data?"
     "Is the model violated?"
BUT: "Can the data fit the model?"
     "Are the data useful?"
The "specifications" of a model are its raison d'être and its modus operandi. The scientific value of the Rasch model is what it specifies - and hence requires - for data. The Rasch model specifies that, for data to be useful for the construction of measurement, they must be collected and organized so that they can stochastically approximate:
a. a single invariant conjoint order of item and person parameters;
b. item and person parameter separability,
    i.e. sample-free item calibration and test-free person measurement,
    i.e. sufficient statistics;
c. local independence of the observations,
    i.e. independence among the residual differences between the observed and estimated data.
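Specifications (a) and (b) can be checked directly against the dichotomous Rasch model itself. A minimal sketch, with hypothetical item difficulties and person abilities in logits:

```python
import math

def p_right(ability: float, difficulty: float) -> float:
    """Dichotomous Rasch model: probability of a right answer."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Hypothetical item difficulties and person abilities, in logits.
difficulties = [-1.5, -0.3, 0.8, 2.0]
abilities = [-2.0, 0.0, 1.0, 3.0]

# Specification (a): a single invariant conjoint order -- every person,
# however able, orders the items the same way by probability of success.
for b in abilities:
    probs = [p_right(b, d) for d in difficulties]
    assert probs == sorted(probs, reverse=True)

def log_odds(p: float) -> float:
    return math.log(p / (1 - p))

# Specification (b): separability -- the log-odds contrast between two
# items is the same for every person (it equals their difficulty
# difference), so item comparisons are sample-free.
for b in abilities:
    contrast = log_odds(p_right(b, difficulties[0])) - log_odds(p_right(b, difficulties[1]))
    assert abs(contrast - (difficulties[1] - difficulties[0])) < 1e-9
print("specifications (a) and (b) hold for these values")
```

Real data, of course, only approximate these specifications stochastically; fit analysis measures how closely they do.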
Analysis of the fit of data to these specifications is the statistical device by which data are evaluated for their measurement potential - for their measurement validity. Only a model which implements well-defined intentions through its definitive "specifications" can show us which data can serve our purposes and contribute to knowledge and which data cannot.
Scores, Reliabilities and Assumptions. B. Wright. Rasch Measurement Transactions, 1991, 5:3, pp. 157-158.
The URL of this page is www.rasch.org/rmt/rmt53a.htm