Testing literature is rife with incomprehensible papers and reports whose aim seems not communication but obscurantism. Contributing to this sorry spectacle are the terms "score", "reliability", and "assumption".
Call a Measure a "Measure", Not a "Score":
Even in our best writing we sometimes confuse ourselves and our readers by using the term "score" to convey two seriously different meanings.
The first meaning, sometimes specialized as "raw score", is useful. This "score" refers to a count of observed right answers, rating scale categories, or partial credit steps. We count concrete events - however different in qualitative detail - as exchangeable replications of a single idea.
The second meaning, as a measure, is misleading. This version of "score" is often applied to awkward concoctions of raw counts which are then mistaken for genuine measures. But when such "scores" are mistakenly treated as measures and subjected to linear statistical analysis, the results are always wrong to some unknown extent.
We began moving away from nonlinear, test-dependent raw scores to their transformations into linear, test-free measures long ago. Our work is distinguished by the care we take to avoid the error of mistaking scores for measures. Why not be equally careful to use the noble term "measure" when we write and talk about the product of our analyses? Let's not hide our lovely light under that old decrepit barrel, the misleading term "score"!
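What that transformation looks like can be sketched concretely. The following Python fragment is a minimal illustration, not part of the original article: it assumes a hypothetical 20-item dichotomous test with invented item difficulties, and inverts the Rasch model's expected-score curve to convert a raw count into a linear logit measure.

```python
import math

def expected_score(theta, difficulties):
    """Expected raw score under the dichotomous Rasch model:
    sum over items of P(right) = exp(theta - b) / (1 + exp(theta - b))."""
    return sum(1.0 / (1.0 + math.exp(b - theta)) for b in difficulties)

def measure_from_score(raw_score, difficulties, lo=-10.0, hi=10.0):
    """Convert an interior raw score (0 < r < L) into a logit measure by
    bisection: find the theta whose expected raw score equals r.
    The expected score is strictly increasing in theta, so this converges."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if expected_score(mid, difficulties) < raw_score:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical 20-item test, difficulties spread evenly from -2 to +2 logits.
items = [-2.0 + 4.0 * i / 19 for i in range(20)]
for r in (1, 5, 10, 15, 19):
    print(f"raw score {r:2d}/20 -> measure {measure_from_score(r, items):+.2f} logits")
# Equal raw-score steps are not equal measure steps: the jump from 15 to 19
# spans far more logits than the jump from 5 to 10. The raw score is a
# nonlinear, test-dependent compression of the underlying linear measure.
```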
Think of "Measurement Error", Not "Reliability":
Test reliabilities are not useful indicators of the precision, accuracy, or reproducibility of test measures. Reliabilities are sample-specific and therefore limited as general characterizations of tests. They only tell how well a test worked on some particular past occasion with some particular past sample. Reliabilities are no more than bits of local history about "once upon a time" applications to long-vanished samples.
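The sample dependence is easy to demonstrate. Here is a minimal simulation, assuming the same hypothetical 20-item test as above and the standard person-separation formula, reliability = true variance / (true variance + error variance): the identical test yields very different "reliabilities" depending only on how spread out the sample happens to be.

```python
import math, random

def sem(theta, difficulties):
    """Rasch SEM at ability theta: 1 / sqrt(test information), where
    information is the sum over items of p * (1 - p)."""
    info = 0.0
    for b in difficulties:
        p = 1.0 / (1.0 + math.exp(b - theta))
        info += p * (1.0 - p)
    return 1.0 / math.sqrt(info)

def separation_reliability(abilities, difficulties):
    """Person separation reliability: the share of person variance
    not attributable to measurement error."""
    n = len(abilities)
    mean = sum(abilities) / n
    true_var = sum((t - mean) ** 2 for t in abilities) / (n - 1)
    err_var = sum(sem(t, difficulties) ** 2 for t in abilities) / n
    return true_var / (true_var + err_var)

random.seed(1)
items = [-2.0 + 4.0 * i / 19 for i in range(20)]        # hypothetical test
narrow = [random.gauss(0.0, 0.5) for _ in range(500)]   # homogeneous sample
wide = [random.gauss(0.0, 2.0) for _ in range(500)]     # heterogeneous sample
print(f"narrow sample: reliability = {separation_reliability(narrow, items):.2f}")
print(f"wide sample:   reliability = {separation_reliability(wide, items):.2f}")
# Same test, same error structure, very different "reliability" - the
# statistic describes the sample at least as much as it describes the test.
```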
The standard error of measurement (SEM), however, is sample-free and hence test-specific. When sample and test are mismatched, the SEMs for that sample are larger than the SEMs for a sample which matches the test. But this variation of the SEM with test-score extremeness is a fixed, sample-free property of the test and can be deduced precisely for any anticipated application. The test-specific pattern of SEMs specifies exactly how well the test can be expected to perform on any application to any sample - past, present or future.
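That fixed pattern can be computed directly from the model, before any examinee is tested. A minimal sketch, again assuming the hypothetical 20-item test: the SEM is 1/sqrt(test information), smallest where test and person are well matched and growing toward the extremes.

```python
import math

def sem(theta, difficulties):
    """Rasch SEM at ability theta: 1 / sqrt(test information)."""
    info = 0.0
    for b in difficulties:
        p = 1.0 / (1.0 + math.exp(b - theta))
        info += p * (1.0 - p)                     # item information
    return 1.0 / math.sqrt(info)

items = [-2.0 + 4.0 * i / 19 for i in range(20)]  # hypothetical test
for theta in (-4.0, -2.0, 0.0, 2.0, 4.0):
    print(f"ability {theta:+.0f} logits: SEM = {sem(theta, items):.2f}")
# The SEM is smallest near the center of the test's difficulty range and
# grows for extreme measures - a property deducible from the test alone,
# for any future sample.
```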
"Specifications", Not "Assumptions":
Poor, weak, speculative "assumptions" have no useful place in discussions of models. "Assumptions" give a profoundly wrong impression about models and their use. "Assumptions", and the ever-popular "violations" they lead to, make a model seem a helpless maid on a reckless blind date with dangerous data.
The purpose of a model is to enforce the discipline of a strong theory by applying the demanding and precisely expressed "specifications" the theory calls for. The scientific questions are:
NOT:"Does the model fit the data?"
"Is the model violated?"
BUT:"Can the data fit the model?"
"Are the data useful?"
The "specifications" of a model are its raison d'etre and its modus operandi. The scientific value of the Rasch model is what it specifies - and hence requires - for data. The Rasch model specifies that, for data to be useful for the construction of measurement, they must be collected and organized so that they can stochastically approximate:
a. a single invariant conjoint order of item and person parameters, i.e., unidimensionality;
b. item and person parameter separability, i.e., sample-free item calibration and test-free person measurement, i.e., sufficient statistics (illustrated in the sketch after this list);
c. local independence of the observations, i.e., independence among the residual differences between the observed and estimated data.
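Specification (b) can be shown numerically. A minimal Python sketch with three invented item difficulties: under the Rasch model the conditional probability of a response pattern, given the raw score, is the same at every ability level, which is exactly what "the raw score is a sufficient statistic" means and what makes sample-free calibration possible.

```python
import math
from itertools import product

def pattern_prob(pattern, theta, difficulties):
    """Probability of a full response pattern under the Rasch model."""
    prob = 1.0
    for x, b in zip(pattern, difficulties):
        p = 1.0 / (1.0 + math.exp(b - theta))
        prob *= p if x else (1.0 - p)
    return prob

difficulties = [-1.0, 0.0, 1.5]        # hypothetical 3-item test
target = (1, 0, 0)                     # right on the easiest item only
for theta in (-2.0, 0.0, 2.0):
    score_1_total = sum(pattern_prob(pat, theta, difficulties)
                        for pat in product((0, 1), repeat=3)
                        if sum(pat) == 1)
    cond = pattern_prob(target, theta, difficulties) / score_1_total
    print(f"theta = {theta:+.1f}: P(pattern 100 | raw score 1) = {cond:.4f}")
# The conditional probability is identical at every theta: once the raw
# score is known, the pattern carries no further information about the
# person, so person and item parameters can be estimated separately.
```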
Analysis of the fit of data to these specifications is the statistical device by which data are evaluated for their measurement potential - for their measurement validity. Only a model which implements well-defined intentions through its definitive "specifications" can show us which data can serve our purposes and contribute to knowledge and which data cannot.
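How that statistical device works can also be sketched. A minimal simulation, assuming the hypothetical 20-item test and, for simplicity, treating the generating parameters as known (a real analysis would use estimates): each observation yields a standardized residual, and an item's "outfit" mean-square, the average squared residual, should sit near 1.0 when the data meet the specifications.

```python
import math, random

def p_right(theta, b):
    """Dichotomous Rasch model: probability of a right answer."""
    return 1.0 / (1.0 + math.exp(b - theta))

random.seed(2)
items = [-2.0 + 4.0 * i / 19 for i in range(20)]          # hypothetical test
persons = [random.gauss(0.0, 1.0) for _ in range(200)]    # simulated sample
data = [[1 if random.random() < p_right(t, b) else 0 for b in items]
        for t in persons]

def item_outfit(j):
    """Outfit mean-square for item j: mean of squared standardized
    residuals z = (x - p) / sqrt(p(1-p)). Near 1.0 means the responses
    to this item behave as the specifications require."""
    total = 0.0
    for t, row in zip(persons, data):
        p = p_right(t, items[j])
        total += (row[j] - p) ** 2 / (p * (1.0 - p))
    return total / len(persons)

for j in (0, 10, 19):
    print(f"item {j:2d} (difficulty {items[j]:+.2f}): outfit = {item_outfit(j):.2f}")
# Data simulated to meet the specifications hover near 1.0; data that
# cannot fit the model announce themselves with inflated mean-squares.
```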
Scores, Reliabilities and Assumptions. B. Wright. Rasch Measurement Transactions, 1991, 5:3, pp. 157-158.