Scores, Reliabilities and Assumptions

Testing literature is rife with incomprehensible papers and reports whose aim seems not communication but obscurantism. Contributing to this sorry spectacle are the terms "score", "reliability", and "assumption".

Call a Measure a "Measure", Not a "Score":
Even in our best writing we sometimes confuse ourselves and our readers by using the term "score" to convey two seriously different meanings.

The first meaning, sometimes specialized as "raw score", is useful. This "score" refers to a count of observed right answers, rating scale categories or partial credit steps. We count concrete events - however different in qualitative detail - as exchangeable replications of a single idea.

The second meaning, "score" as a measure, is misleading. This version of "score" is applied to awkward concoctions of raw counts that are then mistaken for genuine measures. When such "scores" are taken for measures and subjected to linear statistical analysis, the results are always wrong to some unknown extent.

We began moving away from nonlinear, test-dependent raw scores to linear, test-free measures long ago. Our work is distinguished by the care we take to avoid the error of mistaking scores for measures. Why not be equally careful to use the noble term "measure" when we write and talk about the product of our analyses? Let's not hide our lovely light under that old decrepit barrel, the misleading term "score"!
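
To make the distinction concrete, here is a minimal Python sketch (not from the original article; the 25-item test and the simple log-odds transform are illustrative assumptions - a genuine Rasch measure would also adjust for the difficulties of the particular items taken):

```python
import math

def raw_score(responses):
    """A raw score: a count of right answers, i.e. exchangeable replications."""
    return sum(responses)

def score_to_logit(r, n_items):
    """Log-odds transform of raw score r on an n_items dichotomous test.

    Raw scores are bounded and nonlinear; the log-odds stretch them
    toward a linear scale. A genuine Rasch measure would additionally
    adjust for the difficulties of the items actually taken.
    """
    if r <= 0 or r >= n_items:
        raise ValueError("extreme scores have no finite measure")
    return math.log(r / (n_items - r))

# Equal raw-score gains are not equal measure gains:
for r in (10, 15, 20):
    print(r, round(score_to_logit(r, 25), 2))   # -0.41, 0.41, 1.39
# The same 5-point gain is worth about 0.81 logits in the middle of the
# test but about 0.98 logits nearer the top: raw scores are not linear.
```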

Think of "Measurement Error", Not "Reliability":
Test reliabilities are not useful indicators of the precision, accuracy or reproducibility of test measures. Reliabilities are sample specific and therefore limited as general characterizations of tests. They only tell how well a test worked on some particular past occasion with some particular past sample. Reliabilities are no more than bits of local history about "once upon a time" applications to long vanished samples.

The standard error of measurement (SEM), however, is sample-free and hence test-specific. When sample and test are mismatched, the SEMs for that sample are larger than the SEMs for a sample that matches the test. But this variation of the SEM with test-score extremeness is a fixed, sample-free property of the test and can be deduced precisely for any anticipated application. The test-specific pattern of SEMs specifies exactly how well the test can be expected to perform in any application to any sample - past, present or future.
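
A small sketch of why this is so (assuming dichotomous Rasch items; the evenly spaced difficulties are invented for illustration): given only the test's item calibrations, the SEM at every measure level can be tabled in advance, with no sample in sight.

```python
import math

def rasch_p(b, d):
    """Probability of success for person measure b on item difficulty d."""
    return 1.0 / (1.0 + math.exp(d - b))

def sem(b, difficulties):
    """Standard error of measurement at measure b:
    1 / sqrt(test information), where each item contributes p*(1-p)."""
    info = sum(p * (1 - p) for p in (rasch_p(b, d) for d in difficulties))
    return 1.0 / math.sqrt(info)

# Invented item difficulties for a 20-item test, spread from -2 to +2 logits:
test = [-2 + 4 * i / 19 for i in range(20)]

for b in (-3, -1, 0, 1, 3):
    print(f"measure {b:+} logits: SEM = {sem(b, test):.2f}")
# SEMs are smallest where person and test are well matched (near 0 here)
# and grow toward the extremes - the fixed, sample-free pattern the
# text describes, computed without reference to any sample.
```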

"Specifications", Not "Assumptions":
Poor, weak, speculative "assumptions" have no useful place in discussions of models. "Assumptions" give a profoundly wrong impression about models and their use. "Assumptions", and the ever popular "violations" they lead to, make a model seem a helpless maid on a reckless blind date with dangerous data.

The purpose of a model is to enforce the discipline of a strong theory by applying the demanding and precisely expressed "specifications" the theory calls for. The scientific questions are:

NOT:"Does the model fit the data?"
"Is the model violated?"

BUT:"Can the data fit the model?"
"Are the data useful?"

The "specifications" of a model are its raison d'etre and its modus operandi. The scientific value of the Rasch model is what it specifies - and hence requires - for data. The Rasch model specifies that, for data to be useful for the construction of measurement, they must be collected and organized so that they can stochastically approximate:

a. a single invariant conjoint order of item and person parameters,
i.e. unidimensionality.
b. item and person parameter separability,
i.e. sample-free item calibration and test-free person measurement,
i.e. sufficient statistics.
c. local independence of the observations,
i.e. independence among the residual differences between the observed and estimated data.
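
A minimal numerical sketch of specification (b), parameter separability (the three item difficulties and the response patterns are invented for illustration): under the dichotomous Rasch model, the probability of a response pattern given its raw score is the same for every person measure, so the count is a sufficient statistic.

```python
import math

def p_success(b, d):
    """Dichotomous Rasch model: P(x=1) = exp(b - d) / (1 + exp(b - d))."""
    return math.exp(b - d) / (1 + math.exp(b - d))

def pattern_prob(b, difficulties, pattern):
    """Probability of a complete right/wrong response pattern for person b."""
    prob = 1.0
    for d, x in zip(difficulties, pattern):
        p = p_success(b, d)
        prob *= p if x else 1 - p
    return prob

diffs = [-1.0, 0.0, 1.5]                               # invented difficulties
score_2_patterns = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]   # every way to score r = 2

for b in (-1.0, 0.5, 2.0):                             # three different persons
    total = sum(pattern_prob(b, diffs, pat) for pat in score_2_patterns)
    print(round(pattern_prob(b, diffs, (1, 1, 0)) / total, 4))
# Prints the same value (about 0.7662) for every b: given the raw score,
# the pattern probabilities no longer involve the person parameter, so
# the count is sufficient and item calibration can be sample-free.
```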

Analysis of the fit of data to these specifications is the statistical device by which data are evaluated for their measurement potential - for their measurement validity. Only a model which implements well-defined intentions through its definitive "specifications" can show us which data can serve our purposes and contribute to knowledge and which data cannot.
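
One hedged sketch of such a fit analysis (the five item difficulties and both response records are invented; operational programs report infit and outfit statistics of this same residual-based form):

```python
import math

def p_success(b, d):
    """Dichotomous Rasch model probability of success."""
    return math.exp(b - d) / (1 + math.exp(b - d))

def outfit_meansquare(b, difficulties, responses):
    """Unweighted mean-square fit: the average squared standardized
    residual between observed and model-expected responses. Values far
    above 1.0 flag observations that contradict the specified order."""
    z2 = []
    for d, x in zip(difficulties, responses):
        p = p_success(b, d)
        z2.append((x - p) ** 2 / (p * (1 - p)))  # squared standardized residual
    return sum(z2) / len(z2)

diffs = [-2.0, -1.0, 0.0, 1.0, 2.0]
consistent    = [1, 1, 1, 0, 0]   # follows the specified conjoint order
contradictory = [0, 0, 1, 1, 1]   # easy items missed, hard items passed

for resp in (consistent, contradictory):
    print(round(outfit_meansquare(0.0, diffs, resp), 2))   # about 0.40 vs 4.25
# The contradictory record yields a far larger mean-square: the data,
# not the model, are on trial.
```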



Scores, Reliabilities and Assumptions, B. Wright … Rasch Measurement Transactions, 1991, 5:3, pp. 157-158

