"To the extent that validation is properly characterized as involving interpretations and inferences about scores, clearly the properties of scores are central to validation. More specifically, it is the properties of scale scores [measures] that are of principal concern, because it is scale scores, not raw scores, that are used to make decisions. It is unfortunate that most of the literature about validation fails to make explicit this distinction. For the most part, it is scale scores, not raw scores, that are the lens through which test users indirectly observe the student behavior elicited by a test or assessment. As such, I believe that validation is clearly inadequate without clear explanations of scale scores, which necessarily involves well-reasoned defenses of the assumptions involved in scaling. Further, these explanations should be made in a manner that is understandable to test users and policy-makers. Test users should not be required or expected to blindly accept scaling results.
"It is eminently clear that scaling usually involves complex statistical operations. It is much less recognized that scaling often necessitates choosing among value-laden assumptions. Such choices should not be made in a psychometric vacuum; rather, they should be heavily informed by practice. I believe the role of scaling in drawing inferences about test scores is one of the most neglected aspects of validation, and the notion that scaling is (or should be) solely a psychometric matter may be the single most widely held misconception about measurement."
Robert L. Brennan (1998). Misconceptions at the intersection of measurement theory and practice. Educational Measurement: Issues and Practice, 17(1), 8.
"The invention of deliberately oversimplified theories is one of the major techniques of science, particularly of the 'exact' sciences, which make extensive use of mathematical analysis. If a biophysicist can usefully employ simplified models of the cell and the cosmologist simplified models of the universe then we can reasonably expect that simplified games may prove to be useful models for more complicated conflicts."
John Williams, The Compleat Strategyst. New York: McGraw-Hill, 1954.
And we can also reasonably expect that simplified representations of complex interpersonal relationships, attitudes, abilities, performances, etc., will prove useful, as has in fact been repeatedly demonstrated by Rasch measurement.

William P. Fisher, Jr.

Quotations. Florin R.E. Rasch Measurement Transactions, 2001, 15:2, p. 822.
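The "deliberately oversimplified" model that Fisher credits can be written down in a few lines. As a hedged sketch (the function and variable names below are illustrative, not from the original column): the dichotomous Rasch model says the probability that a person succeeds on an item depends only on the difference between the person's ability and the item's difficulty, both expressed in logits.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Dichotomous Rasch model: probability of a correct response
    for a person of ability theta on an item of difficulty b,
    P(X=1) = exp(theta - b) / (1 + exp(theta - b)),
    written here in the numerically equivalent logistic form."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, success is a coin flip:
p_even = rasch_probability(theta=1.0, b=1.0)   # 0.5

# A more able person has a higher probability on the same item:
p_high = rasch_probability(theta=2.0, b=0.0)
p_low = rasch_probability(theta=-2.0, b=0.0)
```

The simplification is exactly the point of the Williams quotation: by reducing a complex performance to a single person parameter and a single item parameter on a common scale, the model becomes tractable enough to test against data.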
The URL of this page is www.rasch.org/rmt/rmt152h.htm