Rasch Models for Measurement in Educational and Psychological Research
Education Research and Perspectives, Vol. 9, No. 1, June 1982, 5-6
This special issue is devoted to Rasch models for measurement: models which are being applied more and more frequently in quantitative educational and psychological research. One of the key features of these models is that they emphasize the study of individual persons rather than populations. Another is that, when data accord with the models, they permit the measurement of each person to be independent of the particular test questions chosen from a well-defined class of questions. Early studies of these models emphasized procedures for estimating the parameters of the simplest model, which is appropriate for dichotomously scored test questions.
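The simplest of these models, for dichotomously scored questions, can be sketched as follows. The function name and parameter symbols here are illustrative, not drawn from the papers in this issue; the form of the model itself is the standard dichotomous Rasch model.

```python
import math

def rasch_probability(beta, delta):
    """Probability of a correct (scored-1) response under the
    dichotomous Rasch model. The probability depends only on the
    difference between the person's ability (beta) and the item's
    difficulty (delta), both on the same latent continuum."""
    return 1.0 / (1.0 + math.exp(-(beta - delta)))

# A person whose ability equals an item's difficulty has a 0.5
# probability of success, whichever item from the class is chosen.
p = rasch_probability(beta=1.2, delta=1.2)
```

It is this dependence on a single difference, beta minus delta, that underlies the claim that measurement of a person can be made independent of the particular questions chosen.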
Recently, emphasis has turned, firstly, to generalizing the principles of the simple model to models appropriate for more complex data collection designs and, secondly, to examining methods for checking the accordance, or fit, between the chosen model and the observed data.
One paper which deals with models appropriate for responses in multiple categories is that by Masters and Wright. In that paper they outline the various steps required in defining a variable from the perspective of Rasch models. Latimer applies a Rasch model to check hypotheses regarding the psychological processes involved in reading comprehension. The model he applies uses only dichotomously scored responses, but the difficulties of the reading tasks are taken to be made up of various combinations of the difficulties of only four more elementary tasks.
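One way to formalize a decomposition of this kind (the notation here is ours, not Latimer's) is to write each task difficulty as a weighted sum of a small number of elementary difficulties:

```latex
\delta_i \;=\; \sum_{k=1}^{4} q_{ik}\,\eta_k
```

where $\eta_k$ is the difficulty of the $k$-th elementary task and $q_{ik}$ indicates how the elementary tasks combine in reading task $i$. The hypotheses about psychological processes are then hypotheses about the $\eta_k$ and the combination weights.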
The paper by Kissane offers new insights into the controversial topic of the measurement of change. He argues that the main emphasis in the study of change should be on the study of the rate of change, and that as a result, at least three measures across time are required. He also demonstrates how the principles of the Rasch models can be applied to the study of the rate of change.
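A minimal numerical illustration of the argument for at least three measures is the least-squares slope through repeated measures; the function below is our own sketch, not Kissane's method.

```python
def rate_of_change(measures, times):
    """Least-squares slope of measures against times: a minimal
    illustration of why at least three time points are needed.
    With only two measures the fitted line passes through both
    points exactly, leaving no way to check whether change is in
    fact proceeding at a constant rate."""
    n = len(times)
    mean_t = sum(times) / n
    mean_m = sum(measures) / n
    num = sum((t - mean_t) * (m - mean_m) for t, m in zip(times, measures))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den
```

With three or more measures, the discrepancies of the measures from the fitted line provide the check on the hypothesized pattern of change.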
Andrich's paper shows how the concept of reliability, found in traditional classical test theory (CTT), can be accommodated within the framework of Rasch models, and how this framework helps one appreciate more clearly the uses and the limitations of the traditional KR-20 index of internal consistency.
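For reference, the KR-20 index discussed by Andrich is computed as below. This is the classical Kuder-Richardson formula for dichotomous items, not a procedure taken from the paper itself; the function name is ours.

```python
def kr20(responses):
    """KR-20 internal-consistency index for dichotomously (0/1)
    scored items. `responses` is a list of per-person response
    lists, one entry per item."""
    n_items = len(responses[0])
    n_persons = len(responses)
    # Proportion of persons answering each item correctly.
    p = [sum(person[i] for person in responses) / n_persons
         for i in range(n_items)]
    # Sum of item variances p(1 - p).
    sum_pq = sum(pi * (1.0 - pi) for pi in p)
    # Population variance of the persons' total scores.
    totals = [sum(person) for person in responses]
    mean = sum(totals) / n_persons
    var = sum((t - mean) ** 2 for t in totals) / n_persons
    return (n_items / (n_items - 1)) * (1.0 - sum_pq / var)
```

Because the index depends on the variance of total scores in the particular sample, its value is sample-dependent, which is one of the limitations the Rasch framework helps make explicit.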
The other four papers deal explicitly with tests of fit between the model and the data. Consistent with the emphasis on persons, a feature of the analysis of test data from the perspective of Rasch models is the study of the internal consistency of the responses of each person, that is, the study of 'person-fit'. Smith and Hedges provide evidence that some of the more frequently used tests of fit associated with items can be applied equally well to persons. Douglas reinforces this point with his comments on the definitive position of the residual between the observed response of each person to each item and that predicted from the model. In Bell's paper, statistics for item-fit and person-fit are shown to be symmetrical, and these are related respectively to the ideas of item discrimination and person reliability. The paper shows explicitly that the values of the person-fit indices are highly correlated with the values of the person reliabilities. Rost's paper demonstrates that, for most practical purposes, indices of fit based on the numerically simpler unconditional likelihood-ratio statistics are just as powerful as those obtained from the more complex conditional likelihood-ratio tests. Rennie's research note also deals with fit, though not from a straightforward statistical approach: she shows how an examination of asymmetry in parameter estimates, which may reasonably be expected to be symmetrical, reveals certain response sets.
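The residual underlying person-fit indices can be sketched as follows for the dichotomous model. The function names are illustrative, and the mean-square summary mentioned in the comment is one common convention rather than the specific statistic of any paper in this issue.

```python
import math

def rasch_p(beta, delta):
    """Model probability of a score of 1 under the dichotomous
    Rasch model, for person ability beta and item difficulty delta."""
    return 1.0 / (1.0 + math.exp(-(beta - delta)))

def person_fit_residuals(x, beta, deltas):
    """Standardized residuals for one person's responses:
    (observed - expected) divided by the model standard deviation.
    `x` is the person's 0/1 response vector, `beta` the person's
    ability estimate, `deltas` the item difficulty estimates."""
    residuals = []
    for xi, delta in zip(x, deltas):
        p = rasch_p(beta, delta)
        residuals.append((xi - p) / math.sqrt(p * (1.0 - p)))
    return residuals

# Averaging the squared residuals over items gives a mean-square
# person-fit index; the same residuals averaged over persons for a
# fixed item give the symmetrical item-fit index.
```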
Important principles in choosing a Rasch model are brought into focus by the fact that these papers deal with both the generalization of the simplest of Rasch models for measurement, and the tests of fit between empirical data and the chosen model. Firstly, while models may be postulated for more complex data collection designs than those involving a dichotomous response, the elaborations retain the separation of the person parameters from the item or question parameters. Thus the elaboration of models is not simply an ad hoc exercise designed to improve the modelling of the data. Secondly, the emphasis on checking the way the data and the chosen model might not accord with one another demonstrates a concern for understanding the principles behind the data. The papers in this issue show that the approach to test construction and analysis, for both applied and research purposes, involves a constant interplay between these strong models for measurement and the data collection designs which are employed.
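The separation of person and item parameters referred to above can be seen in the simplest case of two dichotomous items. Conditioning on the person's total score eliminates the person parameter $\beta$:

```latex
\Pr\{(x_1,x_2)=(1,0)\mid r=1\}
  \;=\; \frac{e^{\beta-\delta_1}}{e^{\beta-\delta_1}+e^{\beta-\delta_2}}
  \;=\; \frac{e^{-\delta_1}}{e^{-\delta_1}+e^{-\delta_2}}
```

The conditional probability depends only on the item difficulties $\delta_1$ and $\delta_2$, so the items can be compared independently of which persons responded; elaborated models are chosen so that this property is retained.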
David Andrich and Graham Douglas, Guest Editors
Reproduced with permission of The Editors, The Graduate School of Education, The University of Western Australia. (Clive Whitehead, Oct. 29, 2002)