Rasch Models for Measurement in Educational and Psychological Research
Education Research and Perspectives. Vol. 9, No. 1 June 1982
This special issue is devoted to Rasch models for measurement: models that are applied increasingly often in quantitative educational and psychological research. One of the key features of these models is that they emphasize the study of individual persons rather than populations. Another is that, when data accord with the models, they permit the measurement of each person to be independent of the particular test questions chosen from a well-defined class of questions. Early studies of these models emphasized procedures for estimating the parameters of the simplest model, which is appropriate for dichotomously scored test questions.
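The independence of person measurement from the particular questions chosen can be illustrated with the simplest dichotomous Rasch model. The sketch below is illustrative only (the function names and parameter values are our own, not drawn from any paper in this issue); it uses the standard form in which the probability of success depends only on the difference between a person's ability and an item's difficulty.

```python
import math

def rasch_p(beta, delta):
    """Probability of a correct response under the dichotomous Rasch model:
    P(x = 1) = exp(beta - delta) / (1 + exp(beta - delta)),
    where beta is the person's ability and delta the item's difficulty."""
    return math.exp(beta - delta) / (1 + math.exp(beta - delta))

def log_odds(beta, delta):
    # Log odds of success; in the Rasch model this reduces to beta - delta.
    p = rasch_p(beta, delta)
    return math.log(p / (1 - p))

# The comparison of two items is the same for every person:
# log_odds(beta, d1) - log_odds(beta, d2) = d2 - d1, whatever beta is.
for beta in (-2.0, 0.0, 3.0):
    diff = log_odds(beta, 0.5) - log_odds(beta, 1.5)
    print(round(diff, 6))  # 1.0 for every beta
```

The same cancellation works in the other direction, which is why persons can be compared independently of which well-defined questions they happened to attempt.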
Recently, emphasis has turned, firstly, to generalizing the principles of the simple model to models appropriate for more complex data collection designs and, secondly, to examining methods for checking the accordance, or fit, between the chosen model and the observed data.
One paper that deals with models appropriate for responses in multiple categories is that by Masters and Wright. In that paper they outline the steps required in defining a variable from the perspective of Rasch models. Latimer applies a Rasch model to check hypotheses regarding the psychological processes involved in reading comprehension. The model he applies uses only dichotomously scored responses, but the difficulties of the reading tasks are taken to be made up of various combinations of the difficulties of only four more elementary tasks.
The paper by Kissane offers new insights into the controversial topic of the measurement of change. He argues that the main emphasis in the study of change should be on the study of the rate of change, and that as a result, at least three measures across time are required. He also demonstrates how the principles of the Rasch models can be applied to the study of the rate of change.
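Kissane's point about three measures can be made concrete with a minimal sketch, assuming nothing beyond ordinary least squares (the function name and data are our own illustration, not his method): two occasions determine a line exactly, so only a third occasion lets one both estimate a rate of change and ask whether that rate is constant.

```python
def rate_of_change(times, measures):
    """Least-squares slope through (time, measure) pairs -- a simple
    estimate of the rate of change once three or more measures exist."""
    n = len(times)
    t_bar = sum(times) / n
    m_bar = sum(measures) / n
    num = sum((t - t_bar) * (m - m_bar) for t, m in zip(times, measures))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den

# With only two occasions the slope is forced through both points;
# a third occasion leaves a residual that can be checked against the model.
print(rate_of_change([0, 1, 2], [10.0, 11.0, 12.0]))  # 1.0
```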
Andrich's paper shows how the concept of reliability, found in classical test theory (CTT), can be accommodated within the framework of Rasch models, and how this framework helps one appreciate more clearly the uses and limitations of the traditional KR-20 index of internal consistency.
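For readers unfamiliar with the index Andrich discusses, KR-20 for k dichotomous items is (k/(k-1)) * (1 - sum of item variances p*q / variance of total scores). A minimal sketch of that standard formula (the function name and toy data are our own):

```python
def kr20(responses):
    """KR-20 internal-consistency index for dichotomous (0/1) item scores.
    responses: list of persons, each a list of k item scores."""
    n = len(responses)
    k = len(responses[0])
    # Sum of item variances p*q, with p the proportion answering item i correctly.
    sum_pq = 0.0
    for i in range(k):
        p = sum(person[i] for person in responses) / n
        sum_pq += p * (1 - p)
    # Variance of total scores (population form).
    totals = [sum(person) for person in responses]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Perfectly consistent response patterns give a KR-20 of 1.
print(kr20([[1, 1], [1, 1], [0, 0], [0, 0]]))  # 1.0
```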
The other four papers deal explicitly with tests of fit between the model and the data. Consistent with the emphasis on persons, a feature of the analysis of test data from the perspective of Rasch models is the study of the internal consistency of the responses of each person, that is, the study of 'person-fit'. Smith and Hedges provide evidence that some of the more frequently used tests of fit associated with items can be applied equally well to persons. Douglas reinforces this point with his comments on the central role of the residual between the observed response of each person to each item and the response predicted from the model. In Bell's paper, statistics for item-fit and person-fit are shown to be symmetrical, and these are related, respectively, to the ideas of item discrimination and person reliability. The paper shows explicitly that the values of the person-fit indices are highly correlated with the values of the person reliabilities. Rost's paper demonstrates that, for most practical purposes, indices of fit based on the numerically simpler unconditional likelihood ratio statistics are just as powerful as those obtained from the more complex conditional likelihood ratio tests. Rennie's research note also deals with fit, but not from a straightforward statistical approach: she shows how an examination of the asymmetry of parameter estimates, which might reasonably be expected to be symmetrical, reveals certain response sets.
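The residual that these fit papers build on can be sketched in its simplest unweighted mean-square form. This is a generic illustration, not the specific statistic of any paper above; the function names are our own, and the standardized residual z = (x - P) / sqrt(P(1 - P)) is the standard one for a dichotomous response x with model probability P.

```python
import math

def rasch_p(beta, delta):
    # Dichotomous Rasch probability of success.
    return math.exp(beta - delta) / (1 + math.exp(beta - delta))

def person_fit_meansq(responses, beta, deltas):
    """Unweighted mean-square person-fit: the mean of the squared
    standardized residuals z = (x - P) / sqrt(P(1 - P)) over the items
    a person answered. Values far above 1 flag erratic response strings."""
    z2 = []
    for x, delta in zip(responses, deltas):
        p = rasch_p(beta, delta)
        z = (x - p) / math.sqrt(p * (1 - p))
        z2.append(z * z)
    return sum(z2) / len(z2)

# A person of ability 0 on two items of difficulty 0: each residual is +/-0.5
# against a standard deviation of 0.5, so the mean square is exactly 1.
print(person_fit_meansq([1, 0], 0.0, [0.0, 0.0]))  # 1.0
```

Exchanging the roles of persons and items in the same residuals yields item-fit, which is the symmetry Bell's paper exploits.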
Important principles in choosing a Rasch model are brought into focus by the fact that these papers deal with both the generalization of the simplest of Rasch models for measurement, and the tests of fit between empirical data and the chosen model. Firstly, while models may be postulated for more complex data collection designs than those involving a dichotomous response, the elaborations retain the separation of the person parameters from the item or question parameters. Thus the elaboration of models is not simply an ad hoc exercise designed to improve the modelling of the data. Secondly, the emphasis on checking the way the data and the chosen model might not accord with one another demonstrates a concern for understanding the principles behind the data. The papers in this issue show that the approach to test construction and analysis, for both applied and research purposes, involves a constant interplay between these strong models for measurement and the data collection designs which are employed.
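The separation of person and item parameters that the elaborated models retain can be made concrete for the simplest dichotomous model. In the standard notation (with $\beta$ for person ability and $\delta_i$ for item difficulty; this worked step is our own illustration, not drawn from any paper in this issue), conditioning on a person's total score on a pair of items eliminates $\beta$ entirely:

```latex
\Pr(x_1 = 1 \mid x_1 + x_2 = 1) \;=\;
\frac{e^{\beta-\delta_1}}{e^{\beta-\delta_1} + e^{\beta-\delta_2}}
\;=\;
\frac{e^{-\delta_1}}{e^{-\delta_1} + e^{-\delta_2}}.
```

The two items can therefore be compared without reference to the ability of the person who answered them, which is the sense in which the separation is not merely an ad hoc modelling convenience.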
David Andrich and Graham Douglas, Guest Editors
Education Research and Perspectives. Vol. 9, No. 1 June 1982, 5-6
Reproduced with permission of The Editors, The Graduate School of Education, The University of Western Australia. (Clive Whitehead, Oct. 29, 2002)