Effective data analysis begins with the realization that neither the model nor the data are true. Both are inventions and fictions. The purpose of a model is (1) to represent important requirements in a decisive way (the more decisive the better) and (2) to be useful for the construction of knowledge. Throughout physics, chemistry, business and sports, a central requirement for a measurement model is that, as far as the model is concerned, comparisons of objects are independent of which agents are used, and comparisons of agents are independent of which objects are used. The model that operates in this manner to construct quantities from ordered qualities is the Rasch model.
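This invariance requirement can be made concrete with a short sketch (the code, function names and numerical values below are illustrative, not from the article). Under the dichotomous Rasch model, the log-odds comparison of two persons comes out the same whichever item is used to compare them:

```python
import math

def rasch_prob(ability, difficulty):
    """Dichotomous Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def log_odds(p):
    return math.log(p / (1.0 - p))

# Two persons compared on several different items: the log-odds
# difference between the persons is the same regardless of which
# item (agent) is used -- it is always theta_a - theta_b.
theta_a, theta_b = 1.5, 0.2          # illustrative person abilities (logits)
for delta in (-1.0, 0.0, 2.3):       # arbitrary item difficulties
    diff = log_odds(rasch_prob(theta_a, delta)) - log_odds(rasch_prob(theta_b, delta))
    print(round(diff, 6))            # 1.3 every time, whatever delta is
```

The same cancellation works in the other direction: comparing two items via their log-odds on any one person removes that person's ability from the comparison.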
The Rasch model is a conceptual ideal. No data can ever fit this (or any other) model. But the degree to which the data do cooperate with the model, and the local departures of the data from the ideal of the model, guide the management of the data. If at all possible, neither the data nor the model is accepted or rejected totally. Since the model represents our requirements, to reject it is to reject our line of inquiry. Since the data set contains what the world has to tell us, to reject it is to state that the world, or, at least that part of it, has no information relating to our concern. Very rarely, a variable is so poorly conceptualized, or a data set so confused, that no meaningful analysis can be performed. Then it is the data set, not the model, that is rejected.
The measurement model provides local indicators, fit statistics, of the degree to which the data are cooperating with the model's requirements. These local indicators identify pieces of data consistent with the main pattern, and also those least consistent. Those data consistent with the main pattern, usually the vast majority for any data set collected with any degree of thought and care, become the basis for measures and the construction of knowledge. The inconsistent data become a source of further discovery. They need to be studied in more detail, and, when the inconsistency is due to irrelevant activity, those pieces of data need to be put to one side or manipulated to be in accord with the main pattern.
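One way such local indicators can work is sketched below (a hedged illustration under the dichotomous Rasch model; the persons, items, abilities and difficulties are invented for the example). Each response gets a standardized residual; the responses with the largest residuals are the ones least consistent with the main pattern:

```python
import math

def standardized_residual(x, ability, difficulty):
    """z = (observed - expected) / model s.d. for one dichotomous response."""
    p = 1.0 / (1.0 + math.exp(-(ability - difficulty)))
    return (x - p) / math.sqrt(p * (1.0 - p))

# Illustrative responses: (person, item, response, ability, difficulty)
data = [
    ("P1", "I1", 1,  1.0, -1.0),
    ("P1", "I2", 0,  1.0,  2.5),
    ("P2", "I1", 1, -2.0, -1.0),
    ("P2", "I2", 1, -2.0,  2.5),   # surprising success: a likely guess
]

# Rank responses from least to most consistent with the model.
flagged = sorted(data, key=lambda r: -abs(standardized_residual(r[2], r[3], r[4])))
for person, item, x, th, d in flagged:
    z = standardized_residual(x, th, d)
    print(person, item, x, round(z, 2))
```

Here P2's success on the hard item I2 tops the list, which is exactly the kind of datum the article says should be studied in more detail before any decision is made about it.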
When diagnosis of inconsistent data indicates that an examinee has a spoiled performance (perhaps there is clear evidence of guessing, carelessness, or the like), the examinee's measure is also spoiled. To draw inferences from this measure about the ability of the examinee is to risk being misinformed. Sometimes the measure can be rescued by putting to one side the spoiled part of the data, and using only the part that cooperates with the model. Sometimes there is nothing to rescue. The data provide no firm base from which to infer the examinee's ability. Of course, if the purpose of the analysis is merely to award a prize for the most "correctly" marked squares on an answer sheet, even coffee stains, smudges and doodles count! But that is not science.
When an item has not functioned in the same way as most other items, it is not acting as an agent assisting in the measurement of the same variable. It is acting like a broken ruler. It is contributing not to measurement, but to noise, until either it is removed from the analysis or, preferably, mended. A common form of repair is to correct an errant scoring key.
What is the precise role of fit statistics in identifying spoiled performances or broken items? Is accurate knowledge of the distributions of the fit statistics for unspoiled performances, the "null" distributions, necessary for data analysis to proceed fairly, rationally, systematically, routinely and automatically? Automatically?? Empirical data are too complex in their origin for any automatic decision maker to be entirely successful. Fit statistics provide guidance, but not decisions. Decisions must be made within a larger context and for richer reasons than the null distribution of a fit statistic can provide. A substantive explanation of data identified as irregular is more important than the absolute magnitude or significance level of some fit statistic, chosen purely for its ability to aid the hunt for inconsistency.
Even were the null distribution available, it too is an ideal from which any real set of data inevitably falls short. Therefore, it is sufficient for thoughtful data analysis to know that the fit statistics can be relied upon (1) to order the persons from most underfitting to most overfitting, (2) to center this ordering near some easily remembered numerical value, such as 0 or 1, as a guide to where the transition from underfitting to overfitting occurs, and (3) to provide some frame of reference, so that a fit statistic value of, say, 2 continues to represent about the same degree of spoiling for similar data collected under similar conditions. Whatever the values of the fit statistics, extreme cases require examination to discover why they are spoiled, or to confirm their freedom from spoilage.
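The mean-square fit statistics common in Rasch work (infit and outfit) behave this way: they are centered at 1, with underfit (noise) pushing them above 1 and overfit (over-predictability) pulling them below 1. A minimal sketch, with invented response strings and abilities chosen only for illustration:

```python
import math

def outfit_infit(responses, abilities, difficulty):
    """Mean-square fit statistics for one dichotomous item (illustrative
    sketch; the function name and data are mine, not the article's).
    Both statistics have expectation 1: values above 1 signal underfit
    (noise), values below 1 signal overfit (excess predictability)."""
    p = [1.0 / (1.0 + math.exp(-(th - difficulty))) for th in abilities]
    var = [pi * (1.0 - pi) for pi in p]                 # model variances
    z2 = [(x - pi) ** 2 / vi for x, pi, vi in zip(responses, p, var)]
    outfit = sum(z2) / len(z2)                          # unweighted mean square
    infit = sum((x - pi) ** 2 for x, pi in zip(responses, p)) / sum(var)
    return outfit, infit

abilities = [-2.0, -1.0, 0.0, 1.0, 2.0]   # persons ordered by ability
orderly = [0, 0, 1, 1, 1]   # Guttman-like string: low fail, high succeed
erratic = [1, 0, 1, 0, 0]   # lucky guess by the weakest, slips by the ablest
print(outfit_infit(orderly, abilities, 0.0))   # both mean squares below 1
print(outfit_infit(erratic, abilities, 0.0))   # both mean squares above 1
```

Note that the sketch reports the statistics but decides nothing: as the article argues, whether a value of, say, 2 marks a spoiled performance is a judgment to be made in context.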
Data analysis and fit. Wright BD. Rasch Measurement Transactions 1994 7:4 p.324
The URL of this page is www.rasch.org/rmt/rmt74f.htm