One complaint sometimes heard is that the Rasch model is not general enough. Another is that it is not simple enough. Neither complaint can be evaluated until the terms simple and general are clearly understood.
The term general refers to how fully a statistical model absorbs the details of the data. The more general the model, the more precisely it describes the data minutiae. The term simple refers to how mathematically uncomplicated a statistical model is. A statistical model that is simple and general is preferred to one that is complex and specific.
When a more general model fits a wider array of data, is it always preferable? No, because it also accepts and describes more anomalies and inaccuracies. Ptolemy's geocentric equations were wonderfully general. Whenever a misfitting observation was encountered, a mathematical artifact, called an equant, was included to turn the unexpected data point into an expected one. "Copernicus, however, ridiculed those who used equants and other mathematical devices for the sole purpose of making the calculated positions of planets agree with those observed: 'They are just like someone including in a picture hands, feet, head, and other limbs from different places, well painted indeed, but not modelled from the same body, and not in the least matching each other, so that a monster would be produced from them rather than a man'" (Ludlow 1983).
There is a paradox. As models become more general, their implications become more particular. Imagine a set of essay ratings. If all judges are modeled to use the rating scale in the same way, then we observe misfit for idiosyncratic judges, but we can predict how a new judge would use that scale. If all judges are modeled to use the scale in individual ways, then there is no judge misfit, but we have lost our ability to predict how a new judge would use the scale. The effect of using a more general model is that any particular data set can be described more exactly, but future data can be predicted less usefully. The more general model leads to less generalizable results.
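The contrast can be sketched in many-facet Rasch notation (the symbols below are illustrative, not taken from this article). Modeling all judges to share one rating-scale structure can be written

\log \left( P_{nijk} / P_{nij(k-1)} \right) = B_n - D_i - C_j - F_k

where B_n is the examinee's ability, D_i the essay's difficulty, C_j the judge's severity, and F_k the threshold between categories k-1 and k, common to all judges. The more general alternative replaces F_k with judge-specific thresholds F_{jk}: each judge's idiosyncratic use of the scale is then absorbed into the model, so judge misfit disappears, but nothing is asserted about how a new, unobserved judge would use the scale.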
One might expect the most mathematically simple model to be the most generalizable. Since the Rasch model is logit-linear, the seemingly simpler linear formulation embodied in the true score model should be better. In fact, the linear appearance of the true score model is deceptive because the data it models are not linear, but ordinal. The utility of a true score analysis depends on how closely the ordinal data happen to approach linearity this time. Over a limited range, the true score model can often provide as good a statistical description of a particular data set as the Rasch model, but without the Rasch model's incisive quality-control fit statistics. Now ask: how do children's performances on a third grade arithmetic test compare with those on a second grade test? This problem is child's play for the Rasch model, but the lack of a unique method for equating true-score tests, even when the tests share common items, exposes a serious flaw in the true score model's "simplicity".
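To see why this is child's play, recall that the dichotomous Rasch model is logit-linear, log( P_ni / (1 - P_ni) ) = B_n - D_i, so item difficulties estimated from two tests differ only by a single translation constant, and the common items supply that constant directly. The Python sketch below illustrates the idea; the item names and logit values are invented for the example, not taken from any actual test.

    # Common-item equating under the Rasch model (illustrative sketch).
    # Item difficulties (in logits) are assumed to have been estimated
    # separately for the second grade and third grade tests.
    grade2 = {"add_1digit": -1.2, "sub_1digit": -0.7, "add_2digit": 0.3, "sub_2digit": 0.8}
    grade3 = {"add_2digit": -0.9, "sub_2digit": -0.4, "mult_1digit": 0.6, "div_1digit": 1.3}

    # Items calibrated in both analyses anchor the two scales together.
    common = set(grade2) & set(grade3)

    # Rasch measures are interval (logit) scaled, so the two frames of
    # reference differ only by a translation: the mean shift on common items.
    shift = sum(grade2[i] - grade3[i] for i in common) / len(common)

    # Re-express the third grade calibration in the second grade frame,
    # so performances on the two tests can be compared directly.
    grade3_equated = {i: d + shift for i, d in grade3.items()}
    print(f"equating constant: {shift:.2f} logits")
    print(grade3_equated)

Once the third grade items are expressed in the second grade frame of reference, children's measures from either test lie on one common logit scale. No comparably unique procedure exists for raw true scores, whose meaning changes with each set of items.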
The scientific aim is not merely to obtain the mathematically simplest model, nor the statistically most general one. What is required is the most generalizable model, i.e., the one that condenses the most useful information out of past data at the same time as most usefully predicting new data. Such a model also provides the closest control over the statistical quality of the data. Ptolemy's theory explained everything, including inaccurate observations. Kepler's laws required Tycho Brahe's accurate astronomical observations. Both observational inaccuracy and conceptual anomaly (the orbit of Uranus) were detected as departures from Kepler's laws.
The Rasch model is an idealization of reality that no empirical data set is expected to fit exactly, but all ordinal data are expected to fit usefully. The application of this idealization, however, exposes both inaccuracy and anomaly. An anomaly, such as a child who is prone to guess, is an instance for which the generalizing power of the model fails, but the model's diagnostic power succeeds. Since the motivation for the model is not data description, but prediction, anomalies do not invalidate the model. They may, however, weaken the validity of the model's generalizations for that particular child. Discovery, diagnosis and elimination of inaccuracies and anomalies from the data restore the model's predictive power, even for the guessing-prone child.
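How the model's diagnostic power succeeds can be sketched with a standard Rasch fit statistic, the outfit mean-square: the average of squared standardized residuals, with an expected value near 1.0. The ability, item difficulties and responses below are invented to illustrate a low-ability child who succeeds unexpectedly on the hardest items.

    import math

    # Outfit mean-square for one person (illustrative values, dichotomous Rasch model).
    ability = -1.0                                  # a low-ability child
    difficulties = [-2.0, -1.0, 0.0, 1.5, 2.5]      # easy through hard items
    responses = [1, 1, 0, 1, 1]                     # unexpected successes on the two hardest items

    total = 0.0
    for delta, x in zip(difficulties, responses):
        p = 1.0 / (1.0 + math.exp(-(ability - delta)))  # modeled probability of success
        z = (x - p) / math.sqrt(p * (1.0 - p))          # standardized residual
        total += z * z

    outfit_mnsq = total / len(difficulties)
    # Values far above 1.0 flag improbable responses, such as lucky guesses on hard items.
    print(f"outfit mean-square: {outfit_mnsq:.2f}")

Flagging such improbable responses is the first step in the discovery, diagnosis and elimination that restores the model's predictive power for that child.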
John Michael Linacre
Ludlow, Larry H. (1983) The Analysis of Rasch Residuals. Unpublished doctoral dissertation, University of Chicago.
Is Rasch General Enough? Linacre J. M. Rasch Measurement Transactions, 1997, 11:1 p. 555