"Rescaling Ordinal Data to Interval Data" (Harwell MR & Gatti GG, Review of Educational Research, 2001, 71:1, 105-31) is honest, but misleading. The paper itself comprises two parts. First, a useful survey of the prevalence of, and problems with, using ordinal data in quantitative research. Second, two examples using IRT to rescale ordinal data, one an analysis of a real dichotomous dataset using BILOG, the other an analysis of simulated Graded Response data using MULTILOG.
The paper begins:
"Many statistical procedures used in educational research are described as requiring that dependent variables follow a normal distribution, implying an interval scale of measurement. The advantage of an interval scale is that relative differences among values composing the scale are assumed to be equal in terms of what is measured, allowing arithmetic operations (e.g., addition, multiplication) to be used unambiguously" (Emphasis mine).
Certainly, normality requires linearity. A normal distribution only makes analytical sense if it is based on an underlying linear frame of reference. The paper helpfully provides supporting references to this (Guilford, 1954, p. 17; Gaito, 1959; Lord & Novick, 1968, p. 22). But the paper leaves the mistaken impression that linearity implies normality. Normality may be hypothesized to exist. But linearity itself is independent of any particular sample distribution.
A second misconception follows. Linearity is not a property that can be safely "assumed". No physicist, carpenter or cook would be so foolhardy as to merely "assume" the linearity of a measuring instrument. Usually there is evidence that a manufacturer has taken pains to construct linearity. Then the instrument must be used in such a way as to maintain its linearity. If linearity is in doubt, as when an instrument is damaged or of unknown provenance, its linearity is checked before it is used. Thus a linear scale must be constructed and then it can be tested. A further complication is that a scale that is linear for one purpose, e.g., time as expressing duration, may be non-linear for another, e.g., time as expressing running or swimming prowess.
Most IRT models concur with the implication that "normality implies linearity". The sample is assumed, or rather asserted, to have a normal distribution. This assertion is then imposed on the analysis, and the resulting scale scores are declared to be "linear". The paper honestly admits the difficulty of demonstrating that such scale scores are, in fact, linear:
"Clearly, additional work is needed to demonstrate that the estimated proficiencies for a variety of IRT models and item types show an interval scale. One option is to follow Fischer's (1995) approach in which proficiencies under the Rasch model were proved to possess an interval scale . This is the most attractive approach, but such proofs are difficult beyond the case of the Rasch model for dichotomous responses. Alternatively, computer simulation studies could be performed ...." (p.127).
There are proofs for linear scaling with polytomous and other Rasch models (Andrich, 1977; Fischer, 1995; Linacre, 1989). A basic property of all Rasch models is separability of parameters, which is manifested statistically by each parameter having a sufficient statistic. From this basis, linearity can be constructed. But there are no proofs of linearity for non-Rasch IRT models, i.e., those without separability of parameters. And no amount of computer simulation will "turn a sow's ear into a silk purse!"
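The separability argument can be shown in one line for the dichotomous Rasch model (the notation below is mine, not the paper's):

```latex
% Dichotomous Rasch model: person n with ability theta_n, item i with difficulty b_i
P(x_{ni}=1 \mid \theta_n, b_i) \;=\; \frac{e^{\,\theta_n - b_i}}{1 + e^{\,\theta_n - b_i}}
% The likelihood of person n's whole response string x_n factors as
P(\mathbf{x}_n \mid \theta_n, \mathbf{b})
  \;=\; \frac{\exp\!\big(r_n \theta_n \;-\; \textstyle\sum_i x_{ni} b_i\big)}
             {\prod_i \big(1 + e^{\,\theta_n - b_i}\big)},
  \qquad r_n = \sum_i x_{ni}
```

The person parameter θ<sub>n</sub> enters the numerator only through the raw score r<sub>n</sub>, so r<sub>n</sub> is a sufficient statistic for θ<sub>n</sub>; conditioning on the raw scores eliminates the person parameters from the item-estimation problem, and it is from this separation that interval (linear) scaling is constructed.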
Harwell & Gatti's BILOG "Rasch" Example
The paper's idiosyncratic analysis of a real dichotomous dataset prompts a comment. 1,000 4th-grade students responded to 30 dichotomous items. Our authors must be congratulated for choosing to perform a Rasch analysis, even if their motivation lacks conviction: "we had no reason to believe that the items varied in discrimination or that guessing needed to be modeled." Thus BILOG was instructed to perform a "Rasch" analysis.
The reported results make most sense when interpreted with a local scaling of 1 BILOG unit = 0.7 logits. But the paper's Figure 1 (reproduced here) shows a score range (4-20) that fails to include all of those in its Table 3 (6-27). Figure 1 implies that most item p-values are less than 0.5, but its Table 2 reports that 28 out of 30 p-values exceed 0.5. Further, Figure 1 and its accompanying text explain how different response patterns, for the same raw score, yield different person measures. This accords with IRT scaling philosophy, but contradicts a basic tenet of Rasch measurement:
"we may conclude that as far as the model goes [measures] should be estimated from the marginals ... only, while any further details about the structure of [the response matrix] is irrelevant for estimation - but of course not for controlling the model." (Rasch, 1980, p. 177. Italics his.)
Ben Wright (1977) remarked that "Progress marches on the invention of simple ways to handle complicated situations." As it stands, this paper makes the linearization of ordinal data, a complex but manageable problem, unintelligible.
John M. Linacre
Andrich, D. (1977) Summary Equations on Notes for a Rasch Model for Likert Scales. Paper presented at AERA. www.rasch.org/memo48.htm

Fischer, G.H. (1995) Derivations of the Rasch model; The derivation of polytomous Rasch models. In G.H. Fischer & I.W. Molenaar (Eds.), Rasch Models. New York: Springer-Verlag.

Gaito, J. (1959) Non-parametric methods in psychological research. Psychological Reports, 5, 115-125.

Guilford, J.P. (1954) Psychometric Methods. 2nd Ed. New York: McGraw-Hill.

Linacre, J.M. (1989) Many-Facet Rasch Measurement. Chicago: MESA Press.

Lord, F.M., & Novick, M.R. (1968) Statistical Theories of Mental Test Scores. Reading, MA: Addison-Wesley.

Rasch, G. (1980) Probabilistic Models for Some Intelligence and Attainment Tests. Chicago: University of Chicago Press.

Wright, B.D. (1977) Solving measurement problems with the Rasch model. Journal of Educational Measurement, 14(2), 97-116.
"Linear" Rescaling vs. Linear Measurement. Harwell MR, Gatti GG, Linacre, JM. … 16:3 p.890-1
"Linear" Rescaling vs. Linear Measurement. Harwell MR, Gatti GG, Linacre, JM. … Rasch Measurement Transactions, 2002, 16:3 p.890-1
The URL of this page is www.rasch.org/rmt/rmt163h.htm