In Pursuit of Rasch Measurement - Explorations Following Michell

Michell (2002) concluded that "... in no area of traditional psychometrics is there yet any evidence that the relevant attributes are quantitative" - no data have yet been shown to be consistent with either the monotonic or Rasch theories. Michell's argument that leads to this conclusion can be summarized as follows:

1) Premise. One cannot measure a psychological attribute using any specific procedure (conjoint measurement theory or otherwise) until one first obtains satisfactory evidence that the attribute in question is measurable, i.e., quantitative.

2) Premise. If measurement is to be done according to conjoint measurement and item response theory principles, then satisfactory evidence that the attribute is quantitative requires evidence that a data matrix satisfies all orders of cancellation conditions. (He points out that satisfying only single cancellation, which is entailed by monotonic but crossing item characteristic curves (ICCs), is evidence for ordinal theory; satisfying single, double, and all higher-order cancellation conditions, which is entailed by monotonic, non-crossing ICCs, is evidence for monotonic theory and for a quantitative attribute; satisfying all cancellation conditions plus having logistic ICCs is evidence for the Rasch theory. A sketch of checking these conditions follows this list.)

3) Premise. Evidence that a data matrix satisfies all cancellation conditions must include statistical tests that can be shown to be sensitive to the higher-order cancellation conditions ("that such tests are capable of discriminating between monotone theory and ordinal theory").

4) Michell claims that no evidence has yet been presented that any data structure satisfies more than the single cancellation condition.

5) Therefore, Michell concludes, no psychological attribute has yet been shown to be quantitative and measurable.
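
To make these cancellation conditions concrete, here is a minimal sketch (in Python, not taken from Michell) of checking single and double cancellation on a small persons-by-items matrix of success probabilities. The matrix here is generated from a Rasch-type model with invented person and item parameters, so both conditions hold by construction; with real data one would substitute the observed proportions correct.

```python
import numpy as np
from itertools import permutations

# Invented person measures and item difficulties (logits); a Rasch-type generator
# has monotone, non-crossing ICCs, so the conditions below hold by construction.
theta = np.array([2.0, 1.5, 1.0, 0.5])       # persons
beta  = np.array([0.0, 0.5, 1.0, 1.5])       # items
P = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))   # success probabilities
# With real data, P would instead hold the observed proportions correct.

def single_cancellation(P):
    """Row order must be the same in every column, and column order the same in every row."""
    row_orders = {tuple(np.argsort(-P[:, j])) for j in range(P.shape[1])}
    col_orders = {tuple(np.argsort(-P[i, :])) for i in range(P.shape[0])}
    return len(row_orders) == 1 and len(col_orders) == 1

def double_cancellation(P):
    """For all rows a, b, c and columns i, j, k:
    if P[a,j] >= P[b,i] and P[b,k] >= P[c,j], then P[a,k] >= P[c,i]."""
    rows, cols = P.shape
    for a, b, c in permutations(range(rows), 3):
        for i, j, k in permutations(range(cols), 3):
            if P[a, j] >= P[b, i] and P[b, k] >= P[c, j] and P[a, k] < P[c, i]:
                return False
    return True

print("single cancellation:", single_cancellation(P))
print("double cancellation:", double_cancellation(P))
```

Exhaustive checks of triple and higher-order cancellation follow the same pattern over larger sub-matrices; the point of premise 2 is that evidence of this kind, not merely a consistent ordering of persons and items, is required before the attribute can be called quantitative.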

In fact, an even stronger conclusion can be drawn if in place of 4) and 5) we substitute:

4b) Any proposed measurement procedure will fail fit tests of the cancellation conditions if a statistical test of sufficient power is applied (i.e., if a large enough data matrix is provided), since realistically any point null hypothesis is false. For example, there is no reason to expect that any behavior is affected by only one trait, with the influence of all other traits exactly zero for all individuals across all test items. (A small illustration of this power argument follows 5b.)

5b) Therefore, no psychological attribute can ever be shown to be measurable.
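
A minimal simulation of the point in 4b (with invented numbers): suppose the Rasch model predicts a success probability of 0.700 for some person-item combination, while the true data generator gives 0.705, a departure of no practical consequence. The expected test statistic against the exact (point) null grows with the square root of the sample size, so a large enough data matrix is guaranteed to reject.

```python
import math

p_model, p_true = 0.700, 0.705   # invented: model prediction vs. true generator

for n in (100, 10_000, 1_000_000, 100_000_000):
    se = math.sqrt(p_model * (1 - p_model) / n)   # SE of a proportion under the point null
    z = (p_true - p_model) / se                   # expected z-statistic at this sample size
    p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value
    print(f"n = {n:>11,d}   expected z = {z:7.2f}   two-sided p ~ {p_value:.3g}")
```

By n = 1,000,000 the expected z is about 11 and the point null is rejected decisively, even though nothing of practical importance has changed.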

Thus, Michell's premises lead not just to refutation of claims that quantitative measurement of psychological attributes has been achieved - they entail a refutation of the possibility that such measurement can ever be achieved. On first reading, Michell's argument seems to mean 'Abandon hope all ye Rasch believers trying to enter the Hell of psychological traits'. While angels may wisely fear to tread here, I see two possible paths.

I. We compromise our high principles in order to get into Hell by following the "good enough" approach (cf. Serlin & Lapsley, 1993, based on work of Lakatos).

In this approach we accept that attributes (data generators) may not be perfectly quantitative, and/or that we cannot perfectly measure them, but we propose that they are quantitative and measurable enough for practical purposes. In this case we modify Michell's second premise to allow a wider range of "satisfactory evidence that an attribute is quantitative."

Consider the following. Newtonian mechanics and optics theories have been shown to fail fit tests - observations deviate from the predictions of these theories when sufficiently powerful tests are conducted. Nevertheless, for most practical purposes these theories predict a close approximation to data - and quite useful bridges and telescopes can be built using them.

One could quite reasonably hold that the relation between cognitive structure, including attributes, and behavior is sufficiently complex that no theory relating the two is likely to be complete enough for real data to satisfy all possible fit tests, even if the attributes are measurable. The task of psychologists is to steadily improve understanding of the cognitive structure/behavior relationship - to be able to build useful bridges and telescopes. Some reasonable amount of misfit, e.g., within interval null hypotheses, is accepted - a theory can be held until the data diverge "too much" for practical utility or until a better theory is proposed. Only if the data deviate by more than this criterion would the theory of quantitative measurement of an attribute be rejected. Michell's requirement that tests of higher-order cancellation conditions be provided is still relevant, but some amount of misfit must be accepted. Studies of the implications of particular amounts or types of misfit for the quality of measurement would also be relevant.
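
One way to make the interval ("good enough") null hypothesis concrete is sketched below, loosely in the spirit of Serlin and Lapsley (1993) but much simplified: the misfit of an observed proportion from its Rasch-predicted value is tested against a tolerance chosen on practical grounds, rather than against exact fit. The proportions, sample size, and the 0.05 tolerance are invented for illustration.

```python
import math

# Invented numbers: one item's Rasch-predicted success proportion, the proportion
# actually observed in n responses, and a practical tolerance for misfit.
p_model    = 0.70
p_observed = 0.72
n          = 5_000
delta      = 0.05     # the "good enough" band

# Interval null hypothesis: |p_true - p_model| <= delta.
# The quantitative theory is rejected only if misfit significantly exceeds delta.
se = math.sqrt(p_observed * (1 - p_observed) / n)
z  = (abs(p_observed - p_model) - delta) / se
reject = z > 1.645    # one-sided test at alpha = 0.05

print(f"observed misfit = {abs(p_observed - p_model):.3f}, z against the interval null = {z:.2f}")
print("theory rejected" if reject else "misfit within the good-enough band: theory retained")
```

Under the point null of premise 4b the same data would eventually reject any theory; under the interval null the theory survives unless the misfit can be shown to exceed the chosen tolerance.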

II. We give up trying to enter Hell and instead try to create a Heaven by following the model approach.

In this approach we do not attempt to directly measure psychological attributes and do not postulate that the attribute as it "really exists" is strictly quantitative. Instead we simply define a quantitative attribute, an ideal attribute, to exist in our model of the individual. The quantitative nature of this variable is then not an empirical question - it is not open to disconfirmation by data - and Michell's entire argument does not apply.

Next we propose a particular measurement procedure, e.g., a test and Rasch analysis. A Rasch analysis of data then provides quantitative measures of the ideal attribute. Again, the issue of whether these measurements are quantitative or not cannot be empirically challenged. The analysis necessarily, by virtue of the characteristics of the Rasch model, provides a quantitative measure. The empirically testable questions concern how good the ideal attribute, and the measurement of it, are. For example, we could set a criterion such as that the idealized model must explain 90% of the real data variance. We are not concerned whether the 10% misfit also involves failure of higher-order cancellation conditions, since we did not originally assert that the data generator was strictly quantitative, only that it is a close enough approximation that the analysis can extract a quantitative component.
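
A minimal sketch of such a criterion, assuming person measures and item difficulties have already been estimated by some Rasch analysis (the arrays below are invented placeholders, and the simulated responses stand in for real data): compute the proportion of observed response variance accounted for by the model's predicted probabilities and compare it with the chosen criterion. Note that with dichotomous responses, binomial noise caps this quantity well below 100% even when the model fits perfectly, so the criterion has to be set with the chosen definition of "variance explained" in mind.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical estimates from a prior Rasch analysis (placeholders, in logits).
theta = rng.normal(0.0, 1.5, size=500)    # person measures
beta  = rng.normal(0.0, 1.0, size=30)     # item difficulties

# Rasch-model probabilities; simulated 0/1 responses stand in for the real data matrix.
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
X = rng.binomial(1, prob)

# Proportion of observed response variance accounted for by the model predictions.
ss_resid = np.sum((X - prob) ** 2)
ss_total = np.sum((X - X.mean()) ** 2)
variance_explained = 1.0 - ss_resid / ss_total

criterion = 0.90   # the illustrative figure used in the text
print(f"variance explained by the idealized model: {variance_explained:.1%}")
print("meets the criterion" if variance_explained >= criterion else "falls short of the criterion")
```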

This process would be analogous to proposing that some physical process, say the vibration of piano strings, could be modeled as a pure sine wave. The actual generator may have non-linear components, but we go ahead and fit the noisy data to a sine wave. The sine-wave frequency measurements are quantitative. If we can say that 90% of the vibration energy in a set of data can be accounted for by sine waves, then our idealized model of strings may be useful (even though we could disprove the theory that the real piano strings generate only pure sine waves). With time we may be able to improve our model by explaining the remaining 10% in terms of other quantitative or non-quantitative (e.g., non-linear) components.
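
The analogy can be run in code (with invented signal parameters): fit a single sine wave to a noisy, slightly non-linear "string" recording by least squares and report the fraction of the signal energy that the fitted sine accounts for.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented "piano string" recording: a 440 Hz fundamental plus a small non-linear
# (second-harmonic) component and measurement noise.
fs = 8000.0                                   # sampling rate, Hz
t = np.arange(0, 0.5, 1.0 / fs)               # half a second of signal
signal = (1.00 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 880 * t + 0.3)
          + 0.10 * rng.normal(size=t.size))

# Estimate the dominant frequency from the FFT peak, then fit a sine at that frequency
# (amplitude and phase via linear least squares on sine and cosine components).
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
f_hat = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

design = np.column_stack([np.sin(2 * np.pi * f_hat * t), np.cos(2 * np.pi * f_hat * t)])
coef, *_ = np.linalg.lstsq(design, signal, rcond=None)
fitted = design @ coef

# Fraction of the total signal energy accounted for by the single fitted sine wave.
energy_explained = np.sum(fitted ** 2) / np.sum(signal ** 2)
print(f"fitted frequency: {f_hat:.1f} Hz, energy accounted for: {energy_explained:.1%}")
```

The fitted frequency is a quantitative measure regardless of the fact that the generator is not a pure sine wave; whether the idealization is useful depends on how much of the energy it accounts for.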

In summary, one can accept Michell's claim that no psychological attribute has yet been proven to have been measured, and perhaps even, in the absolute sense, that no attribute can in principle be proven to be measurable. This would accord with the general view that no scientific theory can be proven, only disproved. Thus, when a Rasch analysis is said to satisfy fit criteria, we should be sensitive to Michell's argument and not uncritically conclude that 'the results indicate that the attribute is quantitative and can be measured by the test in question'. Instead we could say, for example, either (Hell, approach I) 'to within degree of accuracy Z, these data are consistent with the predictions of the theory that attribute X is quantitative and measurable by test Y with Rasch scaling'; or (Heaven, approach II) 'attribute X in our quantitative model, as measured by test Y with Rasch scaling, accounted for proportion Z of the variance in behavior'. In either case, the more important issues will concern how well these putative measurements of the attribute, whether conceptualized as real or modeled, relate to other behaviors and other theories.

Roger E. Graves, University of Victoria

Michell, J. (2002). Conjoint measurement and the Rasch model: Quantitative versus ordinal structure. Paper presented at the International Objective Measurement Workshop, New Orleans, LA, April 6, 2002.

Serlin, R. C., & Lapsley, D. K. (1993). Rational appraisal of psychological research and the good-enough principle. In G. Keren & C. Lewis (Eds.), A handbook for data analysis in the behavioral sciences: Methodological issues. Hillsdale, NJ: Erlbaum.


In Pursuit of Rasch Measurement - Explorations Following Michell. R. Graves. … Rasch Measurement Transactions, 2003, 17:1, 914-915.


