More objections have been raised to the application of the Rasch model to empirical data.
1. "The purpose of the Rasch model is to describe the data, so a poor fit of the Rasch model to the data invalidates the use of the Rasch model."
Describing the data is the purpose of many statistical models, such as regression models, but it is not the purpose of the Rasch model. The purpose of the Rasch model is to use the data to construct additive measures on a latent variable. These measures may or may not be a good description of the data. For instance, if the data contain lucky guesses, the Rasch model will deliberately describe those data badly: the lucky guesses contradict the Rasch measures and are detected with misfit statistics. For more, see "Rasch model as Additive Conjoint Measurement" www.rasch.org/memo24.htm
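The additive structure behind these measures can be sketched in a few lines. This minimal Python illustration (not part of the original article) computes the dichotomous Rasch probability of success from the difference of two locations, person "ability" and item "difficulty", on the same latent variable:

```python
import math

def rasch_probability(ability, difficulty):
    """Dichotomous Rasch model: probability of success, determined
    only by the difference of two locations (in logits) on the
    latent variable."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A person located 1 logit above an item succeeds with p ~ 0.73;
# a person located exactly at the item's difficulty has p = 0.5.
p_above = rasch_probability(ability=1.0, difficulty=0.0)
p_at = rasch_probability(ability=0.0, difficulty=0.0)
```

Because only the difference (ability - difficulty) enters the formula, the measures are additive: shifting both locations by the same amount leaves every predicted probability unchanged.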
2. "The Rasch-Andrich Rating-Scale model and the Rasch-Masters Partial Credit model assume that the respondent is making a series of consecutive choices between neighboring categories."
Those polytomous models specify that the respondent is making a choice from all categories simultaneously. Consecutive choices are specified in other models such as the Glas-Verhelst "Steps" ("Success") Model or the "Failure" model, see RMT 5:2, 155 www.rasch.org/rmt/52j.htm. However, experience indicates that even in situations where consecutive decisions are made, the Andrich or Masters models are often a better basis for measurement than consecutive-choice models. This may be because the respondent is aware of the other choices, even if they are not currently available for selection.
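The simultaneous-choice specification can be made concrete with a hedged Python sketch of the Rasch-Andrich rating-scale model (the function name and example values are illustrative, not from the original article). Every category's probability is computed at once from the person and item locations and the Rasch-Andrich thresholds; no sequence of pairwise decisions is modeled:

```python
import math

def andrich_category_probabilities(ability, difficulty, thresholds):
    """Rasch-Andrich rating-scale model: probability of each category
    0..m, chosen from all categories simultaneously. `thresholds`
    holds the m Rasch-Andrich thresholds F_1..F_m (in logits)."""
    # Log-numerator for category 0 is 0; each higher category adds
    # (ability - difficulty - F_k) for its threshold.
    log_num = [0.0]
    for f in thresholds:
        log_num.append(log_num[-1] + (ability - difficulty - f))
    denom = sum(math.exp(v) for v in log_num)
    return [math.exp(v) / denom for v in log_num]

# Three categories (0, 1, 2) with symmetric thresholds:
probs = andrich_category_probabilities(0.0, 0.0, [-1.0, 1.0])
```

With the person located at the item's difficulty and symmetric thresholds, the extreme categories are equally probable, as the simultaneous specification implies.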
3. "Empirical items never measure in the same scale units. Real items have different discriminations. Consequently the Rasch model cannot be used."
This is true of real items, but not of the Rasch model. We do not need exact concordance between items; we need usable concordance. Then we need to be alerted when the lack of concordance becomes a threat to useful measurement. Rasch analysis constructs as-concordant-as-possible additive measures based on items with different scale units (discriminations). Rasch analysis then reports the degree of non-concordance of each item using misfit statistics. Items with exceedingly high or exceedingly low discrimination are usually defective items for other reasons, see RMT 7:2, 289 www.rasch.org/rmt/rmt72f.htm
4. "The responses by each respondent to each item must be independent for Rasch analysis to be successful."
The Rasch ideal is local independence. Each item has a difficulty, a location on the latent variable. Each respondent has an "ability", also a location on the same latent variable. A Rasch model predicts the expected response for each respondent to each item based on those locations. When the expected responses are subtracted from the observed responses, the resulting residuals are modeled to be independent. Of course, they never are! Again, misfit analysis comes to our rescue. Is the lack of local independence in the data sufficiently large and sufficiently pervasive to be a threat to the meaning of the additive measures? Experience indicates that thoughtfully-constructed instruments produce observations that are locally independent enough for the additive Rasch measures to be useful for inference.
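The subtraction of expected from observed responses described above can be sketched directly; this illustrative Python fragment (assumed names, not from the original article) forms the residuals on which misfit analysis operates:

```python
import math

def rasch_expected(ability, difficulty):
    """Expected dichotomous response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def residuals(observed, abilities, difficulties):
    """Observed-minus-expected residuals, one per person-item pair.
    The Rasch ideal is that these are locally independent;
    misfit analysis checks how far the data depart from that.
    observed[n][i] is person n's scored response (0 or 1) to item i."""
    return [[observed[n][i] - rasch_expected(abilities[n], difficulties[i])
             for i in range(len(difficulties))]
            for n in range(len(abilities))]

# One person at 0 logits answering two items of difficulty 0:
# a success leaves residual +0.5, a failure leaves residual -0.5.
res = residuals([[1, 0]], abilities=[0.0], difficulties=[0.0, 0.0])
```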
5. "Rasch analysis can cause unidimensional data to appear multidimensional."
No empirical data are strictly unidimensional. Imagine a perfectly constructed test. Each item implements the intended unidimensional latent variable, but each item also differs from every other item. The ways in which any two items differ must be independent of every other item; otherwise the items will be locally dependent. Thus each item must implement the intended dimension and also its own "difference" dimension, unique to that item and uncorrelated with the "difference" dimension of any other item. Of course, empirical items fall short in both regards: they do not exactly implement the intended variable, and their "difference" dimensions are somewhat correlated with the "difference" dimensions of other items.
The choice of variant of the Rasch model, and other decisions made by the analyst, can alter the impact of the inherent multidimensionality of the items. For instance, if polytomous items are rescored as dichotomies, the choice of cut-point in the rating-scale may exacerbate or ameliorate the unwanted correlations in the data. The analyst must be aware of this and may adjust the scoring accordingly. See for instance "Communication validity and rating scales", RMT 10:1, 482 www.rasch.org/rmt/rmt101k.htm
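The rescoring decision mentioned above is mechanically simple; a minimal Python sketch (illustrative names and values, assuming a rating of `cut` or above is rescored as 1):

```python
def dichotomize(responses, cut):
    """Rescore polytomous ratings as dichotomies: 1 if the rating
    is at or above the cut-point category, else 0. Different choices
    of `cut` can change the correlational structure of the data."""
    return [1 if r >= cut else 0 for r in responses]

# The same ratings rescored at two different cut-points:
low_cut = dichotomize([0, 1, 2, 3, 2], cut=1)
high_cut = dichotomize([0, 1, 2, 3, 2], cut=3)
```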
6. "Factor Analysis of the original responses is more accurate for investigating possible multidimensionality than unidimensional Rasch analysis."
Factor analysis (FA) can report too many factors, RMT 8:1, 347, www.rasch.org/rmt/rmt81p.htm. But let us consider a practical situation: suppose that FA reports one substantial factor in the inter-item correlation matrix (according to Kaiser's rule, for instance), but the Rasch analysis (PCA of residuals) reports a sizable secondary dimension in the inter-item correlation matrix of the Rasch residuals (or vice versa). Which is correct?
An obvious solution is to split the set of items into two subsets based on their dimensionality in the analysis which reports two possible dimensions. Then cross-plot the person raw scores or Rasch measures on the two subsets. If the correlation is close to 1.0 (especially when disattenuated for measurement error - RMT 10:1, 479 www.rasch.org/rmt/rmt101g.htm) then we have falsified the empirical two-dimensional finding for this sample.
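The disattenuation step uses the standard Spearman correction for attenuation; a minimal Python sketch (assumed function name, illustrative values) dividing the observed between-subset correlation by the square root of the product of the two subsets' reliabilities:

```python
import math

def disattenuated_correlation(r_xy, rel_x, rel_y):
    """Spearman correction for attenuation: the correlation between
    person scores on two item subsets, corrected for the measurement
    error in each subset, given each subset's reliability."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Observed correlation 0.8 between subsets with reliabilities 0.9 and 0.8:
r_true = disattenuated_correlation(0.8, rel_x=0.9, rel_y=0.8)
```

If the disattenuated value approaches 1.0, the apparent two-dimensional finding has been falsified for this sample; a value well below 1.0 leaves the question open, as discussed next.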
If the correlation between the two subsets is close to 0.0, then clearly there are two dimensions. Two different dimensions have been combined into one instrument. Inferences based on either dimension are weakened by the other. Suppose that the correlation of person scores or measures is not close to 1.0, but is, say, 0.8. Then is this one dimension or two? For instance, suppose the dimensions are reading and arithmetic for grade-school children. We see immediately that, for the purposes of instruction, they are different dimensions, but for the purposes of school administration, such as advancing the child to the next grade, they are different strands within the same "educational achievement" variable.
Consequently, from the Rasch perspective, the more accurate method for investigating multidimensionality is the method which provides the best guidance about the threat to the validity of the additive measures. FA may identify (or fail to identify) dimensions, but it provides uncertain information on which to base decisions about the threat to additive measurement.
John Michael Linacre
More Objections to the Rasch Model, J.M. Linacre ... Rasch Measurement Transactions, 2010, 24:3 p. 1298-9
The URL of this page is www.rasch.org/rmt/rmt243f.htm