Confirmatory factor analysis vs. Rasch approaches:
Differences and Measurement Implications

1 Fundamental and theoretical issues of measurement
Concept of measurement
CFA:
· Based on classical test theory (CTT)
· Numbers are assigned to respondents' attributes (Stevens 1946, 1951)
Rasch:
· The measure of a magnitude of a quantitative attribute is its ratio to the unit of measurement; the unit of measurement is that magnitude of the attribute whose measure is 1 (Michell 1999, p. 13)
· Measurement is the process of discovering ratios rather than assigning numbers
· The Rasch model is in line with the axiomatic framework of measurement
· Principle of specific objectivity
Model
CFA:
xi = τi + λijξj + δi
xi ... manifest item score
τi ... item intercept parameter
λij ... factor loading of item i on factor j
ξj ... factor score of factor j
δi ... stochastic error term
Rasch (for dichotomous data):
P(aνi = 1) = e^(βν − δi) / [1 + e^(βν − δi)]
aνi ... response of person ν to item i
βν ... person location parameter
δi ... item location parameter (endorsability)
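The two model equations can be made concrete with a few lines of Python (an illustrative sketch, not part of the original table; the function names are ours):

```python
import math

def rasch_probability(beta, delta):
    """Dichotomous Rasch model: P(a = 1) = exp(beta - delta) / (1 + exp(beta - delta)),
    where beta is the person location and delta the item location (endorsability)."""
    return math.exp(beta - delta) / (1.0 + math.exp(beta - delta))

def cfa_predicted_score(tau, loading, factor_score):
    """Expected manifest item score under the linear CFA model
    (the stochastic error term has expectation zero): E[x] = tau + lambda * xi."""
    return tau + loading * factor_score

# A person located exactly at an item's location endorses it with probability 0.5
print(rasch_probability(0.0, 0.0))  # 0.5
```

Note the contrast the table draws: the CFA model is linear in the latent variable, while the Rasch model is a logistic function of the difference between two measures.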
Relationship of measure and indicators (items)
CFA:
· Measure is directly and linearly related to the indicators
· Hence, the weighted raw score is considered to be a linear measure
Rasch:
· Probability of a response is modeled as a logistic function of two measures, the person parameter βν and the item location (endorsability) δi
· Raw score is not considered to be a linear measure; raw scores are transformed into logits (Wright 1996, p. 10)
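To illustrate the raw-score-to-logit transformation, the sketch below applies the simple log-odds conversion ln(r / (n − r)) to raw scores. This is a first approximation only (operational Rasch estimation, e.g. PROX or JMLE, also adjusts for the spread of item locations), and the function name is ours:

```python
import math

def raw_score_to_logit(raw_score, n_items):
    """First-approximation logit for raw score r on n dichotomous items:
    ln(r / (n - r)). Extreme scores (0 or n) have no finite estimate."""
    if raw_score <= 0 or raw_score >= n_items:
        raise ValueError("extreme scores have no finite logit estimate")
    return math.log(raw_score / (n_items - raw_score))

# Equal raw-score steps are not equal logit steps:
# the scale stretches toward the extremes.
print([round(raw_score_to_logit(r, 10), 2) for r in (1, 5, 9)])  # [-2.2, 0.0, 2.2]
```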
In/dependence of samples and parameters
CFA: Parameters are sample dependent; representative samples are important
Rasch: Item parameters are independent of the sample used (subject to model fit and sufficient targeting)
2 Item selection and sampling (scale efficiency) issues
Item selection
CFA:
· Items are selected to maximize reliability; this leads to items that are equivalent in terms of endorsability, which itself plays no explicit role in CTT
· Favors items that are similar to each other (see the bandwidth-fidelity problem, Singh 2004)
Rasch:
· Items are selected to cover a wide range of the dimension (see 'bandwidth', Singh 2004)
· Endorsability of items plays a key role
Item discrimination
CFA:
· Discrimination varies from item to item but is considered fixed within an item
Rasch:
· Discrimination is equal across all items, so that the order of items in terms of endorsability is the same for all respondents
· Discrimination varies within an item (concept of information, which equals P(aνi=1) · P(aνi=0) in the dichotomous case); it reaches its maximum at βν = δi
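The information function just described can be sketched directly. The snippet below (illustrative only; function name ours) shows that P(aνi=1) · P(aνi=0) peaks where the person meets the item, βν = δi:

```python
import math

def item_information(beta, delta):
    """Information of a dichotomous Rasch item: I = P(a=1) * P(a=0).
    Maximal (0.25) when beta == delta, falling off symmetrically."""
    p = math.exp(beta - delta) / (1.0 + math.exp(beta - delta))
    return p * (1.0 - p)

# Information peaks at beta = delta (here delta = 0)
for beta in (-2.0, 0.0, 2.0):
    print(round(item_information(beta, 0.0), 3))  # 0.105, 0.25, 0.105
```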
Targeting
CFA: Items that are off-target may even increase reliability and feign a small standard error which can actually be quite large
Rasch: Items that are off-target provide less information; standard errors will increase and the power of the test of fit will decrease
Standard error of measurement
CFA: Based on reliability; assumed to be equal across the whole range
Rasch: Based on the information the items yield for a specific person
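A minimal sketch of the Rasch view of measurement error, assuming the standard result that the standard error of a person measure is 1/√(total information), with information summed as P·(1−P) over the items taken (names and example locations are ours):

```python
import math

def person_standard_error(beta, item_locations):
    """SE of a person measure as 1 / sqrt(total information).
    Off-target items contribute little information, so the SE grows."""
    total_info = 0.0
    for delta in item_locations:
        p = math.exp(beta - delta) / (1.0 + math.exp(beta - delta))
        total_info += p * (1.0 - p)
    return 1.0 / math.sqrt(total_info)

on_target  = [-1.0, -0.5, 0.0, 0.5, 1.0]   # items centred on the person
off_target = [ 2.0,  2.5, 3.0, 3.5, 4.0]   # items far too hard for the person
print(person_standard_error(0.0, on_target) <
      person_standard_error(0.0, off_target))  # True
```

This ties the Targeting and Standard error rows together: the same person gets a larger standard error from off-target items, whereas a reliability-based SE would be constant across the range.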
Sample size
CFA: The required sample size mirrors recommendations for structural equation modeling (SEM). SEM is not appropriate for sample sizes below 100; as a rule of thumb, sample sizes greater than 200 are suggested (Boomsma 1982; Marsh, Balla, and McDonald 1988). Bentler and Chou (1987) recommend a minimum ratio of 5:1 between sample size and the number of free parameters to be estimated.
Rasch: In general, the sample sizes used in structural equation modeling are sufficient, but insufficient targeting increases the sample size needed. According to Linacre (1994), the minimum sample size ranges from 108 to 243 depending on the targeting, with n = 150 sufficient for most purposes (for item calibrations stable within ±0.5 logits at .99 confidence).
Distribution of persons
CFA: Commonly assumed to be normal
Rasch: Irrelevant due to specific objectivity (subject to sufficient targeting)
Missing data
CFA: Problematic; missing data has to be imputed, deleting persons may alter the standardizing sample, deleting items may alter the construct, and pairwise deletion biases the factors (Wright 1996, p. 10)
Rasch: Estimation of person and item parameters is not affected by missing data (except for larger standard errors)
Interpretation of person measures
CFA: Usually in reference to the sample mean
Rasch: In reference to the items defining the latent dimension
3 Dimensionality issues
Multi-dimensionality
CFA: Multi-dimensionality is easily accounted for
Rasch: A priori multi-dimensional constructs are split up into separate dimensions
Directional factors
CFA: Sensitive to directional factors (Singh 2004) when items are worded in different directions
Rasch: Low sensitivity to directional factors (Singh 2004)
4 Investigation of comparability of measures across groups
Assessment of scale equivalence
CFA:
· Multi-group analysis
· Equivalence statements of parameters estimated across groups
Rasch:
· Differential item functioning (DIF) analysis capitalizing on the principle of specific objectivity
· Analysis of residuals in different groups
Incomplete equivalence
CFA: Partial invariance (separate loadings and/or intercepts are estimated for group-specific items)
Rasch: Item split due to DIF (separate item locations are estimated for group-specific items)
Typical sequence and principal steps of analysis
CFA:
· Estimation of a baseline model (group-specific estimates of loadings and item intercepts)
· Equality constraints imposed on loadings (metric invariance)
· Equality constraints imposed on intercepts (scalar invariance)
· Selected constraints lifted if necessary (partial invariance)
Rasch:
· Estimation of the model across groups
· Collapsing of categories if necessary
· Assessment of fit
· Assessment of DIF
· Items displaying DIF are split up if necessary
Etic (external) versus emic (internal)
CFA:
· In principle an etic-oriented approach; a common set of invariant items is indispensable
· The concept of partial invariance allows for equal items functioning differently
· Emic items, i.e. items confined to one group, can be considered, but the technical set-up is complicated compared to Rasch analysis
Rasch:
· In principle an etic-oriented approach; a common set of invariant items is indispensable
· Accounting for DIF by splitting the item allows for equal items functioning differently
· Emic items, i.e. items confined to one group, can be considered very easily because handling of missing data is unproblematic compared to CFA

Table 1 in Ewing, Michael T., Thomas Salzberger, and Rudolf R. Sinkovics (2005), "An Alternate Approach to Assessing Cross-Cultural Measurement Equivalence in Advertising Research," Journal of Advertising, 34 (1), 17-36.

Courtesy of Rudolf Sinkovics, with permission.

For more information:
The Impact of Rasch Item Difficulty on Confirmatory Factor Analysis, S.V. Aryadoust, Rasch Measurement Transactions, 2009, 23:2, p. 1207
Confirmatory factor analysis vs. Rasch approaches: Differences and Measurement Implications, M.T. Ewing, T. Salzberger, R.R. Sinkovics, Rasch Measurement Transactions, 2009, 23:1, p. 1194-5
Conventional factor analysis vs. Rasch residual factor analysis, B.D. Wright, Rasch Measurement Transactions, 2000, 14:2, p. 753
Rasch Analysis First or Factor Analysis First?, J.M. Linacre, Rasch Measurement Transactions, 1998, 11:4, p. 603
Factor analysis and Rasch analysis, R.E. Schumacker, J.M. Linacre, Rasch Measurement Transactions, 1996, 9:4, p. 470
Too many factors in Factor Analysis?, T.G. Bond, Rasch Measurement Transactions, 1994, 8:1, p. 347
Comparing factor analysis and Rasch measurement, B.D. Wright, Rasch Measurement Transactions, 1994, 8:1, p. 350
Factor analysis vs. Rasch analysis of items, B.D. Wright, Rasch Measurement Transactions, 5:1, p. 134

Ewing M.T., Salzberger T., Sinkovics R.R. (2009) Confirmatory factor analysis vs. Rasch approaches: Differences and Measurement Implications, Rasch Measurement Transactions, 23:1, 1194-5
