Confirmatory factor analysis vs. Rasch approaches:
Differences and Measurement Implications

1 Fundamental and theoretical issues of measurement
Concept of Measurement
CFA:
· Based on classical test theory (CTT)
· Numbers are assigned to respondents' attributes (Stevens 1946, 1951)
Rasch:
· The measure of a magnitude of a quantitative attribute is its ratio to the unit of measurement; the unit of measurement is that magnitude of the attribute whose measure is 1 (Michell 1999, p. 13)
· Measurement is the process of discovering ratios rather than assigning numbers
· The Rasch model is in line with the axiomatic framework of measurement
· Principle of specific objectivity
Model
CFA:
xi = τi + λijξj + δi
xi ... manifest item score
τi ... item intercept parameter
λij ... factor loading of item i on factor j
ξj ... factor score of factor j
δi ... stochastic error term
Rasch (for dichotomous data):
P(aνi = 1) = e^(βν − δi) / [1 + e^(βν − δi)]
aνi ... response of person ν to item i
βν ... person location parameter
δi ... item location parameter (endorsability)
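The dichotomous Rasch model above is easy to sketch numerically. A minimal illustration (the function name and variable names are my own, not from the article):

```python
import math

def rasch_probability(beta, delta):
    """Probability that a person with location beta endorses an item
    with location delta under the dichotomous Rasch model:
    P = exp(beta - delta) / (1 + exp(beta - delta))."""
    return math.exp(beta - delta) / (1.0 + math.exp(beta - delta))

# A person exactly on target (beta == delta) has a 50% chance of endorsing
print(rasch_probability(0.0, 0.0))   # 0.5
# An easier-to-endorse item (lower delta) yields a higher probability
print(rasch_probability(0.0, -1.0))  # ~0.73
```

Note that the probability depends only on the difference βν − δi, which is what makes person and item parameters separable (specific objectivity).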
Relationship of measure and indicators (items)
CFA:
· The measure is directly and linearly related to the indicators
· Hence, the weighted raw score is considered to be a linear measure
Rasch:
· The probability of a response is modeled as a logistic function of two measures, the person parameter βν and the item location (endorsability) δi
· The raw score is not considered to be a linear measure; raw scores are transformed into logits (Wright 1996, p. 10)
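The logit transformation of raw scores mentioned above (Wright 1996) can be illustrated as follows: the raw score is expressed as a proportion of the maximum and mapped to the log-odds scale. This is only a first approximation to a person measure, ignoring the spread of item locations; the function name is my own:

```python
import math

def raw_score_to_logit(raw_score, n_items):
    """Map a raw score on n_items dichotomous items to the log-odds
    (logit) scale: ln(p / (1 - p)) with p = raw_score / n_items.
    Extreme scores (0 or n_items) have no finite logit."""
    if raw_score <= 0 or raw_score >= n_items:
        raise ValueError("extreme scores have no finite logit")
    p = raw_score / n_items
    return math.log(p / (1.0 - p))

print(raw_score_to_logit(5, 10))  # 0.0 (50% maps to the centre of the scale)
print(raw_score_to_logit(9, 10))  # ~2.20
```

The transformation stretches the scale near the extremes, which is why equal raw-score differences do not correspond to equal measure differences.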
In/dependence of samples and parameters
CFA: Parameters are sample dependent; representative samples are important
Rasch: Item parameters are independent of the sample used (subject to model fit and sufficient targeting)
2 Item selection and sampling (scale efficiency) issues
Item selection
CFA:
· Items are selected to maximize reliability, which leads to items that are equivalent in terms of endorsability; endorsability plays no explicit role in CTT
· Favors items that are similar to each other (see the bandwidth-fidelity problem, Singh 2004)
Rasch:
· Items are selected to cover a wide range of the dimension (see 'bandwidth', Singh 2004)
· The endorsability of an item plays a key role
Item discrimination
CFA:
· Discrimination varies from item to item but is considered fixed within an item
Rasch:
· Discrimination is equal for all items in order to retain a common order of all items in terms of endorsability for all respondents
· Discrimination varies within an item (concept of information, which equals P(aνi=1) · P(aνi=0) in the dichotomous case); it reaches its maximum at βν = δi
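The information concept in the Rasch cell can be made concrete: for a dichotomous item, information is P(aνi=1) · P(aνi=0), which peaks at 0.25 when person and item locations coincide. A short sketch (function name is my own):

```python
import math

def item_information(beta, delta):
    """Fisher information of a dichotomous Rasch item for a person
    at beta: I = P * (1 - P), maximal (0.25) when beta == delta."""
    p = math.exp(beta - delta) / (1.0 + math.exp(beta - delta))
    return p * (1.0 - p)

print(item_information(0.0, 0.0))  # 0.25 (item on target)
print(item_information(0.0, 3.0))  # ~0.045 (item off target: little information)
```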
Targeting
CFA: Items that are off-target may even increase reliability and feign a small standard error which can actually be quite large
Rasch: Items that are off-target provide less information; standard errors will increase and the power of the test of fit will decrease
Standard error of measurement
CFA: Based on reliability; assumed to be equal across the whole range
Rasch: Based on the information the items yield for a specific person
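The Rasch entry above can be made concrete: the standard error of a person measure is the reciprocal square root of the total information the items yield at that person's location, so off-target items inflate the standard error. A sketch under that assumption (function name is my own):

```python
import math

def person_standard_error(beta, item_deltas):
    """Standard error of a person measure under the dichotomous
    Rasch model: 1 / sqrt(sum of item information at beta)."""
    total_info = 0.0
    for delta in item_deltas:
        p = math.exp(beta - delta) / (1.0 + math.exp(beta - delta))
        total_info += p * (1.0 - p)  # information of one item
    return 1.0 / math.sqrt(total_info)

# Well-targeted items yield a smaller standard error...
print(person_standard_error(0.0, [-1.0, -0.5, 0.0, 0.5, 1.0]))
# ...than the same number of off-target items
print(person_standard_error(0.0, [2.0, 2.5, 3.0, 3.5, 4.0]))
```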
Sample size
CFA: The required sample size mirrors recommendations for structural equation modeling (SEM). SEM is not appropriate for sample sizes below 100. As a rule of thumb, sample sizes greater than 200 are suggested (Boomsma 1982; Marsh, Balla, and McDonald 1988). Bentler and Chou (1987) recommend a minimum ratio of 5:1 between sample size and the number of free parameters to be estimated.
Rasch: In general, the sample sizes used in structural equation modeling are sufficient, but insufficient targeting increases the sample size needed. According to Linacre (1994), the minimum sample size ranges from 108 to 243 depending on targeting, with n = 150 sufficient for most purposes (for item calibrations stable within ±0.5 logits at .99 confidence)
Distribution of persons
CFA: Commonly assumed to be normal
Rasch: Irrelevant due to specific objectivity (subject to sufficient targeting)
Missing data
CFA: Problematic; missing data have to be imputed, deleting persons may alter the standardizing sample, deleting items may alter the construct, and pairwise deletion biases the factors (Wright 1996, p. 10)
Rasch: Estimation of person and item parameters is not affected by missing data (apart from larger standard errors)
Interpretation of person measures
CFA: Usually in reference to the sample mean
Rasch: In reference to the items defining the latent dimension
3 Dimensionality issues
Multi-dimensionality
CFA: Multi-dimensionality is easily accounted for
Rasch: A priori multi-dimensional constructs are split up into separate dimensions
Directional factors
CFA: Sensitive to directional factors (Singh 2004) in the case of items worded in different directions
Rasch: Low sensitivity to directional factors (Singh 2004)
4 Investigation of comparability of measures across groups
Assessment of scale equivalence
CFA:
· Multi-group analysis
· Equivalence statements of parameters estimated across groups
Rasch:
· Differential item functioning (DIF) analysis capitalizing on the principle of specific objectivity
· Analysis of residuals in different groups
Incomplete equivalence
CFA: Partial invariance (for group-specific items, separate loadings and/or intercepts are estimated)
Rasch: Item split due to DIF (for group-specific items, separate item locations are estimated)
Typical sequence and principal steps of analysis
CFA:
· Estimation of baseline model (group-specific estimates of loadings and item intercepts)
· Equality constraints imposed on loadings (metric invariance)
· Equality constraints imposed on intercepts (scalar invariance)
· Selected constraints lifted if necessary (partial invariance)
Rasch:
· Estimation of the model across groups
· Collapsing of categories if necessary
· Assessment of fit
· Assessment of DIF
· Items displaying DIF are split up if necessary
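A crude illustration of the DIF check on the Rasch side: estimate an item's location separately in two groups and compare. Here the location is approximated from the proportion endorsing (my own simplification, not a full Rasch calibration, and it ignores group ability differences, which a real DIF analysis conditions out):

```python
import math

def crude_item_location(responses):
    """Crude item-location estimate from dichotomous responses:
    negative log-odds of endorsement (higher = harder to endorse).
    A real analysis would use a conditional or joint Rasch calibration
    that separates item location from the group's ability distribution."""
    p = sum(responses) / len(responses)
    return -math.log(p / (1.0 - p))

group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% endorse
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]  # 30% endorse
dif = crude_item_location(group_b) - crude_item_location(group_a)
print(round(dif, 2))  # ~2.23: the item is harder to endorse in group B
```

If such a difference persists after conditioning on person measures, the item would be split into group-specific versions, as the sequence above describes.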
Etic (external) versus emic (internal)
CFA:
· In principle an etic-oriented approach; a common set of invariant items is indispensable
· The concept of partial invariance allows for equal items functioning differently
· Emic items, i.e. items confined to one group, can be considered, but the technical set-up is complicated compared to Rasch analysis
Rasch:
· In principle an etic-oriented approach; a common set of invariant items is indispensable
· Accounting for DIF by splitting the item allows for equal items functioning differently
· Emic items, i.e. items confined to one group, can be considered very easily because the handling of missing data is unproblematic compared to CFA

Table 1 in Ewing, Michael T., Thomas Salzberger, and Rudolf R. Sinkovics (2005), "An Alternate Approach to Assessing Cross-Cultural Measurement Equivalence in Advertising Research," Journal of Advertising, 34 (1), 17-36.

Courtesy of Rudolf Sinkovics, with permission.

For more information:
The Impact of Rasch Item Difficulty on Confirmatory Factor Analysis, S.V. Aryadoust, Rasch Measurement Transactions, 2009, 23:2, p. 1207
Confirmatory factor analysis vs. Rasch approaches: Differences and Measurement Implications, M.T. Ewing, T. Salzberger, R.R. Sinkovics, Rasch Measurement Transactions, 2009, 23:1, p. 1194-5
Conventional factor analysis vs. Rasch residual factor analysis, B.D. Wright, Rasch Measurement Transactions, 2000, 14:2, p. 753
Rasch Analysis First or Factor Analysis First?, J.M. Linacre, Rasch Measurement Transactions, 1998, 11:4, p. 603
Factor analysis and Rasch analysis, R.E. Schumacker, J.M. Linacre, Rasch Measurement Transactions, 1996, 9:4, p. 470
Too many factors in Factor Analysis?, T.G. Bond, Rasch Measurement Transactions, 1994, 8:1, p. 347
Comparing factor analysis and Rasch measurement, B.D. Wright, Rasch Measurement Transactions, 1994, 8:1, p. 350
Factor analysis vs. Rasch analysis of items, B.D. Wright, Rasch Measurement Transactions, 5:1, p. 134

Ewing M.T., Salzberger T., Sinkovics R.R. (2009) Confirmatory factor analysis vs. Rasch approaches: Differences and Measurement Implications, Rasch Measurement Transactions, 2009, 23:1, 1194-5
