There are many misfit indices, but they are not of equal utility. All fit indices flag departures of the data from model specifications, but most departures are mere wrinkles, some are pot-holes, and a few are crevasses. The analyst must circumvent the crevasses before bothering with the wrinkles. Most theoretical discussions focus on one type of misfit at a time, providing little guidance for the practitioner. Examination of empirical data sets, however, quickly identifies what needs to be investigated first.
TenVergert, Gillespie and Kingma (TGK, 1993) construct Rasch measures from the responses of 1299 subjects to 4 items of the Reiss Premarital Sexual Permissiveness Scale. They use a log-linear method implemented with SPSS. The equivalent logit-linear Rasch model for these dichotomous responses is:
log_e(P_ni1 / P_ni0) = B_n + E_i

where P_nix is the probability that person n makes response x (1 = success, 0 = failure) to item i, B_n is the ability of person n, and E_i is the easiness of item i. In addition to reporting the item calibrations (without standard errors!), TGK provide an evaluation of item fit (see Table 1).
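Under this easiness parameterization, the model probability of success follows directly from the logit equation. A minimal sketch (the function name is illustrative, not from TGK or SPSS):

```python
import math

def p_success(b_n, e_i):
    """Probability that person n succeeds on item i under the dichotomous
    Rasch model, log(P1/P0) = B_n + E_i, where E_i is an *easiness*
    (a harder item has a more negative E_i)."""
    return 1.0 / (1.0 + math.exp(-(b_n + e_i)))

# When ability exactly offsets the item's difficulty, success is a coin flip:
print(p_success(1.0, -1.0))  # 0.5
```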
Item | Point-Biserial rpbs | Facets INFIT/OUTFIT Fit Report | TGK Log-linear Fit Report
FSL  |  .41 | Muted | VLI
FSC  | -.16 | Noisy | OK
FSA  |  .25 | OK    | VLI
FSE  | -.06 | OK    | VLI

Table 1. Fit analysis.
Note: VLI = "Violates Local Independence"
TGK's log-linear analysis investigated fit to the Rasch specification of local independence. Deviations from this specification "can be measured by the size of residual covariances. Unfortunately, some computer programs for fitting the Rasch model do not give any information about these. A choice would be to examine the covariance matrix of the item residuals, not the sizes of the residuals themselves, to see if the items are indeed conditionally uncorrelated, as required by the principle of local independence" (McDonald 1985 p. 212). TGK report that three of their four items "violate local independence".
TGK's analysis was repeated using Facets. Facets' INFIT and OUTFIT are concerned with the size and distribution of residuals, not with their independence. Item FSL is reported to have the highest point-biserial correlation, rpbs. Conventional interpretation of rpbs would evaluate this as the best item. Facets detects that responses to this item are deficient in stochasticity (overly predictable, hence "Muted") and so problematic. TGK detect that this item lacks local independence.
TGK disagree with Facets and rpbs about Item FSC. According to TGK, it is the best item, because it is locally independent. For Facets INFIT and OUTFIT statistics and rpbs, it is the worst. Facets evaluates Item FSC to be the most obviously misfitting, because two males assented to this difficult item, but dissented from the three easier items. TGK's local independence analysis failed to identify the most blatant unmodelled behavior in the data. What TGK detected as independence, Facets identified as noise.
According to Facets and rpbs, Item FSA is acceptable. According to TGK, it is defective. According to Facets, Item FSE is also acceptable. According to TGK and rpbs, it is defective. Analysis of the matrix of standardized residuals identifies as most problematic the large correlation of -0.5 between the standardized residuals for Items FSA and FSE. Other inter-item correlations are much smaller. There is an empirical local dependency between FSA and FSE which is masked in the Facets INFIT and OUTFIT statistics by the generally stochastic pattern of interactions between all items.
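The residual analysis described here needs only a few lines of code. A minimal sketch with toy residuals rather than TGK's data (the function names are illustrative):

```python
import math

def std_residual(observed, expected):
    """Standardized Rasch residual: (x - P) / sqrt(P(1 - P))."""
    return (observed - expected) / math.sqrt(expected * (1.0 - expected))

def pearson(u, v):
    """Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Toy standardized residuals for two items across five persons:
# when one item's residuals consistently mirror the other's, the large
# negative correlation flags a local dependency (as with FSA and FSE).
resid_a = [0.8, -0.5, 0.6, -0.7, 0.9]
resid_b = [-0.6, 0.4, -0.8, 0.5, -0.7]
r = pearson(resid_a, resid_b)
print(round(r, 2))  # strongly negative
```

In practice the same correlation would be computed for every item pair, and only the largest entries of the resulting matrix inspected.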
These results enable us to prioritize fit indicators:
1) A negative rpbs indicates that success on the item is not associated with higher scores on the test. Unless this is an adaptive test, negative (or very low) rpbs probably contradict our definition of the variable. Often they point to miskeyed items or items with ambiguous or negatively worded stems. But once negative (or very low) rpbs have been investigated, differences in size among positive rpbs have little diagnostic power, because of their strong dependence on targeting.
2) Misfit detected by OUTFIT and INFIT is caused by aberrant single responses or aberrant patterns of responses within individual items. These patterns may be due to unpredicted or overly predictable responses. They reflect directly on the measuring power of individual items, and may motivate dropping an item from the analysis (e.g., a flawed item), side-lining individual responses (e.g., response sets), or splitting the original item into several items according to respondents' response style (e.g., a curriculum-dependent item).
3) Despite the concern, often expressed in the literature, that local independence is the sine qua non of Rasch measurement, it turns out to be a tertiary consideration in practice. Local independence addresses the relationships between items. But these relationships have little practical meaning until there is evidence that the component items are effective measurement devices. Rogue observation patterns on individual items are a more immediate threat to measure validity. Lack of local independence is manifested by large correlations between standardized residuals. Diagnosing the reasons for large correlations, however, requires examination of item content and response structures for pairs of items (using, for instance, principal components analysis (PCA) of residuals). These investigations are more arduous than the inspection of aberrations in single items. Remedying defects is also more difficult.
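The triage order above can be sketched in code: screen for negative point-biserials first, then compute the item fit mean-squares. This is a minimal illustration with toy data and illustrative function names, not TGK's analysis or the exact Facets formulas:

```python
import math

def point_biserial(item, totals):
    """Pearson correlation between 0/1 item scores and total test scores."""
    n = len(item)
    mx, my = sum(item) / n, sum(totals) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(item, totals))
    sx = math.sqrt(sum((x - mx) ** 2 for x in item))
    sy = math.sqrt(sum((y - my) ** 2 for y in totals))
    return cov / (sx * sy)

def fit_mean_squares(observed, expected):
    """OUTFIT (unweighted) and INFIT (information-weighted) mean-squares
    for one item. Values near 1.0 indicate fit to Rasch expectations;
    OUTFIT is the more sensitive to isolated, off-target surprises."""
    variances = [p * (1.0 - p) for p in expected]
    z_sq = [(x - p) ** 2 / w for x, p, w in zip(observed, expected, variances)]
    outfit = sum(z_sq) / len(observed)
    infit = sum((x - p) ** 2 for x, p in zip(observed, expected)) / sum(variances)
    return outfit, infit

# Toy data for one item across five persons (not TGK's data):
responses = [1, 1, 0, 1, 0]            # observed 0/1 responses
totals    = [4, 3, 2, 1, 1]            # total test scores
probs     = [0.8, 0.6, 0.5, 0.2, 0.1]  # model probabilities of success

rpbs = point_biserial(responses, totals)
if rpbs < 0:
    print("investigate first: possible miskey or reversed stem")
else:
    outfit, infit = fit_mean_squares(responses, probs)
    print(f"rpbs={rpbs:.2f}  OUTFIT={outfit:.2f}  INFIT={infit:.2f}")
```

Only after this screening would the residual correlation matrix for item pairs be examined for local dependencies.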
McDonald RP (1985) Factor Analysis and Related Methods. Hillsdale, NJ: Lawrence Erlbaum.
TenVergert E, Gillespie M, & Kingma J (1993) Testing the assumptions and interpreting the results of the Rasch model using log-linear procedures in SPSS. Behavior Research Methods, Instruments, and Computers 25(3) 350-359.
Prioritizing misfit indicators: an Insight based on Log-Linear Rasch Modeling. Linacre JM. Rasch Measurement Transactions, 1995, 9:2 p.422
The URL of this page is www.rasch.org/rmt/rmt92b.htm