Misfit Statistics for Rating Scale Categories

Analysis of ordinal observations has seldom included investigation into whether each category offered to respondents is performing as intended. Even in Rasch analysis such investigation has not been routine, partly because diagnostically useful fit statistics were not available. Recent versions of BIGSTEPS implement two useful sets of statistics for this purpose.

Average Measure Difference

Implicit in the use of ordinal observations is the specification that the higher the category number, the more of the latent variable is evidenced. Thus, on average, "better" performers should produce higher ratings than "worse" performers. Also "easier" items should manifest higher ratings than "harder" items.

The observation Xni=x is modelled as governed by the difference between person n's ability Bn and item i's difficulty Di. The effect of the measure difference Bn-Di is observed as Xni=x. Examining the entire data set, the average measure difference (AMD) modelled to produce an observation x is

AMDx = Sum over (Xni=x) of (Bn - Di) / Sum over (Xni=x) of 1,    for n=1,...,N and i=1,...,L

i.e., the mean of Bn - Di across all observations in category x.

The AMD for each category can be computed. Since "more" of the rating scale is modelled to reflect "more" of the underlying variable, the AMDs are expected to increase up the rating scale. This pattern can be seen in Example 1, a well-behaved rating scale. As the categories ascend from 0 to 4, the AMDs increase from -2.34 to 2.21 logits.

Category        Example 1:        Example 2:
Number          Well-behaved      Problematic
                rating scale      rating scale
                AMD (logits)      AMD (logits)
------------------------------------------------
0                  -2.34             -2.34
1                  -1.56             -1.56
2                   0.12              1.57
3                   1.57              0.12
4                   2.21              2.21
------------------------------------------------

Example 2, a problematic scale, shows a different pattern. The AMDs ascend for the most part, but the AMD for category 3, at 0.12, is less than that for category 2, at 1.57. This suggests that category 2 is not "less" than category 3 in practice, despite the scale designer's intention. A common cause is using the central option of a five-category Likert scale to signify "No Opinion" or "Don't Know". "Don't know" either taps a dimension different from the one the other categories address, or lets the respondent escape from answering the question.
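To make the computation concrete, here is a minimal sketch in Python (assuming NumPy, and assuming person measures B and item difficulties D have already been estimated; the array and function names are illustrative, not taken from BIGSTEPS) that computes the AMD for each category and flags any disordering:

import numpy as np

def average_measure_difference(X, B, D):
    # AMD for each rating category.
    # X: (N, L) matrix of ratings, np.nan for missing
    # B: (N,) person measures in logits; D: (L,) item difficulties in logits
    diff = B[:, None] - D[None, :]           # Bn - Di for every observation
    amd = {}
    for x in np.unique(X[~np.isnan(X)]):
        mask = (X == x)                      # observations with Xni = x
        amd[int(x)] = diff[mask].mean()      # sum of (Bn - Di) / count
    return amd

# Hypothetical example: 4 persons, 3 items, categories 0-2
X = np.array([[0, 1, 2], [0, 1, 1], [1, 2, 2], [0, 0, 1]], dtype=float)
B = np.array([-1.0, -0.5, 1.5, -2.0])
D = np.array([0.5, 0.0, -0.5])
amd = average_measure_difference(X, B, D)
print(amd)                                   # AMDs ascend here: -1.75, -0.40, 1.00
cats = sorted(amd)
print("disordered:", [c for p, c in zip(cats, cats[1:]) if amd[c] < amd[p]])

Applied to Example 2's pattern, such a check would flag category 3.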

Here are some remedies for disordered AMDs, such as those in Example 2:

1) Some or all of the observations in categories 2 or 3 can be treated as missing. Indeed, if category 2 is off-dimension or used idiosyncratically, then it is not measuring the desired dimension and all observations in category 2 could be treated as missing.

2) Closer examination of the definitions of categories 2 and 3 may indicate that reversing their order would maintain the ordinally ascending meaning of the scale. Simply recode all 2's as 3's and all 3's as 2's.

3) The difference between a "2" and a "3" may not be clear to respondents, e.g., the difference between "often" and "nearly always". Then categories "2" and "3" can be joined into one category, numbered 2, so that category 4 becomes 3.

When combining or deleting categories, aim toward equalizing the category frequencies as much as possible, so that each category contributes about equally to the measurement process.
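As a sketch of these remedies, again assuming NumPy and using illustrative function names, the recodings can be expressed as explicit old-to-new maps. Taking the masks from the original values keeps one recoding from chaining into another:

import numpy as np

def treat_as_missing(X, category):
    # Remedy 1: treat every observation in an off-dimension category as missing.
    X = X.astype(float)
    X[X == category] = np.nan
    return X

def recode_categories(X, recode):
    # Remedies 2 and 3: recode via a map from old to new category numbers.
    # Masks are taken from the original values so recodings cannot chain,
    # e.g. swap 2 and 3 with {2: 3, 3: 2}; join 2 and 3 and renumber 4
    # with {3: 2, 4: 3}.
    X = X.copy()
    masks = {old: (X == old) for old in recode}
    for old, new in recode.items():
        X[masks[old]] = new
    return X

# Example 2's disorder, remedied by swapping categories 2 and 3
X = np.array([[0, 2, 4], [1, 3, 2], [0, 4, 3]])
print(recode_categories(X, {2: 3, 3: 2}))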

Observed / Expected Mean-Square Fit Ratios

AMDs can be correctly ordered, yet the categories may still be used haphazardly. The modelled raw-score variance of an observation, Xni, on a rating scale with categories 0 to m is

Vni = Sum from k=0 to m of (k - Eni)^2 Pnik

where Eni is the expected value of Xni under the model and Pnik is the modelled probability that Xni = k.

The observed squared residual of Xni is

(Xni - Eni)^2

Summing these variances across the data and partitioning by rating scale category, the variance explained by ratings in category x is modelled to be

Mx = sum over all Xni of (x-Eni)^2 Pnix

The observed residual sum of squares due to ratings of Xni=x is

Ox = Sum over (Xni=x) of (x - Eni)^2

When the data fit the model, the modelled variance approximates the residual sum of squares. Differences are diagnostic of misfit.

The INFIT statistic, Vx, summarizes their agreement for category x:

Vx = Ox/Mx

This fit ratio has a mean-square form with expectation 1.0, and range 0 to infinity. Values greater than 1.0 indicate improbable category use. Values less than 1.0 indicate overly predictable category use.
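The category INFIT ratio can be computed directly once the category probabilities Pnik are in hand. The sketch below assumes the Andrich rating scale model as the source of Pnik (one common parameterization; the article does not specify BIGSTEPS's internals) with hypothetical thresholds F:

import numpy as np

def rsm_probs(b, d, F):
    # Category probabilities P(X=k), k=0..m, under the Andrich rating
    # scale model: log(Pk/Pk-1) = b - d - Fk, with thresholds F1..Fm in F.
    cum = np.concatenate([[0.0], np.cumsum(F)])
    k = np.arange(len(F) + 1)
    logits = k * (b - d) - cum
    p = np.exp(logits - logits.max())      # subtract max to avoid overflow
    return p / p.sum()

def category_infit(X, B, D, F):
    # INFIT mean-square Vx = Ox / Mx for each category x = 0..m.
    m = len(F)
    k = np.arange(m + 1)
    Mx = np.zeros(m + 1)                   # modelled variance per category
    Ox = np.zeros(m + 1)                   # observed residual sum of squares
    for n, b in enumerate(B):
        for i, d in enumerate(D):
            x = X[n, i]
            if np.isnan(x):
                continue                   # skip missing observations
            p = rsm_probs(b, d, F)
            E = np.dot(k, p)               # Eni, the expected rating
            Mx += (k - E) ** 2 * p         # (x - Eni)^2 Pnix for every x
            Ox[int(x)] += (x - E) ** 2     # observed squared residual
    return Ox / Mx                         # near 1.0 when data fit the model

# Hypothetical measures, difficulties and thresholds for a 0-2 scale
X = np.array([[0, 1, 2], [1, 2, 2], [0, 0, 1]], dtype=float)
B = np.array([-0.5, 1.0, -1.5])
D = np.array([0.0, -0.5, 0.5])
F = np.array([-1.0, 1.0])
print(category_infit(X, B, D, F))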

The squared standardized residual for an observation of Xni=x is

Znix^2 = (x-Eni)^2/Vni

Summing these terms across the data and partitioning by rating scale category, the contribution of category x is modelled to be

M'x = Sum over all Xni of Znix^2 Pnix

The observed sum of squared standardized residuals for observations of Xni=x is

O'x = Sum over (Xni=x) of Znix^2

Again, when the data fit the model, the observed sum approximates the modelled sum.

The OUTFIT mean-square for observations in category x is the ratio of observed to expected sum-of-squared standardized residuals, Ux:

Ux = O'x / M'x

This fit ratio is also a mean-square with expectation 1.0 and range 0 to infinity. Values greater than 1.0 indicate improbable category use. Values less than 1.0 indicate overly predictable category use.
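A matching sketch for the category OUTFIT, under the same assumed rating scale model (rsm_probs is repeated from the INFIT sketch so that this block stands alone):

import numpy as np

def rsm_probs(b, d, F):
    # Same rating scale category probabilities as in the INFIT sketch.
    cum = np.concatenate([[0.0], np.cumsum(F)])
    k = np.arange(len(F) + 1)
    logits = k * (b - d) - cum
    p = np.exp(logits - logits.max())
    return p / p.sum()

def category_outfit(X, B, D, F):
    # OUTFIT mean-square Ux = O'x / M'x for each category x = 0..m.
    m = len(F)
    k = np.arange(m + 1)
    Mpx = np.zeros(m + 1)                  # modelled sum of Znix^2 Pnix
    Opx = np.zeros(m + 1)                  # observed sum of Znix^2
    for n, b in enumerate(B):
        for i, d in enumerate(D):
            x = X[n, i]
            if np.isnan(x):
                continue
            p = rsm_probs(b, d, F)
            E = np.dot(k, p)               # Eni
            V = np.dot((k - E) ** 2, p)    # Vni, the modelled variance
            z2 = (k - E) ** 2 / V          # Znik^2 for every category k
            Mpx += z2 * p                  # modelled contribution, all k
            Opx[int(x)] += z2[int(x)]      # observed term for Xni = x
    return Opx / Mpx                       # near 1.0 when data fit the model

X = np.array([[0, 1, 2], [1, 2, 2], [0, 0, 1]], dtype=float)
B = np.array([-0.5, 1.0, -1.5])
D = np.array([0.0, -0.5, 0.5])
F = np.array([-1.0, 1.0])
print(category_outfit(X, B, D, F))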

The table below shows the results for the familiar "Liking for Science" data set. The AMDs exhibit the desired monotonically ascending pattern. The OUTFIT mean-squares, however, show some unwanted behavior. The bottom category, with mean-square 1.02, is used approximately as modelled. The central category, with mean-square .69, is overly predictable. This suggests that some children responding to this survey avoided making any but the obvious choices. One child responded in the central category to every item. The top category, with mean-square 1.47, manifests improbable observations. A few children liked activities that they were expected to dislike, such as "watching rats" and "finding old bottles". From the perspective of measuring "Liking for Science", these idiosyncratic ratings are off-dimension and so perturb the measuring system. Measurement would be improved by recoding these inconsistent ratings as missing.

The INFIT mean-squares, which are more sensitive to idiosyncratic usage of adjacent categories, are within their typical range.

Category       Count    AMD    INFIT         OUTFIT
                               Mean-square   Mean-square
---------------------------------------------------------
0 "dislike"     378     -.87      1.09          1.02
1 "neutral"     620      .13       .86           .69
2 "like"        852     2.21      1.00          1.47

Misfit Statistics for Rating Scale Categories. Linacre JM. … Rasch Measurement Transactions, 1995, 9:3 p.450


