Optimizing a rating scale is "fine-tuning": an attempt to squeeze the last ounce of performance out of a test. So the first stage is to check that everything else about the test is working as well as is reasonable. For instance, there is no point in trying to optimize a rating scale if half the sample employ a "response set". Clean the data as much as possible. Put to one side, for the moment, clearly misfitting items and idiosyncratic people. When you have a core that looks like it should work well, examine the misfitting responses, and make sure that no data-entry errors, random guessing, or other off-dimensional "bad spots" remain. Now you are ready to begin optimizing. Remember, these are only guidelines: not all apply, and not all are good to do under all circumstances. Keep a close eye on what is happening at the item level. The more you collapse categories, the more statistical and diagnostic information you lose.
Andrich thresholds are also called Step Calibrations and Step Difficulties.
Stage | Guideline                                                    | Measure Stability | Measure Accuracy (Fit) | Description of this sample | Inference for next sample
------+--------------------------------------------------------------+-------------------+------------------------+----------------------------+--------------------------
Pre.  | Scale oriented with latent variable                          | Essential         | Essential              | Essential                  | Essential
1.    | At least 10 observations of each category.                   | Essential         | Helpful                |                            | Helpful
2.    | Regular observation distribution.                            | Helpful           |                        |                            | Helpful
3.    | Observed average measures (of the persons in the category) advance monotonically with category. | Helpful | Essential | Essential | Essential
4.    | OUTFIT mean-squares less than 2.0.                           | Helpful           | Essential              | Helpful                    | Helpful
5.    | Andrich thresholds advance.                                  |                   |                        |                            | Helpful
6.    | Ratings imply measures, and measures imply ratings.          |                   | Helpful                |                            | Helpful
7.    | Andrich thresholds advance by at least 1.4 logits.           |                   |                        |                            | Helpful
8.    | Andrich thresholds advance by less than 5.0 logits.          | Helpful           |                        |                            |

Summary of Guideline Pertinence (from JAM, 2002)
This is an early research note. See Journal of Applied Measurement, 3:1, 2002, pp. 85-106.
See also:
Smith Jr., E.V.; Wakely, M.B.; de Kruif, R.E.L.; Swartz, C.W. "Optimizing Rating Scales for Self-Efficacy (and Other) Research."
Educational and Psychological Measurement, 1 June 2003, vol. 63, no. 3, pp. 369-391.
Abstract:
This article (a) discusses the assumptions underlying the use of rating scales, (b) describes the use of information available within the context of Rasch measurement that may be useful for optimizing rating scales, and (c) demonstrates the process in two studies. Participants in the first study were 330 fourth- and fifth-grade students. Participants provided responses to the Index of Self-Efficacy for Writing. Based on category counts, average measures, thresholds, and category fit statistics, the responses on the original 10-point scale were better represented by a 4-point scale. The modified 4-point scale was given to a replication sample of 668 fourth- and fifth-grade students. The rating scale structure was found to be congruent with the results from the first study. In addition, the item fit statistics and item hierarchy indicated the writing self-efficacy construct to be stable across the two samples. Combined, these results provide evidence for the generalizability of the findings and hence utility of this scale for use with samples of respondents from the same population.
Example: Guilford's Ratings of Creativity (Guilford, Psychometric Methods, 1954, p. 282)
[Table: category structure statistics for the nine rating categories, reporting category counts and percentages, observed and expected average measures, OUTFIT mean-squares, Andrich thresholds with standard errors, expected measures at the category boundaries, Rasch-Thurstone thresholds, and peak category probabilities. The category counts are 4, 4, 25, 8, 31, 6, 21, 3, 3 for categories 1 ("lowest") through 9 ("highest"); category 6 has an OUTFIT mean-square of 4.1, and Andrich threshold 3 is below Andrich threshold 2.]
Probability Curves
[Figure: category probability curves plotted against measures from -3.0 to +3.0 logits. The curves for categories 1, 5, and 9 form distinct hills; categories 3 and 7 peak near 48%, while the sparse categories 2, 4, 6, and 8 peak below 20% and are largely submerged.]
First, express the rating scale as a clearly defined, substantively relevant, ordered sequence of categories. Then use these guidelines to check it for measurement effectiveness.
Guideline 1: At least 10 observations of each category.
The Andrich threshold (F_k) is estimated from, approximately, the log-ratio of the frequencies of the adjacent categories: F_k ≈ log(T_{k-1} / T_k), where T_k is the count of observations in category k. When a category's frequency is low, its Andrich threshold is poorly estimated and unstable.
In the example: observed counts are as low as 3.
Solution: combine adjacent categories, or omit the relevant observations (e.g., responses of "don't know").
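The instability is easy to illustrate numerically. The Python sketch below applies Guideline 1 to the category counts of the Guilford example; the log-ratio here is only the rough approximation mentioned above, not the full Rasch estimate, which also involves the person measures.

```python
import math

# Category counts from the Guilford example (categories 1..9).
counts = [4, 4, 25, 8, 31, 6, 21, 3, 3]

# Rough threshold approximation: F_k ~ log(count[k-1] / count[k]).
# This sketch only shows why sparse categories destabilize thresholds.
thresholds = [math.log(counts[k - 1] / counts[k]) for k in range(1, len(counts))]

# Flag categories with fewer than 10 observations (Guideline 1).
sparse = [cat for cat, n in enumerate(counts, start=1) if n < 10]
print(sparse)  # -> [1, 2, 4, 6, 8, 9]
```

With only 3 or 4 observations in a category, a single miscoded response shifts the log-ratio substantially, which is why such thresholds are unstable.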
Guideline 2: Regular observation distribution.
Irregularity in category observation frequency signals irregularity in usage. Look for a unimodal distribution, peaking in a central or an extreme category.
In the example: the observed distribution is a roller-coaster: 4, 4, 25, 8, 31, 6, 21, 3, 3.
Solution: combine adjacent categories, or omit the relevant observations (e.g., responses of "other").
Guideline 3: Average category measures advance.
The observed average measures (of the persons whose observations are in the category) are an empirical indicator of the context in which the category is used. Since higher categories are intended to reflect higher measures, the average measures are expected to advance monotonically with category.
In the example: the average measure for category 6 is noticeably less than that for category 5.
Solution: combine out-of-order categories with those below them.
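As a small illustration, a check of Guideline 3 can be sketched in Python. The measures below are hypothetical, patterned on the example's category 6 falling below category 5:

```python
def disordered_categories(avg_measures):
    """Return the 1-based categories whose observed average measure
    fails to advance on the category below (Guideline 3)."""
    return [k + 1 for k in range(1, len(avg_measures))
            if avg_measures[k] <= avg_measures[k - 1]]

# Hypothetical average measures: category 6 falls below category 5,
# so it is flagged as out of order.
measures = [-0.86, -0.11, 0.10, 0.20, 0.30, -0.46, 0.45, 0.74, 0.77]
print(disordered_categories(measures))  # -> [6]
```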
Guideline 4: OUTFIT mean-squares less than 2.0.
We model a definite amount of randomness in the choice of categories. This amount is indicated by a mean-square of 1.0. Values over 2.0 indicate that there is more unexpected than expected randomness. A high mean-square indicates that the category has been used in contexts in which a far different category was expected.
In the example: category 6 has a mean-square of 4.1.
Solution: omit the observations, combine categories, or drop categories.
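A hedged sketch of how a category's outfit mean-square is computed: it is the mean squared standardized residual over the observations in that category. The expected ratings and model variances would come from the fitted Rasch model; the numbers below are hypothetical.

```python
def category_outfit(observations):
    """Outfit mean-square for one category: the mean squared
    standardized residual over the observations in that category.
    Each observation is (rating, expected_rating, model_variance),
    where expectation and variance come from the Rasch model for
    that person-item encounter."""
    z_squared = [(x - e) ** 2 / v for x, e, v in observations]
    return sum(z_squared) / len(z_squared)

# Hypothetical observations of one category: two fit the model well,
# one is far from its expectation, inflating the mean-square.
obs = [(6, 5.8, 1.0), (6, 6.1, 1.0), (6, 3.0, 1.0)]
print(round(category_outfit(obs), 2))  # -> 3.02
```

A single grossly unexpected use of the category is enough to push the mean-square well above 2.0, which is why Guideline 4 points to off-dimensional "bad spots" in the data.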
Guideline 5: Andrich thresholds advance.
Advancing Andrich thresholds imply that each category in turn is the most likely to be chosen. This makes the probability curves look like a range of hills. Disordered Andrich thresholds imply that a category may never be the most probable as one advances along the variable. Categories with narrow definitions produce disordered Andrich thresholds, but disordered Andrich thresholds do not mean that the categories themselves are out of order. The decision to eliminate or combine narrow categories must be made substantively, based on the reasons for selecting the rating categories. For developmental scales, ordered thresholds support the interpretation that a rating of k implies having passed through the k−1 lower categories.
In the example: Andrich threshold 3 is less than Andrich threshold 2.
Solution: combine categories, or edit the data; ordered thresholds may not be attainable.
Guideline 6: Ratings imply measures, and measures imply ratings.
This property is useful for inference and confirms the construct validity of the rating scale. Most users of your findings will assume it holds. It holds when the observed average measure for each category approximates its expected value.
In the example: the most conspicuous failure is category 6. The observed average measure is −.46 logits; the expected average measure is .17 logits; the difference is 0.63 logits.
Solution: combine categories, or edit the data. A reasonable approximation is usually attainable.
Guideline 7: Andrich thresholds advance by at least 1.4 logits.
When all Andrich threshold advances are larger than 1.4 logits, the rating scale can, in theory, be decomposed into a series of independent dichotomous items. Even though such dichotomies may not be empirically meaningful, their possibility implies that the rating scale is equivalent to a subtest of (category count − 1) dichotomies. For developmental scales, this supports the interpretation that a rating of k implies the successful leaping of k hurdles.
The 1.4-logit criterion lessens as the number of categories increases. In general, for m + 1 categories to decompose into m dichotomous items, the minimum thresholds are ln(x / (m + 1 − x)) for x = 1 to m.
In the example: this is not seen, because of the threshold disordering.
Solution: combine categories, or edit the data; this criterion may not be attainable.
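The minimum-threshold formula can be computed directly. A short Python sketch (the function name is mine, introduced only for illustration):

```python
import math

def minimum_thresholds(m):
    """Smallest Andrich thresholds for which a scale of m + 1
    categories decomposes into m independent dichotomies:
    ln(x / (m + 1 - x)) for x = 1..m (Guideline 7)."""
    return [math.log(x / (m + 1 - x)) for x in range(1, m + 1)]

# For 3 categories (m = 2) the thresholds are +/- ln(2), so adjacent
# thresholds must advance by at least 2 * ln(2) ~ 1.39 logits,
# which is the source of the 1.4-logit criterion.
t = minimum_thresholds(2)
print([round(v, 2) for v in t])  # -> [-0.69, 0.69]
```

For more categories the required advances shrink: with m = 4 the outermost thresholds are ±ln(4), but the advances between adjacent thresholds are smaller than 1.4 logits.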
Guideline 8: Andrich thresholds advance by less than 5.0 logits.
When adjacent Andrich thresholds are too far apart, a category becomes too wide, and a less informative "dead zone" appears in the middle of the category. This corresponds to a sag in the statistical information available from the item. It often results from Guttman-style (forced-consensus) rating procedures.
In the example: this is not seen; the thresholds are close together.
Solution: define more categories, or change the rating procedures.
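Guidelines 5, 7, and 8 together bound how adjacent Andrich thresholds should advance: positively, by at least about 1.4 logits, and by less than 5.0 logits. A combined check might be sketched as follows (the threshold values are hypothetical):

```python
def check_threshold_advances(thresholds, lo=1.4, hi=5.0):
    """Classify each adjacent Andrich-threshold advance against
    Guidelines 5, 7, and 8: the advance should be positive,
    at least ~1.4 logits, and less than 5.0 logits."""
    report = []
    for a, b in zip(thresholds, thresholds[1:]):
        gap = b - a
        if gap <= 0:
            report.append("disordered")   # Guideline 5 fails
        elif gap < lo:
            report.append("too close")    # Guideline 7 fails
        elif gap >= hi:
            report.append("too wide")     # Guideline 8 fails
        else:
            report.append("ok")
    return report

# Hypothetical thresholds: one disordered pair, one healthy advance,
# one advance wide enough to create a dead zone.
print(check_threshold_advances([-1.0, -2.0, 0.5, 6.0]))
# -> ['disordered', 'ok', 'too wide']
```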
MESA Research Note #2 by John Michael Linacre
Midwest Objective Measurement Seminar, Chicago, June 1997
Our current URL is www.rasch.org
The URL of this page is www.rasch.org/rn2.htm