COMET November 2000

"Guidelines for Rasch Manuscripts"

There is a much more thorough version at Manuscript Guidelines for the Journal of Applied Measurement

Working Paper and Suggestions

A. Describing the problem
  1. Adequate references, at least:
    Reference to Rasch G. (1960/1980/1992)

  2. Adequate theory, at least:
    exact algebraic representation of the Rasch model(s) used

  3. Adequate description of the measurement problem:
    definition of latent variable,
    identification of facets,
    description of rating scales or response formats
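Item A.2 above asks for the exact algebra of the model used. As a reference point (not a substitute for stating whichever variant was actually fitted), the dichotomous Rasch model can be written:

```latex
P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}
```

where theta_n is the ability of person n and delta_i the difficulty of item i. Polytomous variants (rating scale, partial credit, many-facet) add threshold or facet parameters, and should be written out just as explicitly.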

B. Describing the analysis
  1. Name or adequate description of software or estimation methodology.

  2. Description of special procedures or precautions.

C. Reporting the analysis
  1. Map of linear variable as defined by items

  2. Map of distribution of sample on linear variable

  3. Report on functioning of rating scale(s), and of any procedures taken to improve measurement (e.g., category collapsing)

  4. Report on quality-control fit:
    investigation for secondary dimensions in items, persons, etc.
    investigation for local idiosyncrasies in items, persons, etc.

  5. Summary statistics on measurements:
    Separation & reliabilities
    inter-rater agreement characterization

  6. Special measurement concerns:
    Missing data: "not administered" or what?
    Folded data: how resolved?
    Nested data: how accommodated?
    Measurement vs. description facets: how disentangled?

D. Style and Terminology
  1. Use "score" for the raw score, and "measure" or "calibration" for Rasch-constructed linear measures.

  2. Avoid "item response theory" as a term for Rasch measurement.

  3. Rescale from logits to user-oriented scaling.
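Item D.3's rescaling is a linear transformation of the logit measures. A minimal sketch; the constants here (origin 50, 10 scale units per logit) are illustrative choices for this example, not a standard:

```python
def rescale(logit, center=0.0, spread=10.0, origin=50.0):
    """Linearly rescale a Rasch logit measure to a user-oriented scale.

    The defaults (10 scale units per logit, origin 50 at `center` logits)
    are illustrative only; choose values that suit your audience.
    """
    return origin + spread * (logit - center)

print(rescale(0.0))   # 50.0
print(rescale(1.5))   # 65.0
```

Because the transformation is linear, it preserves the interval properties of the logit scale while giving readers friendlier numbers.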

Presented on Nov. 16, 2000, by John Michael Linacre, MESA Psychometric Laboratory, University of Chicago. Comments are welcome.

from Richard M. Smith, Editor, Journal of Applied Measurement


Common Oversights in Rasch Studies

1. Taking the mean and standard deviation of point biserial correlations. These statistics are more non-linear than the raw scores we so often criticize. It is better to report the median and interquartile range, or, if you must report a mean, to apply a Fisher z transformation to the correlations before averaging.
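The recommendation above can be sketched with Python's standard library; the correlation values are hypothetical:

```python
import math
import statistics

def fisher_z_mean(rs):
    """Mean of correlations via Fisher z: average atanh(r), back-transform with tanh."""
    zs = [math.atanh(r) for r in rs]
    return math.tanh(statistics.mean(zs))

def median_iqr(rs):
    """Median and interquartile range -- safer summaries for correlations."""
    q1, q2, q3 = statistics.quantiles(rs, n=4)
    return q2, q3 - q1

# Hypothetical point biserials for a small item set:
rs = [0.35, 0.42, 0.48, 0.55, 0.71, 0.74, 0.78]
print(round(fisher_z_mean(rs), 3))
med, iqr = median_iqr(rs)
print(med, round(iqr, 2))
```

Note that the Fisher-z mean and the arithmetic mean of the raw correlations diverge as the correlations spread out, which is exactly why averaging raw values is risky.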

2. The mean square is not a symmetric statistic: a value of 0.7 is further from 1.0 than 1.3 is. If you want a symmetric cutoff, use 1.3 and 1.0/1.3 = 0.77.
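A minimal sketch of a multiplicatively symmetric mean-square band (the 1.3 bound is the example used above, not a universal rule):

```python
def flag_misfit(mean_square, cutoff=1.3):
    """Flag a mean-square as misfitting using a multiplicatively symmetric band.

    Mean-squares are ratios with expectation 1.0, so the lower bound that
    matches an upper bound of 1.3 is 1/1.3 (about 0.77), not 0.7.
    """
    return mean_square > cutoff or mean_square < 1.0 / cutoff

print(flag_misfit(0.7))    # True: 0.7 is below 1/1.3
print(flag_misfit(0.8))    # False: inside the band
print(flag_misfit(1.35))   # True: above 1.3
```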

3. Fit statistics for small sample sizes are very unstable. One or two unusual responses can produce a large fit statistic. Look at Table 11.1 in BIGSTEPS for a misfitting item and count the item/person residuals larger than 2.0. You might be surprised how few there are. Do you really want to drop an item because of a few unexpected responses?
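The counting exercise above might look like this, with hypothetical standardized residuals standing in for a BIGSTEPS Table 11.1 listing:

```python
def count_large_residuals(std_residuals, threshold=2.0):
    """Count standardized residuals whose magnitude exceeds the threshold."""
    return sum(1 for z in std_residuals if abs(z) > threshold)

# Hypothetical standardized residuals for one item across 12 persons:
item_residuals = [0.3, -1.1, 0.8, 2.4, -0.2, 1.6, -0.7, 0.1, -2.9, 0.5, 1.2, -0.4]
n = count_large_residuals(item_residuals)
print(n)   # only 2 of 12 responses are "unexpected"
```

Here a large item mean-square could be driven by just those two responses, which is the article's point: inspect before dropping.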

4. It is extremely difficult to make decisions about the use of response categories in the rating scale or partial credit model when there are fewer than 30 persons in the sample. You might want to reserve that task for samples that are a little larger. If the person distribution is skewed, you may need even larger samples, since one tail of the distribution will not be well populated. The same is true if the sample mean is offset from the mean of the item difficulties: there will be few observations in the extreme categories of the items opposite the concentration of the persons.
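Before interpreting category thresholds, it helps to tabulate how many observations actually land in each category; sparsely observed categories yield unstable estimates, which is the point above. A sketch with invented responses (the threshold of 10 observations per category is an illustrative rule of thumb, not a fixed standard):

```python
from collections import Counter

def category_counts(responses):
    """Tabulate how many observations fall in each rating-scale category."""
    return Counter(responses)

# Hypothetical responses to one 4-category item (0-3) from a small sample:
responses = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2, 3, 2, 2, 3, 0]
counts = category_counts(responses)
for cat in sorted(counts):
    print(cat, counts[cat])

sparse = [c for c in sorted(counts) if counts[c] < 10]
print("sparsely observed categories:", sparse)
```

With 15 persons, every category is sparse here, illustrating why category-level decisions are premature with small samples.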

5. All of the point biserial correlations being greater than 0.30 in the rating scale and partial credit models does not lend much support to the concept of unidimensionality. It is often the case that the median point biserial in rating scale or partial credit data is well above 0.70. In that situation, a number of items in the 0.30 to 0.40 range would be a good sign of multidimensionality.
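One hedged way to operationalize the point above: compare each item's point biserial with the batch median rather than a fixed 0.30 floor. The 0.30 gap below the median used here is an illustrative margin of my own choosing, not an established cutoff:

```python
import statistics

def items_below_median_band(pt_biserials, gap=0.30):
    """Return indices of items whose point biserial sits well below the median.

    `gap` is an illustrative margin, not an established cutoff.
    """
    med = statistics.median(pt_biserials)
    return [i for i, r in enumerate(pt_biserials) if r < med - gap]

# Hypothetical point biserials: most items cluster near 0.7, two sit apart.
rs = [0.72, 0.75, 0.68, 0.78, 0.35, 0.38, 0.74, 0.71]
print(statistics.median(rs))
print(items_below_median_band(rs))   # the two outlying items
```

All eight items pass a 0.30 floor, yet the two items far below the median are exactly the pattern the article flags as a possible second dimension.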

6. Reliability was originally conceptualized as the ratio of true variance to observed variance. Since the true score model offered no way to estimate the SEM, a variety of methods were developed to estimate reliability without knowing it. In the Rasch model it is possible to approach reliability the way it was originally intended, rather than settling for a less-than-ideal workaround. Don't apologize.
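In Rasch terms the original definition can be applied directly: true variance is the observed variance of the person measures minus the error variance (the mean squared standard error). A sketch with hypothetical measures and standard errors:

```python
import statistics

def rasch_reliability(measures, std_errors):
    """Reliability as true variance / observed variance.

    Observed variance comes from the person measures; error variance is the
    mean of the squared standard errors; true variance is their difference.
    """
    obs_var = statistics.pvariance(measures)
    err_var = statistics.mean(se ** 2 for se in std_errors)
    true_var = max(obs_var - err_var, 0.0)
    return true_var / obs_var

# Hypothetical person measures (logits) and their standard errors:
measures = [-1.2, -0.4, 0.1, 0.6, 1.4, 2.0]
ses = [0.45, 0.40, 0.38, 0.39, 0.42, 0.50]
print(round(rasch_reliability(measures, ses), 2))
```

No split-half or internal-consistency workaround is needed: the standard errors supply the error variance directly.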




