COMET November 2000

"Guidelines for Rasch Manuscripts"

There is a much more thorough version at Manuscript Guidelines for the Journal of Applied Measurement

Working Paper and Suggestions

A. Describing the problem
  1. Adequate references, at least:
    Reference to Rasch G. (1960/1980/1992)

  2. Adequate theory, at least:
    exact algebraic representation of the Rasch model(s) used

  3. Adequate description of the measurement problem:
    definition of latent variable,
    identification of facets,
    description of rating scales or response formats
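As a reminder of what A.2 asks for, the dichotomous Rasch model is conventionally written as:

```latex
\log \left( \frac{P_{ni1}}{P_{ni0}} \right) = B_n - D_i
```

where P_ni1 is the probability that person n succeeds on item i, P_ni0 the probability of failure, B_n the person's ability, and D_i the item's difficulty. Polytomous variants (rating scale, partial credit, many-facet) extend this form with threshold and facet terms, and the manuscript should state whichever version was actually estimated.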

B. Describing the analysis
  1. Name or adequate description of software or estimation methodology.

  2. Description of special procedures or precautions.

C. Reporting the analysis
  1. Map of linear variable as defined by items

  2. Map of distribution of sample on linear variable

  3. Report on functioning of rating scale(s), and of any procedures taken to improve measurement (e.g., category collapsing)

  4. Report on quality-control fit:
    investigation for secondary dimensions in items, persons, etc.
    investigation for local idiosyncrasies in items, persons, etc.

  5. Summary statistics on measurements:
    Separation & reliabilities
    inter-rater agreement characterization

  6. Special measurement concerns:
    Missing data: "not administered" or what?
    Folded data: how resolved?
    Nested data: how accommodated?
    Measurement vs. description facets: how disentangled?

D. Style and Terminology
  1. Use "score" for raw scores, and "measure" or "calibration" for Rasch-constructed linear measures.

  2. Avoid "item response theory" as a term for Rasch measurement.

  3. Rescale from logits to user-oriented scaling.

Presented on Nov. 16, 2000, by John Michael Linacre, MESA Psychometric Laboratory, University of Chicago. Comments welcomed.

from Richard M. Smith, Editor, Journal of Applied Measurement


Common Oversights in Rasch Studies

1. Taking the mean and standard deviation of point-biserial correlations. These statistics are more non-linear than the raw scores that we often criticize. It is better to report the median and interquartile range, or, if you must report a mean, to apply a Fisher z transformation before averaging.
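A minimal sketch of the two remedies suggested above, using only the Python standard library (the correlation values are made up for illustration):

```python
import math
import statistics

def fisher_mean(correlations):
    """Average correlations on the Fisher z scale (atanh), then transform
    back with tanh. The z transform linearizes r, so averaging is defensible."""
    zs = [math.atanh(r) for r in correlations]
    return math.tanh(statistics.mean(zs))

def median_iqr(correlations):
    """Median and interquartile range: robust summaries for non-linear r."""
    q1, _, q3 = statistics.quantiles(correlations, n=4)
    return statistics.median(correlations), q3 - q1

# Illustrative point-biserial values, not from any real analysis.
r = [0.25, 0.40, 0.55, 0.60, 0.85]
print(fisher_mean(r))   # larger than the naive arithmetic mean
print(median_iqr(r))
```

Note that the Fisher-z mean exceeds the naive arithmetic mean whenever the correlations are positive and unequal, because atanh stretches the scale near 1.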

2. The mean square is not a symmetric statistic: a value of 0.7 is further from 1.0 than 1.3 is. If you want a symmetrical cutoff, use 1.3 and 1.0/1.3 = 0.77.
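The asymmetry is easy to verify: mean-squares are ratios, so "distance from 1.0" is naturally measured on a log scale, where the mirror of an upper cutoff is its reciprocal.

```python
import math

# 0.7 is further from 1.0 than 1.3 is, once distance is measured on
# the log scale appropriate to a ratio statistic.
print(abs(math.log(0.7)), abs(math.log(1.3)))

def symmetric_lower(upper):
    """Lower mean-square cutoff that mirrors `upper` on the log scale."""
    return 1.0 / upper

print(round(symmetric_lower(1.3), 2))  # 0.77
```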

3. Fit statistics for small sample sizes are very unstable: one or two unusual responses can produce a large fit statistic. Look at Table 11.1 in BIGSTEPS for misfitting items and count the item/person residuals that are larger than 2.0. You might be surprised how few there are. Do you want to drop an item just because of a few unexpected responses?
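The counting step above can be sketched as follows; the residual grid is invented for illustration, standing in for the per-response standardized residuals a Rasch program reports:

```python
# Hypothetical standardized residuals (rows = persons, columns = items).
residuals = [
    [0.3, -1.1,  2.4,  0.8],
    [-0.5, 0.2, -2.7,  1.9],
    [1.2, -0.4,  0.6, -0.9],
]

# Count the genuinely surprising responses (|residual| > 2.0).
flagged = sum(1 for row in residuals for z in row if abs(z) > 2.0)
total = sum(len(row) for row in residuals)
print(f"{flagged} of {total} responses exceed |2.0|")
```

If only a handful of responses out of hundreds are flagged, a large item fit statistic may reflect those few responses rather than a defective item.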

4. It is extremely difficult to make decisions about the use of response categories in the rating scale or partial credit model when there are fewer than 30 persons in the sample. You might want to reserve that task for samples that are a little larger. If the person distribution is skewed, you may need even larger samples, since one tail of the distribution will not be well populated. The same is true if the sample mean is offset from the mean of the item difficulties: there will be few observations in the extreme categories of the items opposite the concentration of persons.
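A quick tabulation makes the problem concrete. The ratings below are invented for illustration; the point is that a sparsely observed category (here, category 1) cannot support stable threshold estimates, and a common rule of thumb asks for at least 10 observations per category:

```python
from collections import Counter

# Hypothetical ratings on a 1-4 scale from a small sample.
ratings = [2, 3, 3, 2, 4, 3, 2, 1, 3, 2, 3, 4, 2, 3, 3]

counts = Counter(ratings)
for category in sorted(counts):
    print(category, counts[category])
```

With a skewed or off-target sample, the shortfall concentrates in the extreme categories, exactly as described above.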

5. All of the point-biserial correlations being greater than 0.30 in the rating scale and partial credit models does not lend much support to the concept of unidimensionality. It is often the case that the median point-biserial in rating scale or partial credit data is well above 0.70. In that situation, a number of items in the 0.30 to 0.40 range would be a good sign of multidimensionality.

6. Reliability was originally conceptualized as the ratio of the true variance to the observed variance. Since the true-score model offered no way to estimate the SEM, a variety of methods were developed to estimate reliability without knowing it. In the Rasch model it is possible to approach reliability the way it was originally intended, rather than using a less-than-ideal substitute. Don't apologize.
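Because each Rasch measure comes with its own standard error, the error variance can be estimated directly and reliability computed as the original ratio. A minimal sketch, with illustrative measures and standard errors in logits (not output from any real program):

```python
import statistics

# Hypothetical person measures and their standard errors, in logits.
measures = [-1.2, -0.4, 0.1, 0.7, 1.5, 2.0]
std_errors = [0.45, 0.40, 0.38, 0.39, 0.42, 0.50]

observed_var = statistics.pvariance(measures)
error_var = statistics.mean(se ** 2 for se in std_errors)
true_var = observed_var - error_var

# Reliability as originally conceived: true variance / observed variance.
reliability = true_var / observed_var
# Separation: true spread expressed in standard-error units.
separation = (true_var / error_var) ** 0.5

print(round(reliability, 2), round(separation, 2))
```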
