COMET November 2000

"Guidelines for Rasch Manuscripts"

There is a much more thorough version at Manuscript Guidelines for the Journal of Applied Measurement

Working Paper and Suggestions

A. Describing the problem
  1. Adequate references, at least:
    Reference to Rasch G. (1960/1980/1992)

  2. Adequate theory, at least:
    exact algebraic representation of the Rasch model(s) used (see the example below this list)

  3. Adequate description of the measurement problem:
    definition of latent variable,
    identification of facets,
    description of rating scales or response formats
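
For example (an illustration only; a manuscript should state the model actually used), the dichotomous Rasch model is

  $$P(X_{ni} = 1) = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}$$

where $\beta_n$ is the ability of person $n$ and $\delta_i$ is the difficulty of item $i$.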

B. Describing the analysis
  1. Name or adequate description of software or estimation methodology.

  2. Description of special procedures or precautions.

C. Reporting the analysis
  1. Map of linear variable as defined by items

  2. Map of distribution of sample on linear variable

  3. Report on functioning of rating scale(s), and of any procedures taken to improve measurement (e.g., category collapsing)

  4. Report on quality-control fit:
    investigation for secondary dimensions in items, persons, etc.
    investigation for local idiosyncrasies in items, persons, etc.

  5. Summary statistics on measurements:
    Separation & reliabilities
    inter-rater agreement characterization

  6. Special measurement concerns:
    Missing data: "not administered" or what?
    Folded data: how resolved?
    Nested data: how accommodated?
    Measurement vs. description facets: how disentangled?

D. Style and Terminology
  1. Use "Score" for "Raw Score", "Measure" or "calibration" for Rasch-constructed linear measures.

  2. Avoid "item response theory" as a term for Rasch measurement.

  3. Rescale from logits to a user-oriented scaling (see the example below).
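
As an illustration of D.3 (the slope and intercept here are arbitrary choices, not prescriptions), a linear rescaling

  $$\text{user measure} = a \times \text{logit measure} + b$$

with, say, $a = 10$ and $b = 50$ reports a measure of $-1.5$ logits as $50 + 10 \times (-1.5) = 35$ user units, avoiding negative values for most samples.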

Presented on Nov. 16, 2000, by John Michael Linacre, with comments welcomed in the comments box at www.winsteps.com , MESA Psychometric Laboratory, University of Chicago


from Richard M. Smith, Editor, Journal of Applied Measurement

Common Oversights in Rasch Studies

1. Taking the mean and standard deviation of point-biserial correlations. These statistics are more non-linear than the raw scores that we often criticize. It is better to report the median and interquartile range or, if you must report a mean, to apply a Fisher z transformation before averaging.
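
A minimal sketch of the distinction in NumPy (the correlation values are made up for illustration):

  import numpy as np

  # illustrative point-biserial correlations (not real data)
  r = np.array([0.45, 0.52, 0.61, 0.38, 0.70, 0.55])

  median = np.median(r)
  iqr = np.percentile(r, 75) - np.percentile(r, 25)

  z = np.arctanh(r)           # Fisher z transformation
  mean_r = np.tanh(z.mean())  # back-transform the mean z to the r metric

  print(f"median = {median:.2f}, IQR = {iqr:.2f}, Fisher-z mean = {mean_r:.2f}")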

2. The mean square is not a symmetric statistic: a value of 0.7 is further from 1.0 than 1.3 is. If you want a symmetrical cutoff, use 1.3 and 1.0/1.3, or 0.77.
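
The asymmetry is easy to verify on the log scale: $|\ln 0.7| \approx 0.36$ but $|\ln 1.3| \approx 0.26$, whereas $|\ln(1.0/1.3)| = |\ln 1.3| \approx 0.26$, so 1.3 and 0.77 are equidistant from 1.0 multiplicatively.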

3. Fit statistics for small sample sizes are very unstable: one or two unusual responses can produce a large fit statistic. Look at Table 11.1 in BIGSTEPS for misfitting items and count the item/person residuals that are larger than 2.0. You might be surprised how few there are. Do you want to drop an item just because of a few unexpected responses?
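
A hedged sketch of the kind of count worth making before dropping an item (the residual matrix here is simulated placeholder data, not BIGSTEPS output):

  import numpy as np

  # simulated persons x items matrix of standardized residuals (illustration only)
  std_resid = np.random.default_rng(0).normal(size=(30, 20))

  large = np.abs(std_resid) > 2.0
  print(f"{large.sum()} of {std_resid.size} residuals exceed |2.0|")
  print("per-item counts:", large.sum(axis=0))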

4. It is extremely difficult to make decisions about the use of response categories in the rating scale or partial credit model when there are fewer than 30 persons in the sample. You might want to reserve that task for samples that are a little larger. If the person distribution is skewed, you might need even larger samples, since one tail of the distribution will not be well populated. The same is true if the sample mean is offset from the mean of the item difficulties: there will be few observations in the extreme categories of the items located opposite the concentration of persons.
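
A minimal sketch, assuming responses coded 0..m, of tallying category frequencies before judging category functioning (the 10-observations-per-category floor is a common rule of thumb, not stated above):

  import numpy as np

  # illustrative response codes (not real data)
  responses = np.array([0, 1, 1, 2, 2, 2, 3, 1, 2, 0, 3, 2])

  categories, counts = np.unique(responses, return_counts=True)
  for c, n in zip(categories, counts):
      flag = "  <- thinly populated" if n < 10 else ""
      print(f"category {c}: {n} observations{flag}")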

5. All of the point-biserial correlations being greater than 0.30 in the rating scale and partial credit models does not lend much support to the concept of unidimensionality. It is often the case that the median point-biserial in rating scale or partial credit data is well above 0.70. In that situation, a number of items in the 0.30 to 0.40 range would be a good sign of multidimensionality.
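
A sketch of the comparison being suggested, using corrected item-total correlations as the point-biserial-type statistic (the ratings are simulated, so the printed values are meaningless; the 0.30 gap below the median is an arbitrary screening threshold):

  import numpy as np

  # simulated persons x items ratings; real data would come from the analysis
  rng = np.random.default_rng(1)
  X = rng.integers(0, 4, size=(100, 10)).astype(float)

  total = X.sum(axis=1)
  # corrected item-total correlation: item vs. total excluding that item
  r = np.array([np.corrcoef(X[:, i], total - X[:, i])[0, 1] for i in range(10)])

  med = np.median(r)
  for i, ri in enumerate(r, start=1):
      flag = "  <- well below the median" if ri < med - 0.30 else ""
      print(f"item {i}: r = {ri:+.2f}{flag}")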

6. Reliability was originally conceptualized as the ratio of the true variance to the observed variance. Since the true-score model offered no way to estimate the SEM, a variety of methods were developed to estimate reliability without knowing it. In the Rasch model it is possible to approach reliability the way it was originally intended, rather than settling for a less-than-ideal substitute. Don't apologize.
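
In Rasch terms this can be written directly, since every measure carries its own standard error (the notation here follows common Winsteps/BIGSTEPS usage):

  $$R = \frac{\sigma^2_{\text{true}}}{\sigma^2_{\text{obs}}} = \frac{\sigma^2_{\text{obs}} - \overline{SE^2}}{\sigma^2_{\text{obs}}}, \qquad G = \frac{\sigma_{\text{true}}}{\sqrt{\overline{SE^2}}}, \qquad R = \frac{G^2}{1 + G^2}$$

where $\overline{SE^2}$, the mean squared standard error of the measures, estimates the error variance, and $G$ is the separation.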

