COMET November 2000

"Guidelines for Rasch Manuscripts"

A much more thorough version is available as the Manuscript Guidelines for the Journal of Applied Measurement.

Working Paper and Suggestions

A. Describing the problem
  1. Adequate references, at least:
    Reference to Rasch G. (1960/1980/1992)

  2. Adequate theory, at least:
    exact algebraic representation of the Rasch model(s) used (see the example following this section)

  3. Adequate description of the measurement problem:
    definition of latent variable,
    identification of facets,
    description of rating scales or response formats
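
For example, the dichotomous Rasch model and the Andrich rating scale model are often written in log-odds form (the notation here is one common convention):

  log( Pni1 / Pni0 ) = Bn - Di                 (dichotomous model)
  log( Pnik / Pni(k-1) ) = Bn - Di - Fk        (rating scale model)

where Pnik is the probability that person n responds in category k to item i, Bn is the person measure, Di is the item difficulty, and Fk is the kth rating-scale threshold.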

B. Describing the analysis
  1. Name or adequate description of the software or estimation methodology used (an illustrative estimation sketch follows this section).

  2. Description of special procedures or precautions.
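
As a minimal illustration of one estimation approach (a sketch only, with hypothetical names, not the method of any particular package): Newton-Raphson maximum-likelihood estimation of a single person measure from dichotomous responses to items with known calibrations.

import math

def person_measure(responses, item_difficulties, tol=1e-6, max_iter=100):
    # responses: list of 0/1 responses; item_difficulties: logits.
    # Illustrative sketch only; real software also handles extreme scores,
    # polytomies, and joint estimation of persons and items.
    score = sum(responses)
    if score == 0 or score == len(responses):
        raise ValueError("extreme score: no finite maximum-likelihood estimate")
    theta = 0.0  # starting value, in logits
    for _ in range(max_iter):
        probs = [1.0 / (1.0 + math.exp(-(theta - d))) for d in item_difficulties]
        expected = sum(probs)                       # model-expected raw score
        variance = sum(p * (1.0 - p) for p in probs)
        step = (score - expected) / variance        # Newton-Raphson step
        theta += step
        if abs(step) < tol:
            break
    return theta

# Example: five items of known difficulty, three answered correctly
print(person_measure([1, 1, 1, 0, 0], [-1.0, -0.5, 0.0, 0.5, 1.0]))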

C. Reporting the analysis
  1. Map of linear variable as defined by items

  2. Map of distribution of sample on linear variable

  3. Report on the functioning of the rating scale(s), and on any procedures taken to improve measurement (e.g., category collapsing)

  4. Report on quality-control fit:
    investigation for secondary dimensions in items, persons, etc.
    investigation for local idiosyncrasies in items, persons, etc.

  5. Summary statistics on measurements:
    Separation & reliabilities
    inter-rater agreement characterization

  6. Special measurement concerns:
    Missing data: "not administered" or what?
    Folded data: how resolved?
    Nested data: how accommodated?
    Measurement vs. description facets: how disentangled?
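
One common way of stating how these summary statistics relate (shown here for persons; the item versions are analogous):

  true variance = observed variance of the measures - mean-square measurement error
  separation G  = true standard deviation / root-mean-square standard error
  reliability R = true variance / observed variance = G^2 / (1 + G^2)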

D. Style and Terminology
  1. Use "Score" for "Raw Score", "Measure" or "calibration" for Rasch-constructed linear measures.

  2. Avoid "item response theory" as a term for Rasch measurement.

  3. Rescale from logits to a user-oriented scale, as in the example below.
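
For example (the constants here are arbitrary):

  user-scaled measure = M + S x (measure in logits)

With M = 500 and S = 100, a person measure of -1.2 logits is reported as 380, and an item calibration of +0.7 logits as 570.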

Presented on Nov. 16, 2000, by John Michael Linacre, MESA Psychometric Laboratory, University of Chicago; comments are welcomed in the comments box at www.winsteps.com.


from Richard M. Smith, Editor, Journal of Applied Measurement


Common Oversights in Rasch Studies

1. Taking the mean and standard deviation of point-biserial correlations. Correlations are even more non-linear than the raw scores that we often criticize. It is better to report the median and interquartile range, or, if you must report a mean, to apply a Fisher z transformation to the correlations before averaging.
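
A minimal sketch of that transformation (the function name is illustrative):

import math

def fisher_mean(correlations):
    # Fisher z: z = atanh(r); average the z values, then back-transform.
    zs = [math.atanh(r) for r in correlations]
    return math.tanh(sum(zs) / len(zs))

print(fisher_mean([0.30, 0.50, 0.80]))   # about 0.57, vs. an arithmetic mean of 0.53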

2. The mean square is not a symmetric statistic: a value of 0.7 is further from 1.0 than 1.3 is. If you want to use a symmetrical cutoff, use 1.3 and 1.0/1.3 ≈ 0.77.

3. Fit statistics for small sample sizes are very unstable. One or two unusual responses can produce a large fit statistic. Look at Table 11.1 in BIGSTEPS for the misfitting items and count the number of item/person standardized residuals larger than 2.0 in absolute value. You might be surprised how few there are. Do you want to drop an item just because of a few unexpected responses?
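
For dichotomous data, such a count can be made directly from the residuals; a sketch assuming person and item measures have already been estimated (names are illustrative):

import math

def count_large_residuals(responses, abilities, difficulties, cutoff=2.0):
    # responses[n][i] is person n's 0/1 response to item i;
    # abilities and difficulties are Rasch measures in logits.
    count = 0
    for n, row in enumerate(responses):
        for i, x in enumerate(row):
            p = 1.0 / (1.0 + math.exp(-(abilities[n] - difficulties[i])))
            z = (x - p) / math.sqrt(p * (1.0 - p))   # standardized residual
            if abs(z) > cutoff:
                count += 1
    return count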

4. It is extremely difficult to make decisions about the use of response categories in the rating scale or partial credit model if there are fewer than 30 persons in the sample. You might want to reserve that task for samples that are a little larger. If the sample's person distribution is skewed, you might actually need an even larger sample, since one tail of the distribution will not be well populated. The same is true if the sample mean is offset from the mean of the item difficulties: there will be few observations in the extreme categories of the items located opposite the concentration of persons.

5. All of the point-biserial correlations being greater than 0.30 does not, in the rating scale and partial credit models, lend much support to the concept of unidimensionality. It is often the case that the median point-biserial in rating scale or partial credit data is well above 0.70. In that situation, a number of items in the 0.30 to 0.40 range would be a good sign of multidimensionality.

6. Reliability was originally conceptualized as the ratio of the true variance to the observed variance. Since the true score model offered no method of estimating the SEM, a variety of methods were developed to estimate reliability without knowing the SEM. In the Rasch model it is possible to approach reliability the way it was originally intended, rather than using a less-than-ideal solution. Don't apologize.
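
In Rasch terms, that original ratio can be computed directly from the person measures and their standard errors; a sketch (not the exact formula of any particular program):

def rasch_reliability(measures, standard_errors):
    # reliability = true variance / observed variance,
    # where true variance = observed variance - mean error variance.
    n = len(measures)
    mean = sum(measures) / n
    observed_var = sum((m - mean) ** 2 for m in measures) / (n - 1)
    error_var = sum(se ** 2 for se in standard_errors) / n
    true_var = max(observed_var - error_var, 0.0)
    return true_var / observed_var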





