Anchoring & Standard-Errors

Item bank construction, computer-adaptive testing and test equating are some occasions when item difficulty estimates are used as though they were exact values. But exact values can never be observed; all we ever have are estimates. How much does using estimates as exact values affect later measurement?

Wright & Panchapakesan (1969) found that treating item difficulty estimates as exact values had negligible effect on the ability measures. Robert Mislevy has published further reassuring research: "The variance of Rasch ability estimates from partially-known item parameters" (RR-92-9-ONR, ETS, Princeton NJ, 1992). He finds that anchoring item difficulties at previous estimates in order to measure person abilities from new data only slightly lessens measurement precision.

Rasch calibration programs generally report a modelled asymptotic standard error for each measure. This is the smallest possible value of the standard error, i.e., the highest possible precision the measure could have. When anchored item estimates are derived from a calibrating test administered to only a few people, then those item difficulties are necessarily imprecise. This imprecision carries forward into later measures computed using those difficulties. This extra imprecision can be acknowledged by inflating these measures' standard errors.
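One simple way to acknowledge this extra imprecision is to add the error variance carried forward from the anchors to the modelled error variance, and report the square root of the sum. The sketch below illustrates that arithmetic; the function name and the numbers are illustrative assumptions, not taken from any particular calibration program.

```python
import math

def inflated_se(modelled_se, anchor_se_contribution):
    """Combine a measure's modelled asymptotic standard error with the
    extra error variance propagated from imprecise anchor values.
    Error variances add; standard errors combine as a root-sum-square."""
    return math.sqrt(modelled_se**2 + anchor_se_contribution**2)

# Illustrative values: a modelled SE of 0.20 logits inflates only
# slightly when the anchors contribute 0.05 logits of extra error.
print(round(inflated_se(0.20, 0.05), 3))
```

Because the variances, not the standard errors, are what add, a small anchor contribution inflates the reported standard error only marginally.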

Mislevy reports that, for a reasonably constructed calibrating test, "even with a calibration sample of only 50 examinees, estimation variance for subsequent [targeted ability] estimates increases by only about 5 percent." This corresponds to a 2.5% increase in standard error - a trivial amount. Since the increase in error variance is inversely proportional to the size of the calibrating sample, the increase in standard error reduces to about 1% for a calibrating sample of 125. Such increases are considerably less than the typical inflation in error size made when the analyst encounters unmodelled misfit in the data.
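The arithmetic behind these figures can be sketched as follows. Since the error variance increase is inversely proportional to the calibrating sample size, and a sample of 50 produces about a 5% variance increase, the variance inflation factor is roughly 1 + 2.5/n; the standard error grows by the square root of that factor. The constant 2.5 here is simply back-fitted to Mislevy's 5%-at-50 figure, not a universal parameter.

```python
import math

def se_increase(n_calibration, k=2.5):
    """Approximate fractional increase in a later measure's standard
    error when item difficulties are anchored at estimates from a
    calibrating sample of n_calibration examinees.  Error variance
    grows by about k/n (k = 2.5 matches the 5% increase at n = 50);
    the standard error grows by the square root of that factor."""
    variance_factor = 1.0 + k / n_calibration
    return math.sqrt(variance_factor) - 1.0

for n in (50, 125, 500):
    print(f"n = {n:4d}: SE increase = {100 * se_increase(n):.1f}%")
```

This reproduces the figures in the text: about 2.5% at n = 50 and about 1% at n = 125, shrinking rapidly as the calibrating sample grows.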

For practical purposes, the imprecision in anchor values can be ignored. Quality control is still required, however, to ensure that anchored items function in qualitatively the same way whenever they are used. Noticeable changes in an item's difficulty are more often caused by a substantive change in item effect than by some random effect in the distribution of the persons' responses.

Anchoring & Standard-Errors, B Wright … Rasch Measurement Transactions, 1993, 6:4 p. 259
