The standard error of a measure captures its precision in a particular context. The accuracy of a measure is captured by fit statistics. A measure may be accurate but imprecise: the distance from New York to Los Angeles is accurately, but imprecisely, 2,000 miles. A measure may be inaccurate but precise: the primary mirror of the Hubble Space Telescope was ground precisely, but inaccurately. After corrective optics were installed to compensate for the inaccuracy, the extra precision in grinding the mirror allowed the telescope to function better than planned.
Plus or minus one standard error defines the interval around the true measure within which we expect an estimated measure to fall 68% of the time. So we expect 68% of estimates to fall within 1 S.E. of the true measure. But, because we do not know the true measure, we usually reverse this statement, even though it is not strictly correct: the standard error defines the interval around the estimated measure within which we expect the true measure to fall 68% of the time. Given many estimates, we would expect 68% of them to lie within 1 S.E. of the true measure, whatever that is. Only for an estimate that coincides with the true measure would 68% of all estimates lie within 1 S.E. of it. In general, we would expect to find 68% of other, similar estimates within 1.4 * S.E. of an estimated measure chosen at random.
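The 1.4 factor arises because the errors of two independent estimates add in quadrature: the standard error of their difference is sqrt(2), about 1.4, times a single S.E. A minimal sketch (the numeric values are illustrative only):

```python
import math

def se_of_difference(se_a, se_b):
    """Standard error of the difference between two independent
    estimates: the individual errors add in quadrature."""
    return math.sqrt(se_a**2 + se_b**2)

# Two estimates with the same standard error:
se = 0.14
print(round(se_of_difference(se, se), 2))        # sqrt(2) * 0.14
print(round(se_of_difference(se, se) / se, 2))   # the "1.4 * S.E." factor
```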
Raw scores are almost always reported without their standard errors. Sometimes even Rasch measures are reported without their standard errors! This can mislead the reader into thinking the results are more precise than they really are. Statistical programs often report parameter estimates to many decimal places, giving them an apparent precision far greater than their actual precision.
When Rasch standard errors are reported, they are usually one of three possible kinds.
1) Local "Reference Item" Standard Error or "General" Standard error
In order to obtain a self-consistent and reproducible set of parameter estimates, constraints must be introduced into the estimation procedure. An origin for the scale must be set. One choice is to establish the 0 logit point at the difficulty of a particular item, often the first on the test. All other measures then become relative to this item. Accordingly, the "0 logit" item is reported with perfect precision, i.e., a 0.00 logit standard error, or its standard error is not reported at all. Rasch measures estimated by log-linear analysis with conventional statistical packages are usually reported this way, e.g., Agresti (1993).
This kind of standard error is of limited value. The origin of a scale is a conceptual ideal, but an item is an imperfect reality. Consequently, no item can really be at the origin of the scale; it can only be usefully close to it.
The choice of which item to anchor, and at what measure, is arbitrary. In the Table, anchoring Item 1 at 0.00 logits produces the same set of measures as anchoring Item 2 at 1.23 logits, but the standard errors differ. They depend on the precision of each item measure relative to the measure of the arbitrarily chosen reference item.
Item   Item 1 anchored at 0.00    Item 2 anchored at 1.23
       Measure +- S.E.            Measure +- S.E.
  1      0.00 +- 0.00               0.00 +- 0.14
  2      1.23 +- 0.14               1.23 +- 0.00
  3      2.46 +- 0.15               2.46 +- 0.25
This local, specific interpretation of the standard error limits its usefulness in more general contexts such as comparing measures across tests and building item banks.
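The dependence of reported standard errors on the choice of reference item can be sketched as follows. This is an illustrative simplification: the S.E. values are hypothetical, and it treats the item estimates as statistically independent, which joint estimation need not satisfy, so it will not reproduce the Table exactly.

```python
import math

# Hypothetical "general" standard errors for three items, chosen
# only for illustration (not the values in the Table above).
general_se = {1: 0.10, 2: 0.10, 3: 0.12}

def anchored_se(item, reference):
    """Approximate S.E. of an item measure when another item is anchored
    as the scale origin: under an independence assumption, the errors of
    the target and reference estimates add in quadrature. The reference
    item itself is reported with S.E. = 0."""
    if item == reference:
        return 0.0
    return math.sqrt(general_se[item]**2 + general_se[reference]**2)

# Anchoring Item 1 versus anchoring Item 2 changes every reported S.E.:
for ref in (1, 2):
    print(ref, [round(anchored_se(i, ref), 2) for i in (1, 2, 3)])
```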
2) Model "Ideal" Standard Error
The highest possible precision for any measure is that obtained when every other measure is known, and the data fit the Rasch model. This standard error is called the "model" standard error and is reported by most production-oriented Rasch software, such as BIGSTEPS. For well-constructed tests with clean data (as confirmed by the fit statistics), the model standard error is usefully close to, but slightly smaller than, the actual standard error.
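For a dichotomous Rasch test, this model standard error is the reciprocal square root of the test information. A minimal sketch for a person measure, assuming the item difficulties are known (the test layout below is hypothetical):

```python
import math

def model_se(ability, difficulties):
    """Model ("ideal") standard error of a person measure on a
    dichotomous Rasch test: 1 / sqrt(test information), treating
    the item difficulties as known."""
    info = 0.0
    for d in difficulties:
        p = 1.0 / (1.0 + math.exp(-(ability - d)))  # Rasch success probability
        info += p * (1.0 - p)                        # information from this item
    return 1.0 / math.sqrt(info)

# A person at 0 logits on a hypothetical 25-item test spanning -2 to +2 logits:
items = [-2.0 + 4.0 * i / 24 for i in range(25)]
print(round(model_se(0.0, items), 2))
```

Off-target persons get larger standard errors, since items far from a person's ability contribute little information.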
3) Misfit-Inflated "Real" Standard Error
Wright and Panchapakesan (1969) discovered an important result for tests in which each examinee takes more than a handful of items, and each item is taken by more than a handful of examinees: the imprecision introduced into the target measure by using estimated measures for the non-target items and examinees is negligibly small. Consequently, in almost all data sets except those based on very short tests, it is only misfit of the data to the model that increases the standard errors noticeably above their model "ideal" errors.
Misfit to the model is quantified by fit statistics. But, according to the model, these fit statistics also have a stochastic component, i.e., some amount of misfit is expected in the data. Discovering "perfect" data immediately raises suspicions! Consequently, to treat every departure of a fit statistic from its ideal value as a failure of the data to fit the model is to take an unduly pessimistic position. What is useful, however, is to estimate "real" standard errors by enlarging the model "ideal" standard errors to reflect the misfit actually encountered in the data.
Recent work by Jack Stenner shows that the most useful misfit inflation formula is
Real S.E. = Model S.E. * Maximum [1.0, sqrt(INFIT mean-square)]
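As a sketch, the formula translates directly into code (the variable names are ours, not from any particular Rasch program):

```python
import math

def real_se(model_se, infit_mean_square):
    """Misfit-inflated "real" standard error: inflate the model S.E.
    by the square root of the INFIT mean-square, but never deflate it
    when the data over-fit the model (INFIT mean-square < 1)."""
    return model_se * max(1.0, math.sqrt(infit_mean_square))

print(round(real_se(0.25, 1.44), 2))  # misfit inflates: 0.25 * sqrt(1.44)
print(round(real_se(0.25, 0.81), 2))  # over-fit is not rewarded: 0.25 * 1.0
```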
In practice, this "Real" S.E. sets an upper bound on measure imprecision, and so a lower bound on "Real reliability". The actual S.E. lies between the "model" and "real" values. But since we generally try to minimize or eliminate the most aberrant features of a measurement system, we will probably begin by focusing attention on the "Real" S.E. as we establish that measurement system. Once we become convinced that the departures in the data from the model are primarily due to modelled stochasticity, then we may base our decision-making on the smaller "Model" S.E. values, and the higher "model reliability" values.
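The "real" and "model" reliabilities mentioned above follow from the usual separation-reliability formula: reliability = (observed variance - error variance) / observed variance. A sketch, with hypothetical measures and S.E. values, using population variance for simplicity:

```python
import statistics

def rasch_reliability(measures, standard_errors):
    """Rasch (separation) reliability: the proportion of observed
    measure variance that is "true" rather than error variance.
    Larger "real" S.E.s give a lower reliability bound; smaller
    "model" S.E.s give a higher one."""
    observed_var = statistics.pvariance(measures)
    error_var = statistics.mean(se**2 for se in standard_errors)
    return (observed_var - error_var) / observed_var

measures = [-1.0, 0.0, 1.0, 2.0]
print(round(rasch_reliability(measures, [0.3] * 4), 2))  # "model" reliability
print(round(rasch_reliability(measures, [0.4] * 4), 2))  # lower "real" reliability
```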
Agresti, A. (1993). Computing conditional maximum likelihood estimates (CMLE) for generalized Rasch models using simple log-linear models with diagonals parameters. Scandinavian Journal of Statistics, 20(1), 63-71.
Wright, B. D., & Panchapakesan, N. (1969). A procedure for sample-free item analysis. Educational and Psychological Measurement, 29, 23-48.
Which Standard Error? Item-specific or General? Ideal or Real? Wright BD. … Rasch Measurement Transactions, 1995, 9:2 p. 436
The URL of this page is www.rasch.org/rmt/rmt92n.htm