Thomas Warm (1989) reports that "Lord (1983) found that maximum likelihood estimates of θ [person ability] are biased outward" and then he restates Lord's expression for the size of this bias:
Bias (θ_{MLE}) = - J / ( 2 * I^{2} )
where, for dichotomous Rasch items,
J = Σ P_{θi} (1 - P_{θi}) (1 - 2P_{θi})
I = Σ P_{θi} (1 - P_{θi})
summed over all items, i = 1, ..., L, in the test, where P_{θi} is the Rasch-model probability of success of a person of ability θ on item i.
The corrected estimate is θ_{WLE} = θ_{MLE} + J / ( 2 * I^{2} ), which is almost always closer to the mean item difficulty than θ_{MLE}.
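For readers who want to experiment, Lord's bias terms and Warm's correction for dichotomous Rasch items can be sketched in a few lines of Python. This is an illustrative sketch, not the author's code; the function name and the example ability of 2.0 logits are my own choices.

```python
import math

def wle_correct(theta_mle, difficulties):
    """Warm's correction applied to an MLE ability estimate for
    dichotomous Rasch items: theta_WLE = theta_MLE + J / (2 * I**2)."""
    p = [1.0 / (1.0 + math.exp(-(theta_mle - d))) for d in difficulties]
    info = sum(pi * (1.0 - pi) for pi in p)                  # I = sum P(1-P)
    j = sum(pi * (1.0 - pi) * (1.0 - 2.0 * pi) for pi in p)  # J = sum P(1-P)(1-2P)
    return theta_mle + j / (2.0 * info ** 2)

# a 25-item test like the article's: difficulties 0.2 logits apart, centered on 0
items = [0.2 * (i - 12) for i in range(25)]
print(wle_correct(2.0, items))  # slightly below 2.0: pulled toward the test center
```

With symmetric item difficulties, J = 0 at the center of the test, so the correction vanishes there and grows toward the extremes.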
How effective is this bias correction? Warm uses a Monte Carlo study to demonstrate its effectiveness, but an exact algebraic investigation can be conducted.
Dichotomous Items
I posited a test of 25 items, with its item difficulties uniformly spaced 0.2 logits apart. Figure 1 shows the locations (x-axis) of the items on the 25-item test. The item difficulties are centered on 0 logits.
Applying the MLE method of Wright & Douglas (1996) for estimating θ from known item difficulties, a Rasch ability estimate, M(s), is obtained for each possible raw score, s = 0-25, on the test of 25 items. Since the estimates corresponding to s=0 and s=25 are infinite, they are replaced by the estimates corresponding to 0.3 and 24.7 score-points. The MLE ability estimates are shown in Figure 1.
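The MLE estimation behind this step solves Σ P_{θi} = s for θ. A minimal Newton-Raphson sketch (my own illustration under that assumption, not Wright & Douglas's exact algorithm) is:

```python
import math

def mle_ability(score, difficulties, tol=1e-6):
    """Solve sum of P_i(theta) = score for theta by Newton-Raphson.
    score may be fractional (e.g. 0.3 or 24.7 substituted for the
    extreme scores 0 and 25, whose MLE estimates are infinite)."""
    theta = 0.0
    for _ in range(100):
        p = [1.0 / (1.0 + math.exp(-(theta - d))) for d in difficulties]
        expected = sum(p)                        # model-expected raw score
        info = sum(pi * (1.0 - pi) for pi in p)  # test information at theta
        step = (score - expected) / info
        theta += step
        if abs(step) < tol:
            break
    return theta

items = [0.2 * (i - 12) for i in range(25)]
estimates = [mle_ability(s, items) for s in [0.3] + list(range(1, 25)) + [24.7]]
```

The resulting table of M(s) values, one per raw score, is all that is needed for the expectation calculations that follow.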
Figure 1. MLE and WLE for 25 dichotomous items.
Warm's bias correction is applied to each MLE estimate, M(s), to produce a Warm's Weighted Likelihood Estimate (WLE) value, W(s). See Figure 1. The WLE estimates are more central than the MLE estimates, except for those corresponding to scores of 0.3 and 24.7, where the MLE estimates are used unchanged.
Under Rasch model conditions, each raw score, s, on a given set of items, corresponds to one estimated ability θ(s), but each true (generating) ability corresponds to all possible raw scores. For 25 items, there are 2^25 = 33,554,432 possible different response strings. According to the Rasch model, each of these response strings has a finite probability of being observed for each generating ability.
Probability of response string n for ability θ = P_{nθ} = Π exp( x_{ni} (θ - d_i) ) / ( 1 + exp(θ - d_i) )
for i = 1 to 25, where x_{ni} is the scored 0,1 response to item i in response string n, and d_i is the difficulty of item i.
Response string n has a raw score of s = Σ xni for i = 1 to 25. Score s has an MLE estimate of Mn = M(s) and a WLE estimate of Wn = W(s).
The expected values of the estimates corresponding to each generating value can now be computed:
Expectation (MLE(θ)) = Σ Pnθ Mn for n = 1 to 2^25
Expectation (WLE(θ)) = Σ Pnθ Wn for n = 1 to 2^25
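In practice this 2^25-term sum need not be computed string by string: every response string with the same raw score s receives the same estimate, so the sum collapses to Σ_s P(s|θ) M(s), with P(s|θ) obtained by convolving the items one at a time. A sketch of that shortcut (the function names are mine, for illustration):

```python
import math

def score_distribution(theta, difficulties):
    """P(raw score = s | theta) for independent dichotomous Rasch items,
    built by convolving items one at a time; this collapses the 2^L
    response strings into L+1 raw-score probabilities."""
    probs = [1.0]                      # distribution after 0 items
    for d in difficulties:
        p = 1.0 / (1.0 + math.exp(-(theta - d)))
        nxt = [0.0] * (len(probs) + 1)
        for s, q in enumerate(probs):
            nxt[s] += q * (1.0 - p)    # item failed: score unchanged
            nxt[s + 1] += q * p        # item passed: score + 1
        probs = nxt
    return probs

def expected_estimate(theta, difficulties, estimates):
    """Expectation of the estimate for generating ability theta,
    where estimates[s] is M(s) or W(s) for raw score s."""
    dist = score_distribution(theta, difficulties)
    return sum(ps * est for ps, est in zip(dist, estimates))

items = [0.2 * (i - 12) for i in range(25)]
# sanity check: with estimates[s] = s, the expectation is the
# model-expected raw score, sum of P_i
print(expected_estimate(1.0, items, list(range(26))))
```

Feeding the M(s) or W(s) table into expected_estimate() for a grid of θ values reproduces the expectation curves plotted in Figure 1.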
These values are plotted in Figure 1 for θ in the range -6 logits to +6 logits. The WLE ogive coincides with the identity line through the generating values over most of its range. The MLE ogive is slightly less central (as Fred Lord predicted). We can see that the WLE bias correction is effective over the entire range of MLE estimates for non-extreme scores (-4 to +4 logits). The biggest bias correction is 0.23 logits, at a generating value of 3.6 logits, as shown in Figure 2. This is less than half the size of the standard error of each estimate, which is close to 0.5 logits for most of the range. We can also see that, for "true" generating abilities within 2 logits of the center of the test, the MLE bias is less than 0.1 logits, and so negligible for practical purposes.
Figure 2. Detail of Figure 1 showing MLE bias.
Similar investigations for tests of 2 to 24 items demonstrated that the WLE bias correction is effective for tests of 7 or more dichotomous items.
Polytomous Items
We can apply the same logic to Rasch-model polytomous items.
Bias (θ_{MLE}) = - J / ( 2 * I^{2} )
J = Σ Σ P'_{θik} P''_{θik} / P_{θik} = Σ ( (Σ k³ P_{θik}) - 3 (Σ k² P_{θik}) (Σ k P_{θik}) + 2 (Σ k P_{θik})³ )
I = Σ ( (Σ k² P_{θik}) - (Σ k P_{θik})² )
where P_{θik} is the Rasch-model probability of a person of ability θ being observed in category k of item i, the outer summations are over items i = 1, ..., L, and the inner summations are over categories k = 0, ..., m.
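These moment sums are straightforward to compute from the category probabilities. The sketch below assumes a rating-scale parameterization in which each item's category probabilities follow the Rasch-Andrich form; the function names and example threshold values are mine, for illustration only.

```python
import math

def category_probs(theta, thresholds):
    """Rasch rating-scale category probabilities for one item:
    P(k) proportional to exp( sum over j<=k of (theta - tau_j) ),
    with thresholds = [tau_1, ..., tau_m]."""
    logits = [0.0]
    for tau in thresholds:
        logits.append(logits[-1] + theta - tau)
    mx = max(logits)                       # subtract max for numerical stability
    e = [math.exp(v - mx) for v in logits]
    total = sum(e)
    return [v / total for v in e]

def polytomous_bias_terms(theta, items):
    """I and J from the category-score moments in the text:
    I = sum_i ( E[k^2] - E[k]^2 ),
    J = sum_i ( E[k^3] - 3 E[k^2] E[k] + 2 E[k]^3 )."""
    total_i = total_j = 0.0
    for thresholds in items:
        probs = category_probs(theta, thresholds)
        m1 = sum(k * p for k, p in enumerate(probs))
        m2 = sum(k * k * p for k, p in enumerate(probs))
        m3 = sum(k ** 3 * p for k, p in enumerate(probs))
        total_i += m2 - m1 * m1
        total_j += m3 - 3.0 * m2 * m1 + 2.0 * m1 ** 3
    return total_i, total_j

# 12 four-category items (3 thresholds each), thresholds 1 logit apart
items = [[d - 1.0, d, d + 1.0] for d in (0.1 * (i - 5.5) for i in range(12))]
i_val, j_val = polytomous_bias_terms(0.5, items)
theta_wle = 0.5 + j_val / (2.0 * i_val ** 2)
```

A convenient check: for a single item with one threshold at 0, these moments reduce to the dichotomous I = P(1-P) and J = P(1-P)(1-2P) given earlier.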
The corrected estimate is again θ_{WLE} = θ_{MLE} + J / ( 2 * I^{2} ), which is almost always closer to the mean item difficulty than θ_{MLE}.
Figure 3 shows the results of this investigation for a test of 12 four-category items, with item difficulties 0.1 logits apart and thresholds 1 logit apart. Using the estimation procedure of Linacre (1998), the results are similar to the findings for dichotomous items in Figure 1.
Warm's bias correction is seen to be efficacious for the correction of MLE bias across the useful measurement range of the items, but MLE bias is also seen to be inconsequential for most practical purposes.
Figure 3. MLE and WLE for 12 four-category items.
John M. Linacre
Linacre J.M. (1998) Estimating Rasch measures with known polytomous (or rating scale) item difficulties: Anchored Maximum Likelihood Estimation (AMLE). Rasch Measurement Transactions, 12:2, 638.
Lord F.M. (1983) Unbiased estimators of ability parameters, of their variance, and of their parallel-forms reliability. Psychometrika, 48, 233-245.
Warm T.A. (1989) Weighted likelihood estimation of ability in item response theory. Psychometrika, 54, 427-450.
Wright B.D., Douglas G.A. (1996) Estimating measures with known dichotomous item difficulties. Rasch Measurement Transactions, 10:2, 499.
Linacre J.M. (2009) The Efficacy of Warm's Weighted Mean Likelihood Estimate (WLE) Correction to Maximum Likelihood Estimate (MLE) Bias, Rasch Measurement Transactions, 2009, 23:1, 1188-9
The URL of this page is www.rasch.org/rmt/rmt231d.htm