The Efficacy of Warm's Weighted Mean Likelihood Estimate (WLE) Correction to Maximum Likelihood Estimate (MLE) Bias

Thomas Warm (1989) reports that "Lord (1983) found that maximum likelihood estimates of θ [person ability] are biased outward" and then restates Lord's expression for the size of this bias:

Bias (θMLE) = - J / ( 2 * I² )

where, for dichotomous Rasch items,
J = Σ Pθi (1-Pθi ) (1-2Pθi )
I = Σ Pθi (1-Pθi )
summed over all items, i = 1, ..., L, in the test, where Pθi is the Rasch-model probability of success of a person of ability θ on item i.

The corrected estimate is θWLE = θMLE + J / ( 2 * I² ), which is almost always closer to the item mean than θMLE.
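In code, the correction is a direct consequence of the two sums above. Here is a minimal Python sketch for dichotomous items (an illustration, not Warm's or Linacre's implementation; the function names are my own):

```python
import math

def rasch_p(theta, d):
    """Rasch probability of success for ability theta on an item of difficulty d."""
    return 1.0 / (1.0 + math.exp(d - theta))

def wle_correction(theta, difficulties):
    """Return (bias, corrected estimate) from Lord's expression:
    bias = -J / (2 * I**2), so theta_WLE = theta_MLE + J / (2 * I**2)."""
    ps = [rasch_p(theta, d) for d in difficulties]
    J = sum(p * (1 - p) * (1 - 2 * p) for p in ps)
    I = sum(p * (1 - p) for p in ps)
    bias = -J / (2 * I ** 2)
    return bias, theta - bias

# 25 items spaced 0.2 logits apart, centered on 0 logits (the test posited below)
items = [0.2 * (i - 12) for i in range(25)]
bias, theta_wle = wle_correction(2.0, items)   # bias > 0 above the test center
```

For an MLE above the test center the bias is positive (outward), so the corrected estimate is pulled back toward the item mean; at the test center the bias is zero by symmetry.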

How effective is this bias correction? Warm uses a Monte Carlo study to demonstrate its effectiveness, but an exact algebraic investigation can be conducted.

Dichotomous Items

I posited a test of 25 items, with its item difficulties uniformly spaced 0.2 logits apart. Figure 1 shows the locations (x-axis) of the items on the 25-item test. The item difficulties are centered on 0 logits.

Applying the MLE method of Wright & Douglas (1996) for estimating θ from known item difficulties, a Rasch ability estimate, M(s), is obtained for each possible raw score, s = 0-25, on the test of 25 items. Since the estimates corresponding to s=0 and s=25 are infinite, they are replaced by estimates corresponding to s=0.3 and s=24.7 score-points. The MLE ability estimates are shown in Figure 1.
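The anchored-MLE step can be sketched as a Newton-Raphson root-finder, solving "raw score = expected score" for θ (a simplified stand-in for the Wright & Douglas routine, not their published code):

```python
import math

def mle_from_score(score, difficulties, tol=1e-6):
    """Newton-Raphson solution of  score = sum_i P_i(theta)  for theta,
    with known item difficulties. Extreme scores 0 and L must be adjusted
    beforehand (e.g. to 0.3 and L - 0.3, as in the text)."""
    theta = 0.0
    for _ in range(100):
        ps = [1.0 / (1.0 + math.exp(d - theta)) for d in difficulties]
        expected = sum(ps)               # expected raw score at theta
        info = sum(p * (1 - p) for p in ps)  # test information I
        step = (score - expected) / info
        theta += step
        if abs(step) < tol:
            break
    return theta

items = [0.2 * (i - 12) for i in range(25)]
m13 = mle_from_score(13.0, items)   # estimate for a score just above center
```

Because the difficulties are centered on 0 logits, the central score of 12.5 maps to an estimate of 0 logits.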

Figure 1. MLE and WLE for 25 dichotomous items.

Warm's bias correction is applied to each MLE estimate, M(s), to produce Warm's Weighted Likelihood Estimate (WLE), W(s). See Figure 1. The WLE estimates are more central than the MLE estimates, except for the estimates corresponding to scores of 0.3 and 24.7, where the MLE estimates are used unchanged.

Under Rasch model conditions, each raw score, s, on a given set of items, corresponds to one estimated ability θ(s), but each true (generating) ability corresponds to all possible raw scores. For 25 items, there are 2^25 = 33,554,432 possible different response strings. According to the Rasch model, each of these response strings has a finite probability of being observed for each generating ability.

Probability of response string n for ability θ = Pnθ = Π [ exp( xni (θ - di) ) / (1 + exp(θ - di) ) ]

for i = 1 to 25, where xni is the scored 0,1 response to item i in response string n, and di is the difficulty of item i.

Response string n has a raw score of s = Σ xni for i = 1 to 25. Score s has an MLE estimate of Mn = M(s) and a WLE estimate of Wn = W(s).

The expected values of the estimates corresponding to each generating value can now be computed:

Expectation (MLE(θ)) = Σ Pnθ Mn for n = 1 to 2^25

Expectation (WLE(θ)) = Σ Pnθ Wn for n = 1 to 2^25
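These sums need not be computed by enumerating all 2^25 response strings: every string with the same raw score s has the same estimate, so the expectation collapses to Σs P(s|θ) M(s), with the score probabilities P(s|θ) obtained by the standard item-by-item recursion. A Python sketch of that shortcut (my own illustration, not the article's code):

```python
import math

def score_distribution(theta, difficulties):
    """P(raw score = s | theta) for s = 0..L, via the standard recursion:
    fold in one item at a time, splitting each score's probability between
    'item failed' and 'item succeeded'. All 2^L response strings with the
    same score share an estimate, so expectations reduce to these L+1 terms."""
    probs = [1.0]                          # P(score s) after 0 items
    for d in difficulties:
        p = 1.0 / (1.0 + math.exp(d - theta))
        new = [0.0] * (len(probs) + 1)
        for s, q in enumerate(probs):
            new[s] += q * (1 - p)          # item failed: score unchanged
            new[s + 1] += q * p            # item succeeded: score + 1
        probs = new
    return probs

items = [0.2 * (i - 12) for i in range(25)]
dist = score_distribution(1.5, items)
# Expectation of any score-based estimate M(s):  sum_s dist[s] * M(s)
```

The same collapse applies to the WLE expectation, with W(s) in place of M(s).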

These values are plotted in Figure 1 for θ in the range -6 to +6 logits. The WLE ogive coincides with the identity line through the generating values for most of its range. The MLE ogive is slightly less central (as Fred Lord predicted). The WLE bias correction is effective over the entire range of MLE estimates for non-extreme scores (-4 to +4 logits). The biggest bias correction is 0.23 logits, at a generating value of 3.6 logits, as shown in Figure 2. This is less than half the standard error of each estimate, which is close to 0.5 logits for most of the range. We can also see that, for "true" generating abilities within 2 logits of the center of the test, the MLE bias is less than 0.1 logits, and so negligible for practical purposes.

Figure 2. Detail of Figure 1 showing MLE bias.

Similar investigations for tests of 2 to 24 items demonstrated that the WLE bias correction is effective for tests of 7 or more dichotomous items.

Polytomous Items

We can apply the same logic to Rasch-model polytomous items.

Bias (θMLE) = - J / ( 2 * I² )

J = Σ Σ ( P'θik P"θik ) / Pθik = Σ ( (Σ k³ Pθik ) - 3 (Σ k² Pθik )(Σ k Pθik ) + 2 (Σ k Pθik )³ )

I = Σ ( (Σ k² Pθik ) - (Σ k Pθik )² )

where Pθik is the Rasch-model probability of a person of ability θ being observed in category k of item i, and the summations are over the items, i = 1, ..., L, and the polytomous categories, k = 0, ..., m.

As before, the corrected estimate is θWLE = θMLE + J / ( 2 * I² ), which is almost always closer to the item mean than θMLE.
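A Python sketch of these polytomous sums, assuming the Andrich rating-scale parameterization for the category probabilities (the function names and threshold values are illustrative, not the article's):

```python
import math

def rsm_category_probs(theta, d, taus):
    """Andrich rating-scale category probabilities for one item:
    P(k) proportional to exp( sum_{j<=k} (theta - d - tau_j) ), with an
    empty sum (= 0) for category 0."""
    logits = [0.0]
    total = 0.0
    for tau in taus:
        total += theta - d - tau
        logits.append(total)
    m = max(logits)                        # stabilize the exponentials
    es = [math.exp(x - m) for x in logits]
    z = sum(es)
    return [e / z for e in es]

def polytomous_bias(theta, difficulties, taus):
    """Lord-type MLE bias, -J / (2 * I**2), using the moment forms of J and I
    given in the text. For a dichotomous item (k = 0,1) these reduce to
    p(1-p)(1-2p) and p(1-p), matching the dichotomous formulas above."""
    J = I = 0.0
    for d in difficulties:
        ps = rsm_category_probs(theta, d, taus)
        m1 = sum(k * p for k, p in enumerate(ps))
        m2 = sum(k * k * p for k, p in enumerate(ps))
        m3 = sum(k ** 3 * p for k, p in enumerate(ps))
        J += m3 - 3 * m2 * m1 + 2 * m1 ** 3
        I += m2 - m1 ** 2
    return -J / (2 * I ** 2)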

The results of this investigation are shown in Figure 3, with items spaced 0.1 logits apart and thresholds 1 logit apart. Using the estimation procedure of Linacre (1998), the results are similar to the findings for dichotomous items in Figure 1.

Warm's bias correction is seen to be efficacious for the correction of MLE bias across the useful measurement range of the items, but MLE bias is also seen to be inconsequential for most practical purposes.

Figure 3. MLE and WLE for 12 four-category items.

John M. Linacre

Linacre J.M. (1998) Estimating Rasch measures with known polytomous (or rating scale) item difficulties: Anchored Maximum Likelihood Estimation (AMLE). Rasch Measurement Transactions, 12:2, 638.

Lord F.M. (1983) Unbiased estimators of ability parameters, of their variance, and of their parallel-forms reliability. Psychometrika, 48, 2, 233-245.

Warm T.A. (1989) Weighted Likelihood Estimation of Ability in Item Response Theory. Psychometrika, 54, 427-450.

Wright B.D., Douglas G.A. (1996) Estimating measures with known dichotomous item difficulties. Rasch Measurement Transactions, 10:2, 499.

Linacre J.M. (2009) The Efficacy of Warm's Weighted Mean Likelihood Estimate (WLE) Correction to Maximum Likelihood Estimate (MLE) Bias, Rasch Measurement Transactions, 2009, 23:1, 1188-9

