Resolving the attenuation paradox

"In general, there appears to be no reason that measures of reliability and validity should be monotone increasing functions of each other" (Sitgreaves, 1961).

Up to a point, reliability and validity increase together, but then any further increase in reliability decreases validity. This is the attenuation paradox (RMT 6(4) p. 257, RMT 7(2) 294). The attenuation paradox appears most clearly in the context of item selection and test construction. In practice, the problem is how to select those items that will simultaneously increase both the reliability and validity of the total test scores.
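The paradox is easy to see in a small simulation (a hypothetical sketch, not from the original article): persons and dichotomous items are generated from a two-parameter logistic model in which every item shares a common discrimination `a` and a common difficulty of 0. As `a` grows, internal consistency (Cronbach's alpha) keeps rising, but the correlation of total scores with the latent trait, used here as a stand-in for validity, peaks at a moderate `a` and then falls.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 20000, 25                      # persons, items
theta = rng.standard_normal(N)        # latent trait, N(0, 1)

def simulate(a):
    """Simulate 2PL responses with common discrimination a and all
    difficulties 0; return (Cronbach's alpha, corr of total with trait)."""
    p = 1.0 / (1.0 + np.exp(-a * theta[:, None]))      # N x K success probabilities
    x = (rng.random((N, K)) < p).astype(float)         # 0/1 item responses
    total = x.sum(axis=1)
    # Cronbach's alpha: internal-consistency reliability of the total score
    alpha = K / (K - 1) * (1.0 - x.var(axis=0).sum() / total.var())
    # "Validity" proxy: correlation of total scores with the generating trait
    validity = np.corrcoef(total, theta)[0, 1]
    return alpha, validity

for a in [0.5, 1.0, 2.0, 5.0, 20.0]:
    alpha, validity = simulate(a)
    print(f"a = {a:4}:  alpha = {alpha:.3f}   validity = {validity:.3f}")
```

In runs of this sketch, alpha climbs monotonically toward 1 across the grid, while validity is highest around moderate discrimination and declines for the steepest items: extremely discriminating items at one difficulty turn the total score into a near step function of the trait, which is highly replicable but carries little information across the trait continuum.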

From the perspective of Rasch measurement, there is a simple solution to the attenuation paradox. Useful invariant measurement requires items to have similar discrimination and stochasticity, but different difficulties. The elimination of both low- and high-discriminating items (Andrich, 1988) maximizes validity while optimizing reliability.
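The requirement of similar discrimination can be illustrated with a short sketch (hypothetical parameter values, not from the original article): when items share one discrimination, their characteristic curves never cross, so the difficulty ordering of items is the same for every person; once discriminations differ, the curves cross and "which item is harder" depends on where on the trait continuum you ask.

```python
import numpy as np

def icc(theta, b, a=1.0):
    """Item characteristic curve: P(correct | theta) with difficulty b,
    discrimination a (a = 1 for every item gives the Rasch model)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

thetas = np.linspace(-3.0, 3.0, 121)

# Rasch case: equal discrimination, different difficulties.
# The easier item is more likely to be answered correctly at EVERY theta.
easy, hard = icc(thetas, b=-1.0), icc(thetas, b=1.0)
print(bool(np.all(easy > hard)))          # True: invariant item ordering

# 2PL case: unequal discriminations. The curves cross, so the
# item ordering reverses somewhere along the trait continuum.
flat  = icc(thetas, b=-1.0, a=0.4)
steep = icc(thetas, b=1.0,  a=2.5)
crossings = int(np.sum(np.diff(np.sign(flat - steep)) != 0))
print(crossings >= 1)                     # True: ordering is not invariant
```

The first comparison is what "invariant measurement" buys: item difficulty order holds for all persons. The second shows what variable discrimination costs, and why Rasch-based test construction screens out items whose curves cross the rest.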

Since classical test theory (CTT) focuses primarily on total scores, it offers no unambiguous guidelines for accomplishing this goal. Even making the item, rather than the total score, the unit of analysis does not resolve the attenuation paradox, because eliminating highly discriminating items goes against the "conventional wisdom" of many psychometricians trained in the CTT tradition. "Discarding the most as well as the least discriminating items also goes against one's instincts in test construction" (Cliff, 1989, p. 77). Within CTT, the higher the discrimination index, the better the item (Ebel, 1979). Consequently, a more palatable solution for scientists trained in the CTT tradition is to attempt to control variation in item discrimination by adding another item parameter to the model.

The inclusion of an item-discrimination parameter in Birnbaum's two-parameter model reflects the historical influence of the CTT tradition on modern IRT; despite the attenuation paradox, ideas from CTT still shape measurement practice. The two-parameter IRT model attempts a statistical adjustment of test scores to account for variability in item discrimination, and this is thought to resolve the paradox. But the price of maintaining a commitment to an antiquated concept of item quality is that the two-parameter model produces ordinal scales rather than interval measures (Cliff, 1989). Nevertheless, many "modern" psychometricians still refuse to accept the implications of the attenuation paradox for modern measurement theory and practice.

Rasch measurement, on the other hand, sets out clear guidelines for test construction that lead to the elimination of items with extreme discrimination parameters. This resolves the attenuation paradox and provides the opportunity to obtain interval scales by bringing the data into conformity with the Rasch model. In practice, however, test constructors should not eliminate items without further thought. We should explore why some items are more or less discriminating. Masters (1988) presents a compelling case for viewing item discrimination as a type of item bias that may result from individual differences related to opportunity to learn, opportunity to answer, and test-wiseness.

Professor George Engelhard, Jr.
Emory University
Division of Educational Studies
Atlanta, GA 30322

Andrich, D. (1988, April). A scientific revolution in social measurement. Paper presented at the annual meeting of the American Educational Research Association, New Orleans.

Cliff, N. (1989). Ordinal consistency and ordinal true scores. Psychometrika, 54(1), 75-91.

Ebel, R.L. (1979). Essentials of educational measurement. Englewood Cliffs, NJ: Prentice-Hall.

Masters, G.N. (1988). Item discrimination: when more is worse. Journal of Educational Measurement, 25(1), 15-29.

Sitgreaves, R. (1961). A statistical formulation of the attenuation paradox in test theory. In H. Solomon (Ed.), Studies in item analysis and prediction (pp. 17-28). Stanford, CA: Stanford University Press.

Resolving the attenuation paradox. Engelhard G Jr. … Rasch Measurement Transactions, 1994, 8:3 p.379
