Equitable Test Equating

Unfortunately, we cannot construct one licensing test to administer to all candidates forever. We must construct a new test every year. But how can we compare candidate performances on different tests or across years? How can we maintain pass-fail standards? We need a practical and defensible method of test equating to make fair comparisons of scores from one test to another. But most equating methods fail to solve the basic equating problems.

1) The Test Length Problem. When Al gets 9 right on test A, while Bob gets 15 right on test B, which score means more? If the tests have the same number of items, Bob's 15 looks better than Al's 9. But if Al's test has only 10 items, while Bob's has 20, then Al's 90% looks better than Bob's 75%. To solve this problem, we could insist that all tests have the same number of items or always use percentages.
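
The arithmetic behind this example, sketched in Python purely for concreteness (the counts are those given above):

```python
# Raw counts mislead when test lengths differ; percent correct restores the comparison
# only if nothing else about the tests differs.
al_right, al_items = 9, 10      # Al's score on test A
bob_right, bob_items = 15, 20   # Bob's score on test B
print(f"Al:  {al_right}/{al_items}  = {al_right / al_items:.0%}")    # 90%
print(f"Bob: {bob_right}/{bob_items} = {bob_right / bob_items:.0%}")  # 75%
```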

2) The Test Difficulty Problem. What if test B is harder than test A? What if the easiest item on test B (where Bob got 75%) is harder than the hardest item on test A (where Al got 90%)? Then even one right answer on Bob's hard test could signify more ability than a 95% score on Al's easy test. Now Bob's 75% looks better than Al's 90%.

Does matching item success rates solve this problem? When an item is administered to a sample of candidates, some proportion succeed on the item. This proportion, the item "P-value", describes how easy the item was this time for this sample of candidates. When two items obtain the same P-value from the same sample at the same time, they have appeared equally difficult. We could construct equated tests from pairs of items that have the same P-value obtained from the same sample at the same time. Such perfect matching, however, is impractical and restrictive.
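
As a concrete illustration, an item's P-value is nothing more than a column mean of the 0/1 response matrix for the sample that took it. The tiny matrix below is invented for illustration, not taken from the article:

```python
import numpy as np

# Rows are candidates, columns are items; 1 = right, 0 = wrong.
# Each item's P-value is the proportion of this sample succeeding on it.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
])
p_values = responses.mean(axis=0)
print(p_values)   # [0.75 0.75 0.25 0.75] -- items 1, 2 and 4 "matched" this time, for this sample
```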

3) The Item Difficulty Distribution Problem. If we cannot match pairs of item P-values, why not just match average P-values of tests? Tests A and B are "equated" to have the same average P-value. Al gets 4 right on test A and Bob gets 4 right on test B. Have Al and Bob done equally well? If Al's test A has 8 items with P-values of .3, and 4 with P-values of .9, but Bob's test B has 12 items with P-values of .5, then both tests have an average P-value of .5. The only reasonable way Al scores 4 on test A, however, is by succeeding on the four easy items with P-values of .9. Bob's 4 successes must come from harder items with P-values of .5. Bob outperforms Al! Equating average item P-values does not equate tests.
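
The arithmetic of this example, sketched in Python (the P-values are those given in the text):

```python
# Both tests average P = .5, yet a raw score of 4 means different things.
test_a = [0.3] * 8 + [0.9] * 4   # Al's test: 8 hard items, 4 easy items
test_b = [0.5] * 12              # Bob's test: 12 middling items

print(round(sum(test_a) / len(test_a), 2))   # 0.5
print(round(sum(test_b) / len(test_b), 2))   # 0.5
# Al's most plausible route to 4 right is the four easy items at P = .9;
# Bob's 4 right can only come from items at P = .5 -- a stronger performance.
```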

4) The Sample Ability Problem. If we do not use the same sample to equate items, then the inevitably different sample ability distributions yield incomparable P-values. Differences in sample distributions, however slight, incapacitate all equating methods that rely on the fiction of identical sample distributions.

5) The Linear Scale Problem. To maintain standards from year to year, the scores from each year's examination must be equated in terms of the ability they represent. To measure growth or change from year to year, the results of each test must also be expressed on a shared scale that keeps the size of its unit the same from beginning to end. No raw score equating methods can provide the linear metric necessary for quantitative comparisons.

6) The Missing Data Problem. Most equating methods require complete data: every candidate must take every test item. This fiction, always compromised in paper-and-pencil testing, is completely insupportable in adaptive testing.

7) The Standard Error Problem. For realistic decision-making, not to mention statistical analysis of the results of testing, we need an estimate of the precision of each item difficulty calibration and of each candidate measure. Reliability coefficients, averaged over all candidates and all items, are deficient because they provide only one average value for the standard error of any score. The actual precision of any score varies widely around this average value and so this value is always incorrect for any particular score.
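
To see how much precision can vary from score to score, here is a small sketch that anticipates the Rasch formulation introduced below: the standard error of a measure is one over the square root of the test information at that measure. The item difficulties are assumed values chosen for illustration, not data from the article:

```python
import numpy as np

def rasch_se(theta, deltas):
    """Standard error of an ability estimate: 1/sqrt(test information)."""
    p = 1.0 / (1.0 + np.exp(-(theta - deltas)))   # expected success on each item
    return 1.0 / np.sqrt(np.sum(p * (1.0 - p)))   # information = sum of p(1-p)

deltas = np.linspace(-2.0, 2.0, 20)               # a 20-item test, difficulties in logits
for theta in (-3.0, -1.0, 0.0, 1.0, 3.0):
    print(f"measure {theta:+.1f} logits -> SE {rasch_se(theta, deltas):.2f} logits")
```

Candidates near the center of the test are measured most precisely; candidates far above or below it are measured much less precisely, so a single test-wide figure misstates both.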

8) The Quality Control Problem. Are the items in the tests cooperating to measure the same variable in the same way? Are candidates performing in the same way not only across tests, but also across items within tests? The quantitative validity of each measure is the extent to which the pattern of right and wrong answers fits the pattern of item difficulties. Fit statistics reveal items that are not working as intended and candidates whose performance is anomalous. Anomalies can be investigated and prevented from spoiling the equating process. If no check for anomalies is made, however, we have no idea whether our scores are meaningful accumulations or meaningless collections.
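
One common fit statistic is the outfit mean-square: the average of squared standardized residuals between observed responses and Rasch expectations. A minimal sketch, with the candidate measure and item difficulties assumed for illustration:

```python
import numpy as np

def outfit_meansquare(x, theta, deltas):
    """Outfit mean-square for one candidate's 0/1 responses x."""
    p = 1.0 / (1.0 + np.exp(-(theta - deltas)))   # Rasch expected success rates
    z2 = (x - p) ** 2 / (p * (1.0 - p))           # squared standardized residuals
    return z2.mean()

deltas = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])    # item difficulties in logits
theta = 0.0                                       # candidate measure

orderly   = np.array([1, 1, 1, 0, 0])   # right on the easy items, wrong on the hard ones
anomalous = np.array([0, 0, 1, 1, 1])   # wrong on the easy items, right on the hard ones

print(round(outfit_meansquare(orderly, theta, deltas), 2))    # about 0.4: consistent pattern
print(round(outfit_meansquare(anomalous, theta, deltas), 2))  # about 4.2: flagged as anomalous
```

Values near or below 1 indicate a response pattern consistent with the measure; values well above 1 flag anomalies such as lucky guesses on very hard items or careless errors on very easy ones.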

Rasch's Equating Invention

In 1953 Danish mathematician Georg Rasch was asked to equate some reading tests. He devised a method for estimating item difficulties entirely free of the effects of the abilities of the candidates who happen to respond to the items. His method also estimates candidate measures entirely free of the effects of the difficulties of whichever items elicit their responses.

Rasch saw that, while he could not determine exactly how a candidate would respond to an item, it should be possible to estimate the candidate's probability of success on that item. He also saw that, for useful results, this probability of a right answer must not be governed by anything except the candidate's ability and the item's difficulty. He deduced this formula:
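
In standard notation (the symbols are the conventional choices, with $B_n$ the ability of candidate $n$ and $D_i$ the difficulty of item $i$):

$$P\{x_{ni}=1\} \;=\; \frac{\exp(B_n - D_i)}{1 + \exp(B_n - D_i)}
\qquad\Longleftrightarrow\qquad
\log\!\left[\frac{P_{ni}}{1 - P_{ni}}\right] \;=\; B_n - D_i$$

The log-odds of a right answer depend on nothing but the difference between the candidate's ability and the item's difficulty, which is exactly the requirement stated above.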

Rasch's formula is the only practical way to solve the eight equating problems. Data from different tests taken by different candidates can be combined and analyzed together, so long as there is some network of commonalities (candidates and/or items) linking the tests. This combined analysis provides a calibration, standard error and fit statistic for every item, and a measure, standard error and fit statistic for every candidate involved in any of the testings. These item calibrations and candidate measures are completely equated because they are all expressed at once on one common linear scale. Once a bank of items has been calibrated, inclusion of items from the bank into each new test automatically equates that test to the common metric of the bank, and so to all other tests derived from that bank.
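
As a rough illustration of how such a combined analysis can work, here is a minimal sketch of concurrent calibration by joint (unconditional) maximum likelihood on a sparse candidate-by-item matrix, written in Python with NumPy. Everything in it, the simulated forms, sample sizes, and estimation loop, is an assumption made for the sketch, not Wright's procedure or any particular program's algorithm; real Rasch software adds bias corrections, convergence checks, proper treatment of extreme scores, standard errors, and fit statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_p(theta, delta):
    """Probability of success: candidates (rows) by items (columns)."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :])))

# --- simulate two test forms linked by a block of common items -----------
n_per_form, n_items = 200, 15
true_theta = np.concatenate([rng.normal(-0.5, 1.0, n_per_form),
                             rng.normal(+0.5, 1.0, n_per_form)])
true_delta = np.linspace(-2.0, 2.0, n_items)

taken = np.zeros((2 * n_per_form, n_items), dtype=bool)
taken[:n_per_form, :10] = True     # Form A candidates take items 0-9
taken[n_per_form:, 5:] = True      # Form B candidates take items 5-14 (5-9 are common)

prob = rasch_p(true_theta, true_delta)
x = np.where(taken, (rng.random(prob.shape) < prob).astype(float), 0.0)

# --- joint maximum likelihood: alternate Newton steps --------------------
theta = np.zeros(2 * n_per_form)   # candidate measures (logits)
delta = np.zeros(n_items)          # item calibrations (logits)

for _ in range(100):
    p = rasch_p(theta, delta)
    info = p * (1.0 - p) * taken
    resid = (x - p) * taken
    # update candidate measures; clipping is a crude guard for perfect/zero scores
    theta = np.clip(theta + resid.sum(1) / np.maximum(info.sum(1), 1e-9), -6, 6)

    p = rasch_p(theta, delta)
    info = p * (1.0 - p) * taken
    resid = (x - p) * taken
    delta = delta - resid.sum(0) / np.maximum(info.sum(0), 1e-9)
    delta -= delta.mean()          # fix the origin of the common scale

print("correlation of recovered and generating item difficulties:",
      round(float(np.corrcoef(delta, true_delta)[0, 1]), 3))
```

Because the two simulated forms share a block of common items, every item and every candidate lands on one linear scale, even though no candidate took every item.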

An Example of Successful Equating

The University of Chicago Center for School Improvement equated 36 tests covering math from 1st to 8th grades. Each test was administered to a small but relevant fraction of the total sample of 1st to 8th grade pupils. Since the sequence of successively more difficult tests overlapped in the items they contained, Rasch analysis could equate the 36 tests by placing the difficulties of the 3,000 different items and the abilities of 12,000 different pupils on one common linear scale. The results of this one-step (concurrent) equating of the 36 math tests are now applied to 2 million test records collected over 8 years in order to make a quantitative study of individual growth in math from 1st to 8th grade.

Benjamin D. Wright

Equitable test equating. Wright BD. Rasch Measurement Transactions, 1993, 7:2 p.298-9


