MEASUREMENT RESEARCH ASSOCIATES
TEST INSIGHTS
October 2008
Greetings! 
 
A multiple-choice certification exam may have acceptable reliability, yet it can often be improved where it matters most: the precision of candidate ability estimates near the cut score. The article below describes how.
 
Ross Brown
Manager, Computer Based Testing and Analysis
Targeting Multiple Choice Examinations for Accurate Outcomes
The purpose of certification examinations is to make accurate pass or fail decisions about candidates. The more information is available about candidates close to the cut score, the more accurate those pass/fail decisions are. From the Rasch perspective, items with difficulties close to the cut score yield the most information about candidates near the cut score, so including more such items improves decision accuracy. This is often called test targeting.
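The logic behind targeting can be seen in the Rasch item information function: an item provides the most information when its difficulty matches the candidate's ability. A minimal sketch (the 0.4-logit cut score and the difficulty values are hypothetical, chosen only for illustration):

```python
import math

def rasch_information(ability, difficulty):
    """Fisher information of a Rasch (1-PL) item: I = p * (1 - p),
    where p is the model probability of a correct response."""
    p = 1.0 / (1.0 + math.exp(-(ability - difficulty)))
    return p * (1.0 - p)

# Information peaks (at 0.25) when item difficulty equals candidate ability,
# and falls off as the item drifts away from the cut score.
cut_score = 0.4  # hypothetical cut score, in logits
for d in (-2.0, -1.0, 0.4, 1.0, 2.0):
    print(f"difficulty {d:+.1f}: information {rasch_information(cut_score, d):.3f}")
```

An off-target item (two logits from the cut) contributes roughly half the information of an on-target one, which is why replacing extreme items sharpens the ability estimates that matter for the pass/fail decision.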
 
On most tests, item difficulties range from very easy to very difficult. For test targeting, items that are too easy or too difficult are replaced with items whose difficulties are close to the pass point - those answered correctly by between 30% and 80% of candidates. The Wright Map in Figure 1, produced by the Winsteps program (Linacre, 1989), is ideal for tracking the item difficulty distribution relative to the pass point. Because candidate measures and item difficulties are on the same scale, the distribution of candidate measures can be compared directly to the distribution of item difficulties. In the Wright Map below, the items within the bracket on the right are the targeted items.
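As a rough illustration of how that p-value band maps onto the logit scale of the Wright Map (a simplification that treats the candidate group as centered at a single ability; operational calibration uses full Rasch estimation in a program such as Winsteps):

```python
import math

def item_difficulty_from_pvalue(p_correct, mean_ability=0.0):
    """Approximate logit difficulty of an item from its proportion correct,
    assuming candidates centered at mean_ability. A back-of-the-envelope
    conversion, not a substitute for real Rasch calibration."""
    return mean_ability - math.log(p_correct / (1.0 - p_correct))

# Items answered correctly by 30%-80% of candidates span roughly
# -1.4 logits (easy end) to +0.8 logits (hard end) around the group mean:
for p in (0.30, 0.50, 0.80):
    b = item_difficulty_from_pvalue(p)
    print(f"p-value {p:.2f} -> difficulty {b:+.2f} logits")
```

So the 30%-80% band marks out a window of roughly two logits around the candidates, which is the region bracketed as "Targeted Items" in Figure 1.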
 
There are costs to targeting tests. Item writers often find it difficult to write items that perform at this level, and as tests become more targeted, they appear more difficult to candidates. However, common-item test equating allows more difficult test forms to be developed while maintaining the pass point as an absolute standard. The table below shows two examples of tests that were made more targeted. The equating difficulty is the average item difficulty of the test expressed in scaled scores. When more items fall in the target range of 0.30 to 0.80, the test becomes more difficult, but the difference in difficulty is accounted for in the analysis. The passing standard is maintained on the more difficult exam forms, while the percent correct necessary to pass decreases as the test becomes more difficult. Thus the test is better psychometrically and does not penalize the candidates.
 

Exam             % Items with difficulty   Equating     Passing     % Correct
                 from 0.30 to 0.80         Difficulty   Standard    to Pass
Exam 1, Year 1          26%                   5.00        5.63        61%
Exam 1, Year 2          47%                   5.74        5.63        48%
Exam 2, Year 1          49%                   5.00        5.73        65%
Exam 2, Year 2          74%                   5.76        5.73        49%
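Under the Rasch model, the percent correct needed to pass is simply the expected score of a borderline candidate located at the cut ability, and that expected score falls as the form becomes harder. A minimal sketch with hypothetical item difficulties in logits (not the scaled-score metric used in the table above):

```python
import math

def expected_percent_correct(cut_ability, difficulties):
    """Expected percent-correct score for a candidate at cut_ability,
    summing Rasch success probabilities across the form's items."""
    probs = [1.0 / (1.0 + math.exp(-(cut_ability - b))) for b in difficulties]
    return 100.0 * sum(probs) / len(probs)

cut = 0.6  # hypothetical cut ability, fixed across forms by equating
easy_form = [-1.5, -1.0, -0.5, 0.0, 0.5]   # off-target, mostly easy items
hard_form = [0.2, 0.4, 0.6, 0.8, 1.0]      # items targeted near the cut

print(f"easy form: {expected_percent_correct(cut, easy_form):.0f}% correct to pass")
print(f"hard form: {expected_percent_correct(cut, hard_form):.0f}% correct to pass")
```

The same borderline candidate needs a higher raw percentage on the easier form than on the targeted one, mirroring the drop from 61% to 48% (Exam 1) and 65% to 49% (Exam 2) in the table: the standard is unchanged, only the raw score needed to reach it moves.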

 
Figure 1 Wright Map of Candidates and Items
 

MEASURE                                 |                               MEASURE
  <more> -------------------- Candidate-+- ITEMS   --------------------- <rare>
    2                               .# T+                                   2
                                     #  |T
          More Able Candidates   #####  |  X      Difficult Items
                              .#######  |  XXXX
                              .#######  |  XXXXXX
                              ########  |  XX
                         .############ S|  X
                     .################  |  XXXXXXXXX
                    ##################  |  XXXXX
                       .##############  |  XX
    1      .##########################  +  XXXXXXX                          1
                .#####################  |S XXXXX                |
                .##################### M|  XXXXXX               |
                     .################  |  XXXXXXX              |
                       .##############  |  XXXXXXXXXX           |   Targeted
                      ################  |  XXXXXXXXXX           |    Items
                    .#################  |  XXXXXXXXXXXXXXXX     |
      Cut Score                    .### S|  XXXXXXXXXXXXX        |
                        .#############  |  XXXXXXXXXXXXXX       |
                               .######  |  XXXXXXXXXXXXXX       |
    0                       ##########  +M XXXXXXXXXX           |            0
                                 .####  |  XXXXXXXXXXX          |
                                   ###  |  XXXXXXXXXX           |
                                  .### T|  XXXXXXXXXXXXX        |
                                   .##  |  XXXXXXXXXXX          |
                                     #  |  XXXXXX
         Less Able Candidates           |  XXXXXXXXXXXX
                                     .  |  XXXXXXXX
                                     .  |  XXXXXXXXX
                                     .  |S XXXXX
   -1                                   +  XXXX                            -1
                                        |  XXXXXXXX
                                        |  XX
                                        |  XXXX
                                        |  XXXXXX
                                        |  XXXX           Easy Items
                                        |  XX
                                        |  X
                                        |  X
                                        |T XXXX
                                        +  XXX
                                        |
   -2                                   +                                  -2
  <less> -------------------- Candidate-+- ITEMS   ------------------<frequent>

Measurement Research Associates, Inc.
505 North Lake Shore Dr., Suite 1304
Chicago, IL  60611
Phone: (312) 822-9648     Fax: (312) 822-9650
 

