May 2008

The quest to improve the reliability of certification examinations is ongoing.  Item quality is the basis for educational measurement.  We have observed that removing poorly performing items (usually poorly written items) from scoring actually reduces the error of measurement and improves the reliability of the examination.

Mary E. Lunz, Ph.D.

Deleting Items Improves Reliability on Multiple Choice Examinations

The purpose of written certification examinations is to identify the candidates who are qualified to practice effectively.  The mechanism for accomplishing this is usually four- or five-option multiple-choice items.  The quality of the multiple-choice items included in an examination is the basis for the reliability, or the accuracy, of the decisions made about candidate performance.  In classical terms, this means the item should have a good p-value (percent correct) and point-biserial correlation.  In Rasch terms, it means that the difficulty, as well as the infit and outfit statistics, should be within acceptable limits.  Of course, the items must reasonably represent the pertinent content areas in the field of practice.  Meeting the criteria for good item performance leads to a lower error of measurement and more accurate outcomes for candidates.  Candidate separation reliability ((Standard Deviation² - Standard Error²) / Standard Deviation²) estimates the accuracy of the measured differences among candidate performances.
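The separation reliability formula above can be sketched in a few lines of Python. This is a minimal illustration, not any particular program's implementation; the candidate measures and standard errors below are hypothetical values, and the root-mean-square of the standard errors stands in for the error term.

```python
# Sketch of candidate separation reliability: (SD^2 - SE^2) / SD^2.
# Measures and standard errors below are hypothetical (logits).
import statistics

measures = [-1.2, -0.4, 0.1, 0.6, 1.3, 2.0]      # candidate Rasch measures
errors = [0.30, 0.28, 0.27, 0.27, 0.29, 0.33]    # their standard errors

sd = statistics.pstdev(measures)                          # spread of candidate measures
rmse = (sum(e**2 for e in errors) / len(errors)) ** 0.5   # root-mean-square error

reliability = (sd**2 - rmse**2) / sd**2
print(round(reliability, 3))  # -> 0.924
```

The closer the standard errors are to zero relative to the spread of the measures, the closer this value gets to 1.0.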


On items that are good measures, candidates who do well on the total test have the highest probability of answering the item correctly, while candidates who do poorly have the lowest probability of answering the item correctly.  There are many item-writing guides that reiterate item-writing principles (see Item Development Guidelines).  When multiple-choice items are well written, they distinguish between more and less knowledgeable candidates, reduce the error of measurement, and consequently lead to a higher candidate separation reliability.
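The classical item statistics mentioned earlier, the p-value and the point-biserial correlation, quantify exactly this behavior: an item whose correct answers concentrate among high-scoring candidates has a high point-biserial. A small sketch, using a hypothetical 0/1 response matrix (candidates by items) and the uncorrected total score:

```python
# Classical item statistics: p-value (proportion correct) and point-biserial
# correlation of an item score with the total test score. Data are hypothetical.
import statistics

responses = [   # rows = candidates, columns = items (1 = correct)
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]
totals = [sum(row) for row in responses]

def p_value(item):
    col = [row[item] for row in responses]
    return sum(col) / len(col)

def point_biserial(item):
    col = [row[item] for row in responses]
    mean_x, mean_y = statistics.mean(col), statistics.mean(totals)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(col, totals)) / len(col)
    return cov / (statistics.pstdev(col) * statistics.pstdev(totals))

print(p_value(0), round(point_biserial(0), 3))  # -> 0.6 0.913
```

In practice the item is often removed from the total before correlating (the "corrected" point-biserial); the uncorrected version is shown here only for brevity.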


One way to reduce measurement error is to include a sufficient number of items on the examination, at least 100.  The conventional wisdom is that more items decrease the error of measurement and increase reliability.  However, after reviewing the data from many examinations, we have found that it takes more than a long test to improve reliability.  The consistency of item content within sections and within the test is critical for good reliability.  Another issue is the statistical performance of the items on the test.  Whether item performance is measured with classical statistics or with Rasch IRT, items that do not perform well introduce measurement error and subsequently reduce examination reliability.  In fact, we have found that deleting poorly performing items often increases the reliability of the examination, even though the total number of items decreases.  Some examples that confirm the value of deleting poorly performing items are shown in the Table below.
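This effect can be demonstrated under classical-test-theory assumptions. The sketch below uses KR-20 internal consistency as a stand-in for separation reliability and a hypothetical response matrix in which one item discriminates poorly; dropping that item raises the reliability even though the test gets shorter.

```python
# Sketch: deleting a poorly discriminating item can raise reliability (KR-20
# used here as a classical stand-in). The response matrix is hypothetical.
import statistics

responses = [   # rows = candidates, columns = items (1 = correct)
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 0, 1, 1],
]

def kr20(data):
    k = len(data[0])
    totals = [sum(row) for row in data]
    var_total = statistics.pvariance(totals)
    item_ps = (sum(row[i] for row in data) / len(data) for i in range(k))
    pq = sum(p * (1 - p) for p in item_ps)
    return (k / (k - 1)) * (1 - pq / var_total)

before = kr20(responses)
# Suppose the last item is flagged as a poor discriminator; delete and recompute.
after = kr20([row[:3] for row in responses])
print(round(before, 3), round(after, 3))  # -> 0.453 0.614
```

The poorly performing item adds variance that is unrelated to the trait being measured, so its removal increases the proportion of total-score variance that is consistent across items.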



[Table: number of items and reliability of candidate separation before and after item deletion, for Exams 1 - 5; the numeric values were not recovered.]

Measurement Research Associates, Inc.
505 North Lake Shore Dr., Suite 1304
Chicago, IL  60611
Phone: (312) 822-9648     Fax: (312) 822-9650
