Short, cheap screening tests are used to tell which examinees must take time-consuming, expensive tests. A good screening test distributes resources efficiently. A poor screening test wastes examinee time and lowers testing efficiency.
Colliver, Vu and Barrows (CVB), in AERA (Division I) award-winning research, evaluate the screening effectiveness of a standardized-patient (SP) examination for a medical clerkship by means of Receiver Operating Characteristic (ROC) signal detection. All of CVB's students took the 3-day examination of 18 SP cases. CVB consider using the first day as a screen for the remaining two days. They want to minimize testing effort for clear passes, so they investigate how results differ when a pass on the first day is accepted as a pass on all three days, while a fail on the first day requires the examinee to complete the remaining two days. CVB's "true positive" success rate is the proportion of examinees passing the full examination who pass the screening test. The higher the "true positive" rate, the more resources the screening test saves. Their "false positive" failure rate is the proportion of examinees failing the full examination who pass the screening test. The lower the "false positive" rate, the fewer unqualified students are passed.
Empirical "true positive" and "false positive" rates for screening tests of 2, 4, 6, 8, and 10 standardized patients are reported by CVB in ROC format (ROC Rate % plot). Each curve describes the ROC of one screening test. The five points along each curve mark five pass-fail cut-points. From the lower left to the upper right of each curve, these cut-points range from one standard error of measurement (SEM) above the mean pass-fail level of the SP cases in the screening test to one SEM below. The center point marks a cut-point at the mean. The ROC curves are interpolated by straight lines between these five points.
In principle, the closer an ROC curve lies to the top left, the better the screening test; here the longer screening tests lie closer to the top left, i.e., longer tests are better. The best cut-point for a test is a point near the top left, where the "true positive" rate is much greater than the "false positive" rate. This is usually close to the cut-point corresponding to 0.5 SEM above the screening test's mean SP pass-fail level.
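When credit for a correct selection and debit for an incorrect one are taken as equal, picking the best cut-point amounts to maximizing the gap between the two rates. A minimal sketch, using illustrative rates rather than CVB's published data:

```python
# Equal-weight selection rule: credit for a "true positive" equals the
# debit for a "false positive", so the best cut-point maximizes TP - FP.
# The rates below are illustrative, not CVB's published data.

cut_points = [-1.0, -0.5, 0.0, 0.5, 1.0]     # cut-point in SEM above the mean
tp_rate    = [0.97, 0.93, 0.86, 0.77, 0.60]  # "true positive" success rates
fp_rate    = [0.50, 0.33, 0.19, 0.09, 0.03]  # "false positive" failure rates

# Choose the cut-point with the largest TP - FP gap.
best = max(zip(cut_points, tp_rate, fp_rate), key=lambda c: c[1] - c[2])
print(f"Best cut-point: {best[0]:+.1f} SEM (TP - FP = {best[1] - best[2]:.2f})")
```

With these rates the winner is the cut-point 0.5 SEM above the mean, in line with the rule of thumb above.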
Unfortunately, ROC curves are in a non-linear % metric. Consequently it is difficult to "measure" distances between ROC curves or to calculate how much "nearness" costs in terms of extra SP cases administered. The ROC curves, however, can be linearized by converting them to log-odds (Log-Odds plot). Now the ROC curves are seen to be empirical manifestations of parallel straight lines that relate success on the screening test to success on the full test in a simple way. The unequal spacing of the 5 cut-points across tests exposes the arbitrariness of cut-points based on SP (or item) distribution.
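The log-odds conversion is straightforward: a rate p becomes ln(p/(1-p)) logits. A minimal sketch, using illustrative rate pairs (not CVB's published data), shows how one screening test's five cut-points, which curve in the % metric, fall along a straight line in logits:

```python
import math

def logit(p):
    """Convert a rate (proportion) to log-odds, in logits."""
    return math.log(p / (1.0 - p))

# Illustrative ("true positive", "false positive") rate pairs for the five
# cut-points of one hypothetical screening test -- not CVB's published data.
points = [(0.60, 0.07), (0.77, 0.14), (0.86, 0.23), (0.93, 0.40), (0.97, 0.62)]

for tp, fp in points:
    print(f"TP {tp:.2f} -> {logit(tp):+.2f} logits; "
          f"FP {fp:.2f} -> {logit(fp):+.2f} logits; "
          f"difference {logit(tp) - logit(fp):.2f}")
# The nearly constant difference (about 3 logits here) shows the five
# cut-points lying along one straight line of unit slope in log-odds.
```

Each screening test traces its own such line; tests of different lengths give roughly parallel lines, which is what the Log-Odds plot displays.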
The line nearest top left is the "best" test, and the "best" cut-point has not changed. Nevertheless, the misleading geometry of the ROC plot is exposed. Now there are many reasonable "nearness" rules. Any diagonal line, top-left to bottom-right, could suffice.
The ROC rule specifies that credit for correct selection and debit for incorrect selection are equal. But this is seldom so. For the screening test that CVB describe, the penalty for a "false positive" (passing an incompetent examinee) is much greater than the benefit for a "true positive" (shorter test administration for a competent examinee). Otherwise the test would long since have been shortened. An Examination Board's preferred trade-off between debits and credits can be expressed by "best" cut-point contours on either plot. But the log-odds plot has important advantages over the ROC plot: 1) Screening test lines are parallel, straight and easily extrapolated. 2) Reasonable lines for screening tests of other lengths are simple to interpolate between the existing lines. 3) Alternative "nearness" rules can be expressed and evaluated as simple straight lines. One is shown for when the Board specifies that the penalty for passing an unqualified candidate is twice the benefit for passing a qualified candidate on the screening test.
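The 2:1 rule can be sketched numerically: doubling the debit for a "false positive" moves the preferred cut-point toward the strict end of the scale. The rates below are illustrative, not CVB's data:

```python
# Board-specified trade-off: the penalty for passing an unqualified
# candidate is twice the benefit of a shortened test for a qualified one,
# so the preferred cut-point maximizes TP - 2*FP rather than TP - FP.
# The rates below are illustrative, not CVB's published data.

cut_points = [-1.0, -0.5, 0.0, 0.5, 1.0]     # cut-point in SEM above the mean
tp_rate    = [0.97, 0.93, 0.86, 0.76, 0.64]  # "true positive" success rates
fp_rate    = [0.50, 0.33, 0.19, 0.11, 0.04]  # "false positive" failure rates

def best_cut(penalty):
    """Cut-point maximizing TP - penalty * FP."""
    return max(zip(cut_points, tp_rate, fp_rate),
               key=lambda c: c[1] - penalty * c[2])[0]

print("Equal weighting picks", best_cut(1.0), "SEM")  # the implicit ROC rule
print("2:1 penalty picks    ", best_cut(2.0), "SEM")  # the Board's rule
```

With these rates, equal weighting favors the cut-point at the mean, but the 2:1 penalty favors the strictest cut-point, 1 SEM above the mean: on the log-odds plot, steepening the trade-off line slides the "best" point down each test's line toward fewer false positives.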
John Michael Linacre
Colliver JA, Vu NV, Barrows HS. Screening test length for sequential testing with a standardized-patient examination: a Receiver Operating Characteristic (ROC) analysis. Academic Medicine, 67(9): 592-595, September 1992.
Evaluating a ROC screening test. Linacre JM. Rasch Measurement Transactions 1994 7:4 p.317-8
The URL of this page is www.rasch.org/rmt/rmt74a.htm