Using the CUTLO Procedure to Investigate Guessing

Guessing and receiving unearned credit is a possibility with any multiple-choice examination. Rogers (1999) identified three types of guessing: random, cued, and informed. Random guessing refers to blindly choosing a response to an item. Cued guessing refers to making a response based on some sort of stimulus in a test item, such as wording cues, cues associated with item stems, or choices among the distracters. Informed guessing refers to making a response based on some partial knowledge or on misinformation. One would expect an individual who relies solely on random guessing to have the lowest probability of passing an examination; however, cued guessing and informed guessing would likely increase an individual's chance of passing an examination.

Recently, four non-physicians with doctoral degrees in such areas as clinical psychology, educational psychology, evaluation, and curriculum and instruction attempted to pass the American Board of Family Medicine's (ABFM) certification examination, the aim being to determine how savvy test-takers without medical knowledge or training would fare on the 350-item examination (O'Neill, Royal, & Puffer, 2011). As expected, the non-physicians failed miserably. In fact, the failures were so dismal that three of the four non-physicians failed to outscore a single physician from a pool of 10,818. The one non-physician who did outscore physicians managed to outscore only four: two international medical graduates and two US medical graduates who left 33 and 79 items unanswered (scored as incorrect) and so failed to complete the examination. Even then, it can be argued that the highest-performing non-physician outscored any physicians at all only because of his background in clinical psychology, which likely aided his performance on the ABFM examination, as 7% of the test items are classified as psychogenics.

The minimum passing standard for the 2009 certification examination was a scaled score of 390 on a scale of 200-800. The four non-physicians scored below the reportable range, with scores of 20, 80, 90, and 160. To investigate the effects of guessing, four physicians who scored exactly 390 were also included in the analysis for comparative purposes.

A Guttman (1944) scale of the 50 most unexpected responses (see Figure 1) clearly shows that the four non-physicians managed to correctly guess numerous items that, based on their ability estimates, they would have been expected to answer incorrectly. In the scalogram, each "1" represents a correct response where an incorrect response was expected, each "0" represents an incorrect response where a correct response was expected, and each "." represents an expected response.
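For readers who want to reproduce this kind of diagnostic outside of Winsteps, the sketch below illustrates the underlying logic of the dichotomous Rasch model. It is a minimal illustration rather than the Winsteps algorithm: the function and variable names are hypothetical, and it assumes that person measures and item difficulties (in logits) have already been estimated. Winsteps produces the equivalent ranking in its "most unexpected responses" output, which Figure 1 displays as a scalogram.

import math

def rasch_p(theta, b):
    # Probability of a correct response under the dichotomous Rasch model.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def most_unexpected(responses, measures, difficulties, n=50):
    # responses maps (person, item) to 1 (correct) or 0 (incorrect);
    # measures and difficulties hold logit estimates for persons and items.
    # Returns the n responses with the largest standardized residuals.
    flagged = []
    for (person, item), x in responses.items():
        p = rasch_p(measures[person], difficulties[item])
        # Large positive residual: unexpected success (possible lucky guess).
        # Large negative residual: unexpected failure (careless error).
        z = (x - p) / math.sqrt(p * (1.0 - p))
        flagged.append((abs(z), person, item, x, round(p, 2)))
    return sorted(flagged, reverse=True)[:n]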

To further investigate the effects of guessing, the Winsteps CUTLO procedure was applied. CUTLO allows researchers to exclude responses in situations where guessing is highly likely, as indicated by a low modeled probability of success. A CUTLO value of 2 was used in this analysis, which excluded responses to any items located 2 or more logits above a participant's ability estimate. Table 1 compares the non-physicians' scaled scores with and without the CUTLO procedure.
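The trimming rule that CUTLO implies is simple enough to express directly. The sketch below is a minimal illustration under the convention described above (a response is dropped when the item sits 2 or more logits above the person); it is not the Winsteps implementation, which treats the trimmed responses as missing and then re-estimates the measures, and the function name and data structures are hypothetical.

def apply_cutlo(responses, measures, difficulties, cutlo=2.0):
    # Drop responses to items located cutlo or more logits above the
    # responding person's ability estimate, i.e. responses for which a
    # correct answer is improbable enough to be plausibly a lucky guess.
    kept = {}
    for (person, item), x in responses.items():
        if difficulties[item] - measures[person] >= cutlo:
            continue  # treat the response as missing
        kept[(person, item)] = x
    return kept

In Winsteps itself, the equivalent step is specifying CUTLO=2 in the control file and re-running the analysis.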

Two of the non-physicians' scores (Non-MD1 and Non-MD2) changed only slightly as a result of the CUTLO procedure, while the other two dropped noticeably. The unstable scores for Non-MD3 and Non-MD4 provide evidence that these individuals' scores were inflated by guessing, as these two participants received credit for correctly answering items located well above their ability estimates. While it could be argued that all four non-physicians relied heavily on guessing, it is clear that two of the four relied on it even more heavily.

Additional evidence to support this claim is found when subtest scoring is investigated. The two non-physicians with backgrounds in psychology (Non-MD1 and Non-MD2) scored considerably higher in the psychogenics area than the two non-physicians with backgrounds in evaluation and curriculum and instruction (Non-MD3 and Non-MD4). This suggests that two of the non-physicians had some content knowledge of psychogenics, or that their responses were based in part on informed guessing. Although the CUTLO analysis suggests that some guessing occurred, overall the Rasch analysis proved to be fairly robust.

Critics of the Rasch model often argue that the absence of a guessing parameter is a limitation of the model. This is simply not true. In cases like this one, unexpected responses are easily identified, and persons who are likely to have guessed can be detected quite well. What to do with the guessed responses, on the other hand, is a separate policy issue. In any case, the fact remains that valid inferences can be made about who was likely to have guessed without any need for additional model parameterization.
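For readers unfamiliar with the parameter at issue, the contrast can be made explicit. The following is a standard statement of the two models, not drawn from the article: under the dichotomous Rasch model the probability of success depends only on the difference between person ability and item difficulty, whereas the three-parameter logistic (3PL) model absorbs guessing into a lower-asymptote parameter c_i,

P_{\mathrm{Rasch}}(X_{ni}=1) = \frac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}},
\qquad
P_{\mathrm{3PL}}(X_{ni}=1) = c_i + (1 - c_i)\,\frac{e^{a_i(\theta_n - b_i)}}{1 + e^{a_i(\theta_n - b_i)}}.

Because the Rasch model ties the expected probability to \theta_n - b_i alone, a correct response with a very low expected probability stands out as a diagnosable anomaly (as in Figure 1) instead of being averaged into an estimated lower asymptote.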

Kenneth D. Royal, Thomas R. O'Neill

The American Board of Family Medicine

Guttman, L. (1944). A basis for scaling qualitative data. American Sociological Review, 9, 139-150.

O'Neill, T. R., Royal, K. D., & Puffer, J. P. (2011). Performance on the American Board of Family Medicine Certification Examination: Are Superior Test Taking Skills Alone Sufficient to Pass? Journal of the American Board of Family Medicine, 24(2), 175-180.

Rogers, H. J. (1999). Guessing in multiple-choice tests. In G. N. Masters & J. P. Keeves (Eds.), Advances in measurement in educational research and assessment (pp. 23-42). Oxford, UK: Pergamon.

MOST UNEXPECTED RESPONSES
Candidate   Scaled Score  |Item: Easier                                Harder
  MD1         390         |.0..0.............................................
  MD2         390         |....0............................................1
  MD3         390         |..0..............................................1
  MD4         390         |...0..............................................
  Non-MD1     160         |.......................11......1.1.1.111...1....1.
  Non-MD2      90         |.................1...11111.1..1.......1.111..11...
  Non-MD3      80         |0............1..11.111.11.1.1.1.1.....1...1....1..
  Non-MD4      20         |0....11111111111111.11..1..1.1....1.1.1.1...1.....
                          |--------------------------------------------------

Figure 1. Guttman Scalogram of the 50 most unexpected responses.

 

Table 1. Comparing Non-Physicians' Performance by Scaled Scores
                      Non-MD1   Non-MD2   Non-MD3   Non-MD4
  Regular Analysis      169        98        90        29
  With CUTLO            167        96        75        14


Using the CUTLO Procedure to Investigate Guessing, Kenneth D. Royal, Thomas R. O'Neill ... Rasch Measurement Transactions, 2011, 25:1, 1319-20



