When low-scoring persons succeed more often on a multiple-choice item than the performance of high-scoring persons would imply, guessing on that item is often suspected. In a plot of percent success on that item against scores on the whole test, this situation appears as an ogive whose lower left tail asymptotes not toward zero but toward some greater value, such as 20%.
It is tempting to accept this empirical asymptote as an estimate of the item's "guessing parameter". The trouble is that the credibility of that estimate requires the assumption that the tendency to guess on this item is entirely a quality of the item which has exactly the same fixed effect on all persons. But we know from personal experience as well as research that persons vary in their tendency to guess. Some persons guess a lot, some a little, and some hardly ever. Research into who guesses usually shows that only a small proportion of persons do much guessing. This means that the asymptote observed depends on who has been sampled at this score level and that the value of this asymptote must be expected to vary from sample to sample. Attempts to extract an item characteristic that will be invariant with sampling are doomed, as studies of item "guessing" parameters show.
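The sample dependence can be illustrated with a small simulation. All numbers here are hypothetical: an item far too hard for every person in the sample, five response options, and two samples of low scorers containing different shares of habitual guessers.

```python
import random

random.seed(1)

def observed_asymptote(n_persons, prop_guessers, n_options=5):
    """Empirical success rate on an item far too hard for everyone:
    non-guessers fail (p ~ 0), guessers pick an option at random."""
    successes = 0
    for _ in range(n_persons):
        if random.random() < prop_guessers:
            successes += random.random() < 1.0 / n_options
    return successes / n_persons

# The same item shows a different left-tail "guessing" asymptote
# depending on how many guessers happen to be in the sample.
print(round(observed_asymptote(1000, 0.9), 2))  # near 0.9 * 1/5 = 0.18
print(round(observed_asymptote(1000, 0.3), 2))  # near 0.3 * 1/5 = 0.06
```

The observed asymptote is a property of the sample, not of the item, which is why it fails to replicate.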
What can be done about this problem? We can try to avoid guessing altogether by taking care to match the items we ask to each person's ability level. This is what tailored testing attempts, and, when it works, guessing disappears.
There is another approach that can help when we have asked persons an item so difficult for them that we have provoked them into guessing. The particular low-scoring persons, whose lucky guesses on items too hard for them have prevented the lower left asymptote of the item characteristic curve from approaching zero, can be identified and dealt with. We can use their low scores as indications of their low abilities and we can use the item's low score as an indication of its high difficulty. From these data we can calculate the probability that persons of this ability would succeed on items of this difficulty.
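The dichotomous Rasch model gives this probability directly from the difference between person ability and item difficulty in logits. A minimal sketch (the logit values below are hypothetical):

```python
import math

def rasch_probability(ability, difficulty):
    """Probability that a person of the given ability (logits)
    succeeds on an item of the given difficulty (logits),
    under the dichotomous Rasch model:
    P = exp(b - d) / (1 + exp(b - d))."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

# A low-ability person meeting a hard item: success is improbable.
p = rasch_probability(ability=-2.0, difficulty=2.0)
print(round(p, 3))  # 0.018
```

A right answer from this person on this item is exactly the kind of improbable success the next paragraph describes.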
Since, by definition, a lucky guess is an unexpected, i.e., improbable, right answer, we have found exactly the persons who have done the lucky guessing and thus interfered with the expected item characteristic curve.
This not only allows us to remove lucky guesses from the data and thus obtain an estimate of item difficulty unspoiled by lucky guessing; it also allows us to find and correct person scores that have been exaggerated by extra right answers due to lucky guessing rather than ability.
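Under the approach above, the cleanup can be sketched as follows. Everything in this example is hypothetical: the response matrix, the provisional logit estimates, and the 0.1 probability cutoff for calling a right answer a lucky guess.

```python
import math

def rasch_p(b, d):
    """Dichotomous Rasch success probability for ability b, difficulty d."""
    return 1.0 / (1.0 + math.exp(d - b))

# Hypothetical person abilities and item difficulties (logits),
# e.g. from a provisional calibration.
abilities = [-2.0, 0.0, 1.5]
difficulties = [-1.0, 0.5, 2.5]

# Hypothetical scored responses: rows = persons, columns = items.
responses = [[1, 0, 1],   # the final 1 is a very improbable success
             [1, 1, 0],
             [1, 1, 1]]

CUTOFF = 0.1  # arbitrary: a right answer with p < 0.1 is flagged as a lucky guess

flagged = []
for i, b in enumerate(abilities):
    for j, d in enumerate(difficulties):
        if responses[i][j] == 1 and rasch_p(b, d) < CUTOFF:
            flagged.append((i, j))

print(flagged)  # [(0, 2)] -- person 0 on item 2, where p ~ 0.011

# Corrected person scores with the flagged responses treated as missing.
corrected = [sum(r for j, r in enumerate(row) if (i, j) not in flagged)
             for i, row in enumerate(responses)]
print(corrected)  # [1, 2, 3]
```

In practice the flagged responses would be set to missing and the calibration repeated, so that both item difficulties and person measures are re-estimated without the improbable successes.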
Some comments about guessing. Wright BD. Rasch Measurement Transactions, 1988, 1:2 p.9
The URL of this page is www.rasch.org/rmt/rmt12a.htm