# Catching Copiers: Cheating Detection

For every pair of students (though the set of pairs can be reduced to those who could plausibly have cheated), plot the count of shared correct responses against the count of shared incorrect distractors. This produces a triangular empirical distribution. Outliers away from the triangle share both special knowledge and special ignorance: a strong indicator that the two response strings are not independent.
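This pairwise tally can be sketched in a few lines. The answer key and response strings below are invented for illustration; real use would read them from the test's scored data file.

```python
# Sketch: for every pair of examinees, count shared correct answers and
# shared identical wrong distractors. Key and responses are hypothetical.
from itertools import combinations

key = ["B", "C", "A", "D", "B"]          # correct option for each item
responses = {                            # examinee id -> chosen options
    "s1": ["B", "C", "A", "D", "B"],
    "s2": ["B", "C", "A", "A", "B"],
    "s3": ["B", "A", "A", "A", "C"],
}

def shared_counts(r1, r2, key):
    """Return (shared correct answers, shared identical wrong options)."""
    same_right = sum(a == b == k for a, b, k in zip(r1, r2, key))
    same_wrong = sum(a == b != k for a, b, k in zip(r1, r2, key))
    return same_right, same_wrong

for (n1, r1), (n2, r2) in combinations(responses.items(), 2):
    print(n1, n2, shared_counts(r1, r2, key))
```

Plotting `same_right` against `same_wrong` for all pairs yields the triangular distribution described above.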

Unexpectedly similar strings of test responses by a pair of examinees need to be identified and diagnosed. Similarities can be caused by test-wiseness, curriculum effects, clerical errors or even copying.

Copying on a multiple-choice test can result in two very similar examinee response strings. The strings can also differ somewhat, because of sloppy or partial copying. Rasch fit statistics can detect similarity between an examinee's responses and the Guttman pattern of expected responses, but they do not identify similar, non-Guttman, response strings. Both strings can show reasonable Rasch fit. The strings may also produce different ability estimates. How then can we expose response strings which are unacceptably similar?

Sir Ronald Fisher ("Statistical Methods and Scientific Inference" New York: Hafner Press, 1973 p.81) differentiates between "tests of significance" and "tests of acceptance". "Tests of significance" answer hypothetical questions: "how unexpected are the data in the light of a theoretical model for their construction?" Different models give different results. No result is final. Since observing two identical, but reasonable, response strings may be just as likely as observing two different, but reasonable, response strings, arguing for or against copying with a significance test can be inconclusive.

"Tests of acceptance", however, are concerned with whether what is observed meets empirical requirements. Instead of a theoretical distribution, local experience provides the empirical distribution. The "test" question is not "how unlikely are these data in the light of a theory?", but "how acceptable are they in the light of their location in the empirical distribution?"

For MCQ tests we can base our "test of acceptance" on the criterion of too many shared responses (right or wrong). Each response string is compared with every other response string. These pair-wise comparisons build an empirical distribution which describes this occasion completely. Acceptable pairs of performances define the bulk of this distribution. Outliers in the direction of too much similarity become unacceptably similar performances. Outliers in other directions indicate scanning misalignment, guessing, and other off-variable behavior.

*Figure: Diagnostic Plot of Unexpected Response Similarities*

This "test of acceptance" can be implemented with a simple plot. Imagine 200 items taken by 1,000 examinees. When each examinee is compared with every other examinee there are (1000 × 999)/2 = 499,500 pair-wise comparisons. The percentage of possible identical responses (same right answers and same wrong options) to the 200 MCQ items will be between 0% (for two entirely differently performing examinees) and 100% (e.g., for two examinees who obtain perfect scores). It is not this percentage by itself that is unacceptable; it is its relationship with the ability levels of the pair of examinees. The ability levels can be represented in many ways. A simple approach is to use the pair's average correct score. The Figure shows such a plot. Virtually all points representing pairs of response strings fall in a triangle, defining the empirically acceptable distribution. The "Outlier" point exposes a pair of examinees with an unusually large percentage of shared responses for their ability level.
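A minimal sketch of the data behind such a plot, assuming a hypothetical response matrix and answer key (with 1,000 examinees and 200 items this loop would yield the 499,500 points described above):

```python
# Sketch: one (percent shared, average score) point per pair of examinees.
# Data are invented for illustration only.
from itertools import combinations

key = ["B", "C", "A", "D"]
responses = {
    "s1": ["B", "C", "A", "D"],
    "s2": ["B", "C", "B", "D"],
    "s3": ["A", "D", "B", "C"],
}

def pair_point(r1, r2, key):
    """Percent of items with the identical chosen option, and the
    pair's average number-correct score."""
    identical = sum(a == b for a, b in zip(r1, r2))
    pct_shared = 100.0 * identical / len(key)
    avg_score = (sum(a == k for a, k in zip(r1, key)) +
                 sum(b == k for b, k in zip(r2, key))) / 2.0
    return pct_shared, avg_score

points = {(n1, n2): pair_point(r1, r2, key)
          for (n1, r1), (n2, r2) in combinations(responses.items(), 2)}
```

Scatter-plotting `points` (percent shared on one axis, average score on the other) reproduces the triangular acceptable region; pairs far above the triangle for their score level are the candidates for investigation.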

"Correct" and "Incorrect" response similarities can be investigated separately. A plot of "same right answers" against average correct score exposes the acceptability of shared "correct" performance. A plot of "same wrong options" against the number of same wrong items (regardless of distractor choice) exposes the acceptability of shared "incorrect" performance.
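Both similarity measures can be computed in one pass over the items. The functions below are an illustrative sketch, not the article's own code; the key and response strings are assumed.

```python
# Sketch: separate "correct" and "incorrect" similarity statistics for
# one pair of examinees, using a hypothetical answer key.

key = ["B", "C", "A", "D", "B"]

def correct_similarity(r1, r2, key):
    """Count of items both examinees answered correctly, with the
    pair's average number-correct score (the plot's other axis)."""
    same_right = sum(a == b == k for a, b, k in zip(r1, r2, key))
    avg_score = (sum(a == k for a, k in zip(r1, key)) +
                 sum(b == k for b, k in zip(r2, key))) / 2.0
    return same_right, avg_score

def incorrect_similarity(r1, r2, key):
    """Count of identical wrong options, with the count of items both
    examinees got wrong regardless of which distractor each chose."""
    same_wrong_option = sum(a == b != k for a, b, k in zip(r1, r2, key))
    both_wrong = sum(a != k and b != k for a, b, k in zip(r1, r2, key))
    return same_wrong_option, both_wrong
```

A pair whose `same_wrong_option` count approaches its `both_wrong` count shares not just ignorance but the same specific misconceptions, which is the more suspicious pattern.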

Since copying is only one cause of highly similar response strings, a non-statistical investigation must also be done. Statistical evidence cannot prove copying. Nevertheless, inspection of this kind of empirical distribution does identify pairs of examinees with response strings so unusually similar that it is unreasonable to accept these strings as produced by the same response process which generated the other response strings.

John M. Linacre

Linacre J. M. (1992). Catching Copiers: Cheating Detection. Rasch Measurement Transactions, 6:1, 201
