Undesirable Item Discrimination

John de Jong constructs second-language listening comprehension tests in English, French and German for the Dutch National Institute for Educational Measurement. Each test item is based on a segment of native speaker spoken language (e.g., an excerpt from German radio). Each listening comprehension test is given to native speakers of the same age as the Dutch students with whom it is to be used. Test results are Rasch analyzed.

In 1987 John showed me analyses of two tests. In one test a listening item was much less discriminating [poorer fit, lower point-biserial] than the other items. John's inspection of this item showed that, in the speech segment for that item, the speaker contradicted himself. This made it difficult even for native speakers to know what the speaker was trying to say. Since this could explain the item's poor discrimination, John deleted it.

An item that is clear to native speakers but problematic for non-native speakers might seem an ideal test of second-language listening comprehension. It might therefore be thought that an ideal listening comprehension item is one that discriminates sharply between native and non-native speakers.

John also showed me an unusually discriminating item [overfit, high point-biserial] from the other test. Native speakers [higher performers overall] did unusually well on this item relative to Dutch students [lower performers overall]. An inspection of the item showed that it was based on a conversation about German politics. The native-speaking (German) students would have an advantage on this item because of their ordinary knowledge of German politics. The high discrimination arose because this item put Dutch students at a disadvantage unrelated to their knowledge of the German language.

This is an example of an item which is highly discriminating because of its sensitivity to a second irrelevant dimension that is highly correlated with the variable of interest. The contaminating influence of a second dimension often manifests itself in unusual item discrimination. For this reason, John deleted the item.

Both unusually low and unusually high discriminations merit further investigation.
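The kind of screening described above can be sketched numerically. The following Python sketch (illustrative only, not de Jong's actual procedure) computes each item's point-biserial correlation with the total score and flags unusually low or high values for investigation; the response matrix and the 0.2/0.8 thresholds are invented for the example.

```python
# Illustrative sketch only -- not de Jong's actual analysis. The response
# matrix and the 0.2 / 0.8 screening thresholds are invented for the example.
import math

def point_biserial(item, totals):
    """Correlation between one item's 0/1 scores and persons' total scores."""
    n = len(item)
    m = sum(totals) / n
    s = math.sqrt(sum((t - m) ** 2 for t in totals) / n)
    p = sum(item) / n  # proportion answering the item correctly
    if s == 0 or p in (0.0, 1.0):
        return 0.0  # undefined: no score spread, or a zero-variance item
    m1 = sum(t for x, t in zip(item, totals) if x == 1) / (n * p)
    return (m1 - m) / s * math.sqrt(p / (1 - p))

# Toy 6-person x 4-item matrix: item 3 is answered only by the top scorers
# (suspiciously discriminating); item 4 is answered mainly by the weakest
# (suspiciously undiscriminating).
responses = [
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
totals = [sum(row) for row in responses]
for j in range(4):
    r = point_biserial([row[j] for row in responses], totals)
    flag = "  <- investigate" if not 0.2 <= r <= 0.8 else ""
    print(f"item {j + 1}: r_pb = {r:+.2f}{flag}")
```

With this toy data, items 1 and 2 fall in the unremarkable middle range, while items 3 and 4 are flagged at the high and low extremes respectively.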

Excerpted from a note to Ben Wright, dated August 1987.

Geoff N. Masters

Further reading:

Masters G.N. 1988. Item discrimination: when more is worse. Journal of Educational Measurement 25:1, 15-29.

Undesirable item discrimination. Masters GN. Rasch Measurement Transactions, 1993, 7:2 p.289

Also see
Journal of Educational Measurement, 25(1), 15-29, March 1988
Item Discrimination: When More Is Worse
Geoffrey N. Masters
High item discrimination can be a symptom of a special kind of measurement disturbance introduced by an item that gives persons of high ability a special advantage over and above their higher abilities. This type of disturbance, which can be interpreted as a form of item "bias," can be encouraged by methods that routinely interpret highly discriminating items as the "best" items on a test and may be compounded by procedures that weight items by their discrimination. The type of measurement disturbance described and illustrated in this paper occurs when an item is sensitive to individual differences on a second, undesired dimension that is positively correlated with the variable intended to be measured. Possible secondary influences of this type include opportunity to learn, opportunity to answer, and test wiseness.
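The disturbance the abstract describes can be illustrated with a small simulation (all parameters invented for the sketch): nine items respond only to the intended variable, while one item also loads on a second, undesired dimension positively correlated with it. The contaminated item shows inflated discrimination relative to the clean items.

```python
# Minimal simulation (all parameters invented) of the disturbance described
# above: item 0 also loads on a nuisance dimension eta correlated 0.8 with
# the intended variable theta, inflating its discrimination.
import math
import random

random.seed(1)

def rasch_p(x, b):
    """Rasch model probability of success at location x, item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(x - b)))

def point_biserial(item, totals):
    """Correlation between one item's 0/1 scores and persons' total scores."""
    n = len(item)
    m = sum(totals) / n
    s = math.sqrt(sum((t - m) ** 2 for t in totals) / n)
    p = sum(item) / n
    m1 = sum(t for x, t in zip(item, totals) if x == 1) / (n * p)
    return (m1 - m) / s * math.sqrt(p / (1 - p))

N, K = 2000, 10
data = []
for _ in range(N):
    theta = random.gauss(0, 1)                     # intended variable
    eta = 0.8 * theta + 0.6 * random.gauss(0, 1)   # nuisance, corr 0.8 with theta
    row = []
    for j in range(K):
        b = -1.0 + 0.2 * j                         # spread of item difficulties
        signal = theta + eta if j == 0 else theta  # only item 0 is contaminated
        row.append(1 if random.random() < rasch_p(signal, b) else 0)
    data.append(row)

totals = [sum(row) for row in data]
r = [point_biserial([row[j] for row in data], totals) for j in range(K)]
print(f"contaminated item 0:  r_pb = {r[0]:.2f}")
print(f"clean items, average: r_pb = {sum(r[1:]) / (K - 1):.2f}")
```

The contaminated item's higher point-biserial reflects its sensitivity to theta-plus-eta rather than to theta alone, exactly the pattern that would make it look like the "best" item under a discrimination-maximizing selection rule.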


Typical reasons for excessively high item discrimination include:
1. The item is really two items compressed together, e.g., "Add 2 and 6, then subtract 3. The answer is ...."
2. The item contains a highly-correlated extra dimension, e.g., a difficult math item with difficult readability: "At their syzygy, Jupiter, Saturn and Neptune ... what is the distance between Jupiter and Neptune?"
3. The choice of distractors is poor, e.g., three obviously wrong distractors and one correct option.
4. The stem or answer of one item is a strong clue to the answer of another item.
5. One item summarizes the others. On many surveys, the last item is "Overall, ....", which deliberately summarizes all the other items.




The URL of this page is www.rasch.org/rmt/rmt72f.htm