"One characteristic of a developed scientific discipline is the availability of proper measurement models and instruments... Neuropsychology has to take up the challenge and has to incorporate these measurement models into its body of assessment procedures... For the vast majority of psychological test procedures in neuropsychology, only the classical test-theory model has been employed, although it is weak in terms of its measurement properties. Item parameters, like item difficulty [p-values] or item discrimination [point biserials], are dependent on the distribution of ability levels in the respective sample used to estimate them. Thus no specifically objective comparisons of both item difficulties and person abilities are possible." (Willmes 1992 p.103, 109, italics his).
Willmes remarks that "In aphasia testing it is often useful to require a more differentiated item scoring, expressing different degrees of deviation from a correct solution." He then gives an example of the application of Masters' partial credit model to the "reading aloud" section of the Written Language subtest of the Aachen Aphasia Test (AAT).
He demonstrates how the graphical presentation of item step difficulties both confirms and provokes thought about how aphasic patients respond to the reading items. 378 patients were asked to read aloud the sentence: Why does he want to give it to me? If 3 words or fewer were read correctly (in any order), the rating was 0; 4-6 correct words were rated 1; 7-9 were rated 2; but if all 9 words were read correctly, in the right order, the rating was 3. From these data, Rasch "partial credit model" calibrations were estimated.
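The scoring rubric above can be sketched as a small function. This is only an illustration of the rule as described; the function name and signature are my own, not part of the AAT.

```python
def rate_reading(n_correct: int, correct_order: bool) -> int:
    """Rating for the AAT 'reading aloud' sentence (9 words),
    following the rubric described in the text (illustrative only)."""
    if n_correct <= 3:
        return 0                      # little success: 0-3 words correct
    if n_correct <= 6:
        return 1                      # partial success: 4-6 words correct
    if n_correct == 9 and correct_order:
        return 3                      # perfection: all 9 words, right order
    return 2                          # considerable success: 7-9 words
```

Note that the top category demands both completeness and order, which is why (as discussed below) it is the hardest step for an aphasic patient to take.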
How does the linear-appearing categorization of the rating scale actually relate to performance? On inspection of the plot of category probabilities (reproduced here from his Fig. 2), Willmes perceives that, since the words are simple, it is plausible that a relatively small increase in ability would advance a patient from little success, a rating of 0, to considerable success, a rating of 2. Perfection, a rating of 3, would be difficult for an aphasic patient to attain. Hence the zone corresponding to a rating of 2 is much wider than that for a rating of 1. He points out that this information could be used to improve the item. He also notes that the partial credit analysis shows very different step structures for similarly defined 0-1-2-3 scales on other items, demonstrating that similar category definitions do not mandate similar performance intervals.
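The category-probability curves Willmes inspects follow from Masters' partial credit model, in which the probability of each rating at a given ability depends on the cumulative sum of (ability minus step difficulty) terms. A minimal sketch, using illustrative step difficulties of my own choosing (not Willmes' estimates), shows how closely spaced first and second steps plus a distant third step produce a wide zone where a rating of 2 is the most probable outcome:

```python
import math

def pcm_probs(theta: float, steps: list[float]) -> list[float]:
    """Partial credit model category probabilities for one item.

    theta: person ability (logits)
    steps: step difficulties delta_1..delta_m (logits)
    Returns probabilities for categories 0..m.
    """
    # Cumulative sums: category x gets sum_{j<=x} (theta - delta_j),
    # with the empty sum for category 0 defined as 0.
    cum = [0.0]
    for delta in steps:
        cum.append(cum[-1] + (theta - delta))
    exps = [math.exp(c) for c in cum]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative (hypothetical) step difficulties: steps 1 and 2 close
# together and easy, step 3 much harder -- so category 2 is modal
# over a wide ability range, category 1 over a narrow one.
STEPS = [-1.0, -0.5, 2.0]
```

With these values, the modal category is 0 for low abilities, 1 only in the narrow interval between the first two steps, 2 across the wide interval up to the third step, and 3 beyond it, which mirrors the pattern Willmes describes.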
He concludes his discussion by noting that the Rasch approach facilitates large-scale studies of aphasia, because person fit statistics reduce the clinical diagnostic effort by verifying that each subject does indeed exhibit the same pattern of symptoms.
Willmes K. (1992) Psychometric evaluation of neuropsychological test performances. In N. von Steinbüchel, D.Y. von Cramon, E. Pöppel (Eds.), Neuropsychological Rehabilitation, pp. 103-113. New York: Springer.
Neuropsychological test performances. Willmes K. Rasch Measurement Transactions 1994 7:4 p.331
The URL of this page is www.rasch.org/rmt/rmt74q.htm