The Psychopathy Checklist-Revised (PCL-R; Hare, 1991) is a 20-item summated rating instrument with a three-category (0, 1, 2) response format, completed by a trained rater either during an interview with the patient or from patient records. Cooke and Michie (1998) have attempted to equate diagnostic cut-off scores across countries using a two-parameter IRT model.
Premise: Metric Equivalence = Meaning Equivalence
Quote #1: "As noted above, the presence of a common metric was ensured by anchoring the traits together using the 3 `anchor' items; that is, with the items with similar parameters in Scotland and North America. Using regression procedures, it was possible to demonstrate that the North American diagnostic cut-off score of 30 on the PCL-R North America is metrically equivalent to the diagnostic cut-off score of 25 in Scotland." (p. 30)
The cut-off score of 25 is justified on two grounds: "metric equivalence" and "inferred recidivism equivalence" (p. 40). In essence, metric equivalence is equated directly with meaning equivalence, which then permits a UK user to draw upon the extensive body of North American predictive validity results as an evidence base. This argument, however, requires minimally that the PCL-R makes equivalent measurements of the attribute "psychopathy" in the UK and US cultures.
Unfortunately, the use of a two-parameter IRT model introduces two-dimensional measurement: individuals can differ not only in the amount of psychopathy, but perhaps also in something else that affects the linear measurement of the trait, such that items can discriminate differentially between individuals over different regions of the trait measure. Basically, some items can discriminate very well between high and low scorers on the trait, whereas other items barely discriminate at all. The issue here is that, if only one "thing" (e.g., psychopathy) is being measured by the items, then all items must, by definition, discriminate equally across the range of the trait. For the only way in which individuals can differ from one another is in how much or how little they possess of the trait, because the trait is being measured using equal-interval, additive measurement units. If the unit concatenation operation (arithmetic additivity) remains constant over the range of the trait, then items cannot differ in their discrimination, because such a difference requires that the unit of measurement changes its properties over the range of the scale, such that addition of units no longer fulfils at least the associativity axiom (Michell, 1990, p. 52) over all unit magnitudes on the scale.
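The role of the discrimination parameter can be made concrete with a minimal sketch of the two-parameter logistic (2PL) model. The item parameters below are hypothetical, chosen only to contrast a sharply discriminating item with a weakly discriminating one; under the Rasch (one-parameter) model, the a parameters would be constrained equal across all items.

```python
import math

def p_2pl(theta, a, b):
    """Probability of a positive item response under the 2PL model:
    P(theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Two hypothetical items at the same location (b = 0) but with
# different discriminations (a parameters).
theta_low, theta_high = -2.0, 2.0

# How much each item separates a low scorer from a high scorer:
sharp_gap = p_2pl(theta_high, 2.0, 0.0) - p_2pl(theta_low, 2.0, 0.0)
flat_gap = p_2pl(theta_high, 0.5, 0.0) - p_2pl(theta_low, 0.5, 0.0)

print(round(sharp_gap, 3))  # 0.964 - the high-a item separates them sharply
print(round(flat_gap, 3))   # 0.462 - the low-a item barely separates them
```

The two items respond very differently to the same difference in the latent trait, which is exactly the sense in which a 2PL model lets "something else" besides the amount of trait govern discrimination.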
Let us put this in terms of the measurement of length; that is, let us imagine that we are measuring length instead of psychopathy. We have a ruler (the trait scale = the psychopathy latent trait) that measures in units of length. We measure objects with this ruler, and can then position these objects against our measurement scale (length) in order of magnitude of length (magnitude of psychopathy). The units (mm) on our ruler do not vary in width depending upon where they are on the ruler (the lower range or upper range). So each object's position, relative to every other object measured with our ruler, can be defined using linear, arithmetic concatenation of the fundamental unit of measurement, the mm (or what we assume is a fundamental unit of psychopathy). But if, for some reason, we use a ruler whose mm units vary in width over the range of the scale, we can see that some objects of genuinely different length are likely to be given the same length measurement, because two or more of them now fall within the same "stretched" unit (measuring to the nearest mm). If we systematically stretch our units at the low end of the ruler, and compress them to near unit-width equality at the high end, then for objects of low real length (low scorers on the PCL-R?) we can barely discriminate between them using our "trait" ruler, whereas for high scorers we will be making much sharper discriminations. You can now see what has happened to our assumed "metric" unidimensional measurement in order to achieve this ersatz "measurement": the measured amount of trait in an object varies not only as a function of the amount of trait it possesses, but also in relation to the position of that magnitude on the measurement scale. In short, objects fail to be discriminated solely as a function of the amount of trait they "possess". This does not give one any confidence in attempting to linearly "equate" trait scores between the North American and UK samples.
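The warped-ruler analogy can itself be sketched numerically. The warping function below is entirely hypothetical, chosen only so that units are coarse (stretched) at the low end of the scale and fine (compressed) at the high end:

```python
import math

def warped_reading(true_length):
    """Reading from a hypothetical 'warped' ruler whose units are wide
    at the low end and narrow at the high end. The convex transform
    (length squared, scaled, floored to the nearest unit) is
    illustrative only."""
    return math.floor(true_length ** 2 / 10.0)

# Two genuinely different low magnitudes collapse into one reading...
low_a, low_b = warped_reading(1.0), warped_reading(2.0)    # both read 0
# ...while the same true difference at the high end is discriminated.
high_a, high_b = warped_reading(9.0), warped_reading(10.0)  # read 8 and 10
```

Order is preserved throughout (readings never decrease with true length), but equal true differences no longer map to equal differences in readings: how sharply two objects are discriminated now depends on where on the scale they sit, not solely on how much of the attribute they possess.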
Quote #2: "However, it was found that when the a (discrimination) parameters alone were constrained to be equal [across countries], then the model fitted well. This indicated that the a parameters were essentially equal and that items discriminate as well in Scotland as they do in North America. However, the variation in the b (difficulty, facility) parameters revealed that the level of the underlying trait at which the characteristics of the disorder [become] apparent, differed in the two settings." (p. 28)
What we have in Quote #2 is an admission that the unit-width discrepancies are, in fact, near equal across the two latent trait scales, but that the amount of trait carried by each item differs between the two cultures. This is equivalent to saying that we are using the same unequal-unit-width ruler in the two cultures, but that in the UK all units on the ruler are also shortened by some constant factor, such that the two rulers differ in overall length. The shortening is now assumed to be a perfect linear function of the North American ruler; hence, Cooke and Michie can equate a score of 30 on the US "psychopathy ruler" to 25 on the UK one.
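The equating step itself reduces to a linear map between the two scales. The slope and intercept below are hypothetical stand-ins for the regression coefficients, chosen only to reproduce the reported 30-to-25 correspondence (equal a parameters implying a slope near 1, and the constant shift in the b parameters supplying the intercept):

```python
def equate_na_to_uk(score_na, slope=1.0, intercept=-5.0):
    """Hypothetical linear equating of a North American PCL-R score to
    a UK score. The coefficient values are illustrative, not those
    actually estimated by Cooke and Michie."""
    return slope * score_na + intercept

print(equate_na_to_uk(30))  # -> 25.0
```

The point of the critique above is that such a map is only defensible if both scales are genuinely interval scales: if the scores are merely ordinal, subtracting a constant from a cut-off has no determinate meaning.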
The problem here is that we have now lost all connection with measurement, and are in some strange land where units not only change their width as a function of magnitude, but also linearly change their unequal widths as a function of culture. Surely, the only sensible and honest way to use the PCL-R raw or IRT "scores" is as ordinal magnitudes, where only ordinal relations between different magnitudes hold.
So, I would conclude that the "metric equivalence" argument justifying the simple subtraction of 5 score points from a North American cut-off score is flawed to some unknown degree, predicated as it is on an IRT model that uses two parameters to make ostensibly unidimensional measurement.
Finally, I have ignored in the above the a priori specification of the meaning of psychopathy, and its rules for instantiation. I note that Salekin et al. (1996) refer to the PCL-R as a polythetic model with "more than 15,000 possible variations of psychopathy for scores equal to or greater than 30" (Rogers, 1995). This test badly needs a Rasch model analysis to help sort out both its measurement and its supposed "polythetic" nature! The "polythetic" adjective seems more an excuse for clinicians' unwillingness to think clearly about the meaning instantiation and subsequent measurement of their constructs than a serious, meaningful, construct-definitional adjective.
Paul Barrett, The State Hospital (Carstairs), and University of Liverpool, UK
Cooke, D.J., & Michie, C. (1998). Psychopathy across cultures. In D.J. Cooke, A.E. Forth, & R.D. Hare (Eds.), Psychopathy: Theory, Research, and Implications for Society. Kluwer Academic Publishers.
Hare, R. (1991). Hare Psychopathy Checklist, Revised. New York: Multi-Health Systems Inc.
Michell, J. (1990). An Introduction to the Logic of Psychological Measurement. Lawrence Erlbaum.
Rogers, R. (1995). Diagnostic and Structured Interviewing. Odessa, FL: Psychological Assessment Resources.
Salekin, R.T., Rogers, R., & Sewell, K.W. (1996). A review and meta-analysis of the Psychopathy Checklist and Psychopathy Checklist-Revised: Predictive validity of dangerousness. Clinical Psychology: Science and Practice, 3(3), 203-215.
Test-equating based upon false premises. Barrett P. Rasch Measurement Transactions, 2000, 14:1 p.732