Researchers across disciplines regularly publish articles investigating the psychometric properties of survey instruments, commonly referred to as "validation studies". Although researchers are typically well-versed in making arguments for the various aspects of construct validity and in addressing the technical specifics of their findings, one glaring omission is common to most of these articles: researchers fail to address how others can use the results for direct and meaningful comparisons.
The concept of anchoring is certainly nothing new to the measurement community. Likewise, research has long held that Rasch models produce sample-free calibrations: as long as the predominant latent trait is sufficiently detectable, the construct should be defined in an accurate and stable manner across samples, negating the need for representative samples. Yet although the Rasch community is well aware of both of these concepts, they are rarely extended to their utmost utility.
I argue that instead of simply making the case that one's instrument appears psychometrically sound and encouraging others to adopt it for studies of their own, researchers should consider going a step further. When researchers are confident that they have defined the construct with sufficiently unidimensional measures, others may benefit not only from using the same instrument, but also from linking their results onto the same scale for direct comparisons. To make this possible, researchers need to report the rating scale category threshold calibrations and the item calibrations, so that these estimates can serve as anchors for other researchers who wish to bring their measures onto the same scale. This allows direct comparisons across administrations of the instrument. The reverse is true as well: researchers looking to replicate findings can easily create rating scale and item anchors and bring their own sample of respondents onto the scale presented in the initial study for direct comparison. In all instances, exchangeability is at work, and researchers are essentially using the same "currency" to investigate findings. Furthermore, when a common currency is available, substantive and theoretical differences and similarities can be better detected, potentially advancing the knowledge base within a field at a much quicker rate.
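To make the anchoring idea concrete, here is a minimal hypothetical sketch (not from the original article) of what a secondary researcher does once item calibrations have been published: the item difficulties are held fixed at the published values, and only the person measures are estimated, placing new respondents directly onto the original scale. For simplicity this assumes the dichotomous Rasch model and maximum-likelihood estimation via Newton-Raphson; the function name and values are illustrative, and polytomous instruments would additionally require the published rating scale thresholds.

```python
import math

def estimate_person_measure(responses, anchored_difficulties,
                            tol=1e-6, max_iter=100):
    """Maximum-likelihood person measure under the dichotomous Rasch model,
    with item difficulties anchored at published calibrations (in logits).

    responses: list of 0/1 item scores.
    anchored_difficulties: published item difficulties, one per item.
    """
    raw = sum(responses)
    n = len(responses)
    if raw == 0 or raw == n:
        raise ValueError("Extreme scores have no finite ML estimate.")
    # Starting value from the raw score
    theta = math.log(raw / (n - raw))
    for _ in range(max_iter):
        # Model-expected probability of success on each anchored item
        expected = [1.0 / (1.0 + math.exp(-(theta - d)))
                    for d in anchored_difficulties]
        residual = raw - sum(expected)               # score residual
        info = sum(p * (1.0 - p) for p in expected)  # test information
        step = residual / info                       # Newton-Raphson update
        theta += step
        if abs(step) < tol:
            break
    return theta

# Illustrative use: difficulties as published in an earlier study (hypothetical)
published = [-1.0, -0.5, 0.0, 0.5, 1.0]
measure = estimate_person_measure([1, 1, 0, 1, 0], published)
```

Because the difficulties are never re-estimated, a measure computed this way is directly comparable to measures from the original study, which is the exchangeability the article advocates.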
An example might be an instrument that measures mental toughness among collegiate athletes. With appropriate anchoring, members of two sporting teams who have completed the instrument could be compared directly. These athletes' performance in competition could then be coupled with the mental toughness findings to determine the extent to which mental toughness matters in competitive sports. Do the people identified as having the greatest mental toughness shine in competition, as theory might suggest? Of course, this is just a hypothetical example, but the possibilities are nearly endless given the wide array of academic disciplines in which Rasch models are now used.
There are, of course, caveats to this approach. Those conducting studies of their own need to ensure the instrument functions as intended for their particular sample of respondents. Typical quality control checks should be run both on an initial unanchored analysis of the data and again after the rating scale threshold and item calibrations have been anchored. Should the data fit the model adequately in both scenarios, and other indicators suggest the scores are sufficiently reproducible and valid, taking findings this step further could have a number of meaningful consequences for knowledge production and information discernment.
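The quality control checks mentioned above typically include person and item fit statistics. As a hypothetical illustration (again assuming the dichotomous Rasch model; the function name is my own), the standard infit and outfit mean-square statistics for one person can be computed from the squared score residuals, with values near 1.0 suggesting adequate fit:

```python
import math

def fit_mean_squares(responses, theta, difficulties):
    """Infit and outfit mean-square fit statistics for one person's
    responses under the dichotomous Rasch model.

    Values near 1.0 indicate the responses fit model expectations."""
    ps = [1.0 / (1.0 + math.exp(-(theta - d))) for d in difficulties]
    variances = [p * (1.0 - p) for p in ps]
    # Outfit: mean of standardized squared residuals (unweighted)
    z2 = [(x - p) ** 2 / v for x, p, v in zip(responses, ps, variances)]
    outfit = sum(z2) / len(responses)
    # Infit: information-weighted mean square
    infit = (sum((x - p) ** 2 for x, p in zip(responses, ps))
             / sum(variances))
    return infit, outfit
```

Running such checks on both the unanchored and the anchored analyses, as suggested above, guards against importing calibrations that do not suit the new sample.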
With regard to future directions, the concept of exchangeability is not something only Rasch advocates value; people from all walks of life appreciate the simplicity and utility of common frames of reference. I believe this is a topic the Rasch community has yet to fully realize in practice, and one that could help those uninformed about Rasch models better appreciate their beauty and utility as well.
Kenneth D. Royal
Royal, K.D. (2012). A Suggestion for Taking Rasch-based Survey Results Even Further. Rasch Measurement Transactions, 25:4, 1341
The URL of this page is www.rasch.org/rmt/rmt254c.htm