Researchers across disciplines regularly publish articles that investigate the psychometric properties of a survey instrument, commonly referred to as "validation studies." Although researchers seem well-versed in making arguments for the various aspects of construct validity and in addressing the technical specifics of their findings, one glaring omission is common to most articles: researchers fail to address how others can use the results for direct and meaningful comparisons.
The concept of anchoring is certainly nothing new to the measurement community. Likewise, research has long touted that Rasch models produce sample-free calibrations, meaning that as long as the predominant latent trait is sufficiently detectable, the construct should be defined in an accurate and stable manner across samples, negating the need for representative samples. Yet despite the Rasch community being well aware of both of these concepts, they are rarely extended to their full utility.
I argue that instead of simply making the case that one's instrument appears psychometrically sound and encouraging others to adopt it for studies of their own, researchers should consider going a step further. When researchers are confident that they have defined the construct based on sufficiently unidimensional measures, others may benefit not only from using the same instrument, but also from linking their results onto the same scale for direct comparisons. To do this, researchers need to report the rating scale categories with threshold calibrations and the item calibrations, so that these estimates can serve as anchors for other researchers who wish to bring their measures onto the same scale. This allows for direct comparisons across administrations of the instrument. Of course, the reverse is true as well: researchers looking to replicate findings can easily create rating scale and item anchors and bring their sample of respondents onto the same scale as the initial study for direct comparison. In all instances, exchangeability is taking place, and researchers are essentially able to use the same "currency" to investigate findings. Furthermore, when a common currency is available, substantive and theoretical differences and similarities can be better detected, potentially advancing the knowledge base within a field at a much quicker rate.
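To make the workflow concrete, here is a minimal Python sketch of estimating a person measure under the Andrich rating scale model with the item and threshold calibrations anchored at published values. All calibrations and responses below are hypothetical placeholders, not values from any actual study:

```python
import numpy as np

def category_probs(theta, delta, tau):
    """Rating scale model: P(X = k), k = 0..m, for a person with measure
    theta on an item with difficulty delta and threshold calibrations tau."""
    # Log-numerators are cumulative sums of (theta - delta - tau_j);
    # category 0 has log-numerator 0 by convention.
    num = np.concatenate(([0.0], np.cumsum(theta - delta - tau)))
    num -= num.max()                 # guard against overflow
    p = np.exp(num)
    return p / p.sum()

def person_measure(responses, deltas, tau, tol=1e-6, max_iter=50):
    """Maximum-likelihood person measure with the item difficulties (deltas)
    and thresholds (tau) held fixed at their published (anchored) values.
    Note: extreme scores (all 0s or all top categories) have no finite MLE."""
    m = len(tau)
    theta = 0.0
    for _ in range(max_iter):
        resid, info = 0.0, 0.0
        for x, delta in zip(responses, deltas):
            p = category_probs(theta, delta, tau)
            k = np.arange(m + 1)
            mean = (k * p).sum()                  # model-expected score
            info += ((k - mean) ** 2 * p).sum()   # model variance
            resid += x - mean
        step = resid / info                       # Newton-Raphson step
        theta += step
        if abs(step) < tol:
            break
    return theta

# Hypothetical anchors from a published validation study (logits):
deltas = np.array([-1.2, -0.4, 0.3, 1.1])   # item calibrations
tau = np.array([-1.5, 0.0, 1.5])            # thresholds (4 categories)
print(person_measure([2, 1, 1, 0], deltas, tau))
```

Because the calibrations are fixed rather than re-estimated, every measure produced this way is expressed in the original study's frame of reference, which is precisely the "common currency" described above.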
An example might be an instrument that measures mental toughness among collegiate athletes. With appropriate anchoring, members of two sporting teams who have completed the instrument could be compared. These athletes' performance in competition could then be coupled with the mental toughness findings to determine the extent to which mental toughness matters in competitive sports. Do people who are identified as having the greatest amount of mental toughness shine in competition, as theory might suggest? Of course, this is just a hypothetical example, but the possibilities are nearly endless when one considers the wide array of academic disciplines in which Rasch models are now used.
Of course, there are caveats to this approach. Researchers conducting studies of their own need to ensure the instrument is functioning as desired for their particular sample of respondents. Typical quality control checks should be run on an initial unanchored analysis of the data, as well as after the rating scale threshold and item calibrations have been anchored. Should the data fit the model adequately and other indicators suggest the scores are sufficiently reproducible and valid in both scenarios, taking the findings a step further in this way could have a number of meaningful consequences for knowledge production and information discernment.
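The quality-control step can be sketched as well. Assuming the `category_probs` and `person_measure` functions from the sketch above are in scope (and again using hypothetical data), the following computes the usual infit and outfit mean-square statistics for each item with the anchored calibrations held fixed:

```python
import numpy as np

def item_fit(data, thetas, deltas, tau):
    """Infit/outfit mean squares per item, with calibrations anchored.
    `data` is a persons x items array of observed categories 0..len(tau).
    Values near 1.0 indicate adequate fit; items far outside roughly
    0.5-1.5 warrant scrutiny before trusting anchored comparisons."""
    n_persons, n_items = data.shape
    k = np.arange(len(tau) + 1)
    infit, outfit = np.zeros(n_items), np.zeros(n_items)
    for i in range(n_items):
        E = np.empty(n_persons)   # expected scores
        W = np.empty(n_persons)   # model variances
        for n, theta in enumerate(thetas):
            p = category_probs(theta, deltas[i], tau)
            E[n] = (k * p).sum()
            W[n] = ((k - E[n]) ** 2 * p).sum()
        sq_resid = (data[:, i] - E) ** 2
        outfit[i] = (sq_resid / W).mean()     # unweighted mean square
        infit[i] = sq_resid.sum() / W.sum()   # information-weighted
    return infit, outfit

# Hypothetical responses: 3 persons x 4 items, categories 0..3
data = np.array([[2, 1, 1, 0], [3, 2, 2, 1], [1, 0, 1, 0]])
thetas = np.array([person_measure(row, deltas, tau) for row in data])
print(item_fit(data, thetas, deltas, tau))
```

Running such checks both before and after anchoring, as suggested above, helps confirm that the construct is behaving the same way in the new sample as in the calibrating one.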
With regard to future directions, we know that exchangeability is not something only Rasch advocates value; people from all walks of life appreciate the simplicity and utility of common frames of reference. I believe this is an idea the Rasch community has yet to fully realize in practice, and one that could help others who are uninformed about Rasch models better appreciate their beauty and utility as well.
Kenneth D. Royal
A Suggestion for Taking Rasch-based Survey Results Even Further. Kenneth D. Royal ... Rasch Measurement Transactions, 2012, 25:4, 1341