Bad Things Can Happen to a Good Field!

William Fisher points out that fields of scientific research share many features, good and bad. He identifies "The Emperor's New Methods" (Spence et al., 2003), an account of how research decisions are made in the field of genetics, as a cautionary tale for us all. It identifies four thematic pitfalls, and, unwittingly, our field may fall into the same traps.

Theme 1: The Most Popular Approach Being Taken as the Only Acceptable One.

This is prone to happen when many new researchers are entering a field. Each asks, "What is the appropriate method?" and is told the most familiar one. An example is the analysis of rating scales: for some 20 years after the introduction of unidimensional polytomous Rasch models, many researchers continued to analyze rating scales by routinely dichotomizing them.

Theme 2: Scientific Practice Based on Myth Rather Than Evidence.

Rasch analysis has its own share of myths. A widely circulated one is that a large sample size is necessary. Another is the supposedly deleterious effect of "significant" misfit, leading to rejection of the model. Ben Wright's advice was to analyze all the data, then put the noticeably misfitting portion of the data to one side, reanalyze, and compare the findings; rarely was there any noticeable difference. Model fit statistics are not the same as substantive impact.
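As a rough illustration of that sensitivity check, the sketch below simulates dichotomous responses, fits a crude joint-maximum-likelihood Rasch estimate, sets aside the items with noticeably high outfit mean-squares, refits, and compares the two sets of person measures. The simulated data, the estimation routine, and the 1.5 outfit cut-off are illustrative assumptions only, not Wright's own procedure or any particular program's implementation.

import numpy as np

rng = np.random.default_rng(0)

def simulate(n_persons=200, n_items=20):
    """Simulate dichotomous Rasch data, with the last two items pure noise."""
    theta = rng.normal(0.0, 1.0, n_persons)        # person measures
    b = np.linspace(-2.0, 2.0, n_items)            # item difficulties
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    x = (rng.random((n_persons, n_items)) < p).astype(float)
    x[:, -2:] = (rng.random((n_persons, 2)) < 0.5).astype(float)  # misfitting items
    return x

def jmle(x, n_iter=50):
    """Crude joint maximum-likelihood estimation for the dichotomous Rasch model."""
    n_persons, n_items = x.shape
    theta = np.zeros(n_persons)
    b = np.zeros(n_items)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        resid, var = x - p, p * (1.0 - p)
        theta = np.clip(theta + resid.sum(axis=1) / var.sum(axis=1), -6, 6)
        b = np.clip(b - resid.sum(axis=0) / var.sum(axis=0), -6, 6)
        b -= b.mean()                              # identify the scale
    return theta, b

def item_outfit(x, theta, b):
    """Outfit mean-square: mean squared standardized residual for each item."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return ((x - p) ** 2 / (p * (1.0 - p))).mean(axis=0)

x = simulate()
theta_all, b_all = jmle(x)
outfit = item_outfit(x, theta_all, b_all)
keep = outfit < 1.5                                # a common rule-of-thumb cut

theta_trimmed, _ = jmle(x[:, keep])
print("items set aside:", np.where(~keep)[0])
print("person measures, all items vs. trimmed:",
      round(float(np.corrcoef(theta_all, theta_trimmed)[0, 1]), 3))

With only a couple of noisy items among twenty, the two sets of person measures should agree closely, which is the point of the comparison: flagging statistical misfit and demonstrating a substantive consequence are two different things.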

Theme 3: Willingness to Establish Standards without the Protections of Rigorous Testing.

Spence et al. remark: "end users, in general, know little about whether methods are accurately implemented in [computer] programs or how to recognize when the program has failed to give the correct answer. These shoddy standards for validation and calibration of tools almost certainly contribute to a climate in which it is extremely difficult to decide which methods are working and which are not. This deprives us in part of the single most important protective facet of empirical work: the proof should be in the pudding! However, what if one has no definition of what constitutes a palatable pudding?"

We have encountered sometimes-humorous examples of this at conferences over the years. A presenter would show us an item hierarchy, but without any indication of which direction corresponded to "more of the latent variable". Even the presenter didn't know! Soon the audience would divide into two camps, "The top is more of the variable!" vs. "The bottom is more!", each with good supporting rationalizations. Finally, someone would notice that Appendix 3 of the paper included a fragment of the original survey instrument. The dispute would be settled, but the audience was left bemused.

Rough prediction of results in advance of analysis is a powerful cross-check on software functioning, as the sketch below illustrates. Is the sample expected to exhibit much or little of the latent variable? Is it expected to be homogeneous or diverse? What will the general form of the item hierarchy be? Are the rating categories intended to correspond to wide or narrow slices of the variable?
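One way to make such predictions operational is to write them down before the run and compare them with the output afterwards. The fragment below is a minimal, hypothetical sketch; the item names, predicted ordering, raw scores, and reported values are invented for illustration, not taken from any real analysis or program.

import numpy as np

# Predicted item hierarchy, written down before the analysis (easiest to hardest),
# and the difficulties the software later reports; all values are hypothetical.
predicted_order = ["walk", "climb stairs", "jog", "run marathon"]
reported_difficulty = {"walk": -1.8, "climb stairs": -0.4,
                       "jog": 0.7, "run marathon": 2.1}

est = np.array([reported_difficulty[item] for item in predicted_order])
print("hierarchy matches prediction:", bool(np.all(np.diff(est) > 0)))

# Direction check: if "up" really means more of the latent variable,
# person measures should increase with raw scores.
raw_scores = np.array([5, 9, 12, 18, 22])
person_measures = np.array([-1.2, -0.3, 0.2, 1.1, 1.9])
print("measures rise with raw scores:",
      bool(np.corrcoef(raw_scores, person_measures)[0, 1] > 0))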

Theme 4: The Unfortunate Development of a "Cult of Personality".

"Reliance of an entire field on the recommendations or prejudices of a handful of individuals has, in the history of science as a whole, proved to be a very poor method of moving closer to the truth." (Spence et al.).

It is annoying to read published papers advocating, but misrepresenting, Rasch methodology. Yet this is far better than reading a succession of papers parroting the "party line". What is perceived to be a misrepresentation may be a deeper insight or a different perspective, perhaps even the first step towards the next breakthrough. Georg Rasch himself perceived progress to lie in a certain direction: "It is to be hoped, however, that ... contributions from others will gradually enlarge the field where fruitful models can be established" (Rasch, 1980, p. xxi). Happily, this hope continues to be fulfilled. But areas he merely touched upon, such as the investigation of construct validity and the systematic diagnosis of local misfit, are now prime reasons for the adoption of Rasch techniques. Indeed, it may be that the philosophy of Rasch measurement has greater impact than its mathematics, a phenomenon already witnessed in the work of Newton and Einstein.

Spence M.A., Greenberg D.A., Hodge S.E., Vieland V.J. (2003) The Emperor's New Methods. Am. J. Hum. Genet. 72:1084-1087.


Bad Things Can Happen to a Good Field! W. Fisher, M.A. Spence et al. … Rasch Measurement Transactions, 2003, 17:1, 917.


