"Measurement is the Achilles' heel of sociobehavioral research. Although most programs in sociobehavioral sciences ... require a medium of exposure to statistics and research design, few seem to require the same where measurement is concerned ... It is, therefore, not surprising that little or no attention is given to the properties of the measures used in many research studies."
Pedhazur, E.J., & Schmelkin, L.P. (1991) Measurement, Design and Analysis: An Integrated Approach. Hillsdale, NJ: Erlbaum. (pp. 2-3).
Also quoted in Kieffer, K.M. (1999) Why generalizability theory is essential and classical test theory (CTT) is often inadequate. In B. Thompson (Ed.) Advances in Social Science Methodology. Vol. 5. Stamford, CT: JAI Press.
Unfortunately, neither the original authors nor the quoting author appears to realize that an essential property of a useful measure is linearity.
"It was very unfortunate that there was a definite antagonism between [Jerzy] Neyman and [Ronald A.] Fisher. In 1934 Neyman had given his famous paper on sampling methods at the Royal Statistical Society that brought out Fisher's wrath. And this wrath continued at University College [London] during my [Churchill Eisenhart's] time there (1935-37). Fisher's approach to teaching and writing on methods was: I'll tell you what to do, and you leave it up to me what the basic theory is. But then he wouldn't always tell you all the relevant facts of the theory. He would be lecturing, say, on factorial design, and would never mention the importance of additivity. Someone would tell Neyman about this, and in Neyman's next lecture on probability he'd digress and give a bitter discourse on Professor Fisher and his factorial design."
[Who were the faculty members?] "In our group [downstairs] there [were Jerzy Neyman and] B.L. Welch. M.S. Bartlett had been there, but he had left. ... Upstairs with Fisher, there was W.L. (Tony) Stevens, Professor Paul Rider (of Washington University, St. Louis) and Professor George Rasch (Copenhagen). There was some intercourse between the two floors among the students, but you had to change your language when you went from one floor to the other. You would talk about inductive behavior with Neyman. You talked about fiducial inference when you were with Fisher."
Excerpted from Olkin, I. (1992) A conversation with Churchill Eisenhart. Statistical Science, 7(4), 514-515.
Messy data motivate inscrutable models:
Here is Martha Stocking's summary of Item Response Theory, the statistical methodology of Frederic M. Lord (1912-2000). "Building statistical models is just like this. You take a real situation with real data, messy as this is, and build a model that works to explain the behavior of real data." (New York Times, 2-10-2000)
Simple models construct intelligible data:
Georg Rasch summarizes his work as follows: "The concepts of measurement introduced through the definition of the two types of parameters [item difficulty and person ability] differ radically from those employed in the psychometric theory of mental tests - we may, as an instance in point, just mention that in our theory we have no reason for considering a normal distribution of test scores as evidence for equality of units. Our concepts are more akin to psychophysical measurements in so far as these are concerned with individuals, each observed on several occasions. The most conspicuous feature of our concepts, however, seems to be that they, in a certain well defined sense, carry the same conceptual status as mass and force in classical physics." (Probabilistic Models, page 4.)
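The two parameter types Rasch names here combine in a single expression: in the dichotomous Rasch model, the probability of success depends only on the difference between person ability and item difficulty, both expressed in logits. A minimal sketch (the function name is illustrative, not from the source):

```python
import math

def rasch_probability(ability, difficulty):
    """Dichotomous Rasch model: probability of a correct response,
    given person ability and item difficulty in logits."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability equals difficulty, the probability is exactly 0.5,
# no matter where on the scale the pair sits -- the sense in which
# the parameters behave like quantities in classical physics.
p_matched = rasch_probability(1.0, 1.0)   # 0.5
p_easier = rasch_probability(2.0, 1.0)    # greater than 0.5
```

Because only the difference (ability minus difficulty) enters the model, the metric is linear: a one-logit advantage has the same meaning everywhere on the scale, which is what a normal distribution of raw scores could never guarantee.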
"[Duns] Scotus's [?-1308] argument for an intelligible species [grouping] was that where an agent acts directly upon an object, together they suffice to produce an effect. ... Ockham [1270-1349] on the other hand used the same argument of the concurrence of agent and object to prove the minor [conclusion] that they suffice to produce intuitive knowledge as the effect without the need of anything else." [Emphasis author's.]
Gordon Leff (1975) William of Ockham: The Metamorphosis of Scholastic Discourse. Manchester, UK: Manchester Univ. Press. p. 35.
"The U.S. health care system is a $1 trillion industry without a definition of its product. Until population outcome measures are developed and rewarded for, we will not solve the twenty-first century challenge of maximizing health outcome management for the resources available."
David A. Kindig (1999). Purchasing population health: Aligning financial incentives to improve health outcomes. Nursing Outlook, 47, 15-22.
Contributed by William P. Fisher, Jr.
"In this example, items 1-12 [of a Quality of Life instrument] are plotted according to calibrated `difficulty'. The fact that they all fall within the 95% statistical control lines (dotted) indicates absence of bias [between the test versions.]" [To draw these plots, see Best Test Design, Wright & Stone, 1979, p. 93-5.]
Cella, D.F., Lloyd, S.R., Wright, B.D. (1996) Cross-cultural instrument equating: current research and future directions. Chapter 73 in B. Spilker (Ed.) Quality of Life and Pharmacoeconomics in Clinical Trials. (2nd. Ed.). Philadelphia: Lippincott-Raven.
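One common way to construct such a check (a sketch under stated assumptions; the function name is illustrative): for each item, compare its two calibrations against their joint standard error. An item whose standardized difference exceeds 1.96 falls outside the 95% control lines and is flagged for possible bias between versions.

```python
import math

def flag_item_bias(calib_a, se_a, calib_b, se_b, z_crit=1.96):
    """Compare one item's calibrations (in logits) from two test
    versions. Returns (flagged, t), where t is the difference in
    calibrations standardized by the joint standard error."""
    t = (calib_a - calib_b) / math.sqrt(se_a ** 2 + se_b ** 2)
    return abs(t) > z_crit, t

# An item calibrated at 0.4 logits (SE 0.15) on version A and
# 0.5 logits (SE 0.15) on version B lies well inside the 95%
# control lines: no evidence of bias.
flagged, t = flag_item_bias(0.4, 0.15, 0.5, 0.15)
```

Plotting each item's pair of calibrations against the identity line, with the control lines drawn at ±1.96 joint standard errors, reproduces the kind of figure the quotation describes.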
"As we use our tools, we constantly remake them. Recent years have seen the remaking of a good many ... tools and the forging of some new ones. Those of us who have participated in this effort ought to feel a bit uneasy. To the extent that our product succeeds ..., it is likely to become another one of those tools that limits subjects for future study and constrains the ways in which those subjects will be studied. Either that or it will continually threaten to undo itself - to undo what we claim to know by questioning the bases on which we claim to know it. In the end we can only hope to be honest in our account of ... the past without, however, restricting ... the future."
Don Michael Randel, President-elect, University of Chicago (1992) The canons in the musicological toolbox. In K. Bergeron & P.V. Bohlman (Eds.) Disciplining Music. Chicago: U. Chicago Press.
Chaos is the natural state of things. Order must be firmly imposed. The "fit of the data to the model", i.e., "statistical control", must be enforced, if data are to guide the future.
"An inference, if it is to have scientific value, must constitute a prediction concerning future data. If the inference is to be made purely with the help of the distribution theory of statistics, the experiments that constitute evidence for the inference must arise from a state of statistical control; until that state is reached, there is no universe, normal or otherwise, and the statistician's calculations by themselves are an illusion if not a delusion. The fact is that when distribution theory is not applicable for lack of control, any inference, statistical or otherwise, is little better than a conjecture. The state of statistical control is therefore the goal of all experimentation."
W. Edwards Deming in W.A. Shewhart, 1939, Statistical Method from the Viewpoint of Quality Control. Washington: Dept. of Agriculture. p. iii.
Since statistical control is not the natural state of things, it must be imposed and then verified. The analyst must be ruthless with the data and Draconian with the process in order to enforce statistical control. Only then can the future be predicted with confidence. Walter Shewhart came to see this in the 1920s during his work to improve the quality of mass-produced goods. W. Edwards Deming extended this work into wider fields of business management at all levels.
In social science, researchers are taught that the data are sacrosanct and that the process must not be altered, or tampered with, during the experiment. In industry and the physical sciences, an aberrant observation prompts immediate investigation and corrective action, even as production and experimentation continue. Social science experiments are not conducted under conditions of statistical process control. Quality-oriented industrial operations are.
What if we discover that a child is guessing on a math test, or that a patient is responding carelessly to a quality-of-life survey? Then those data are useless for inference. If we really cared about the child or the patient (at least to the extent that manufacturers care about their products), we would reject the problematic data, retest the child or re-interview the patient, and change the data-collection instrument or process.
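Guessing and carelessness can be detected statistically. One standard screen in Rasch analysis is the unweighted (outfit) mean-square fit statistic: the average squared standardized residual across a person's responses. A minimal sketch, assuming dichotomous responses and item difficulties already anchored in logits (the function name is illustrative):

```python
import math

def outfit_meansquare(responses, ability, difficulties):
    """Unweighted (outfit) mean-square fit statistic for one person.
    responses: list of 0/1 scored answers.
    ability: person measure in logits.
    difficulties: anchored item difficulties in logits.
    Values near 1.0 indicate responses consistent with the model;
    values far above 1.0 flag unexpected responses, such as lucky
    guesses on hard items or careless slips on easy ones."""
    sq_std_residuals = []
    for x, d in zip(responses, difficulties):
        p = 1.0 / (1.0 + math.exp(-(ability - d)))  # model expectation
        variance = p * (1.0 - p)
        sq_std_residuals.append((x - p) ** 2 / variance)
    return sum(sq_std_residuals) / len(sq_std_residuals)

# A person of ability 0 who unexpectedly succeeds on a very hard
# item (difficulty +3 logits) shows a sharply inflated outfit:
fit = outfit_meansquare([1, 1, 0, 1], 0.0, [-2.0, -1.0, 1.0, 3.0])
```

A responses record that fits the model yields an outfit near 1.0; the improbable success on the +3 logit item above pushes it well past conventional quality-control bounds, triggering exactly the investigate-and-retest response the paragraph advocates.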
John M. Linacre
Quotations and Notations. Rasch Measurement Transactions, 2000, 13:4, p. 715.
The URL of this page is www.rasch.org/rmt/rmt134b.htm