Science progresses by dialogue between the inner world of abstract ideas and the outer world of concrete experience. Ideas are hypothetical guidelines. Experience brings them to life.
Experience is tangible. But it needs ideas to become useful. Raw experience is chaotic. Guidelines are necessary to organize perceptions of reality, to make them recognizable, and to make some sense of them.
Science is the conversation between ideas and experience. We require ideas to recognize reality. We require experience to nudge ideas into new shapes. This dialogue is the relationship between ideas and experience out of which all knowledge evolves. This is the method of science. Science is not facts but method, not data but interpretation of data, not just experience but also idea.
The scientific tasks of psychometrics are pursued by working through a dialogue of five successive procedural models. Articulating these models enables focus and reduces confusion. Explicating a model for each step focuses attention on the specific problem addressed at that step.
| Model Step | Numerical Level | Primary Activity |
|---|---|---|
| 1. Observing | Nominal | Determining what to observe and what to overlook. |
| 2. Scoring | Ordinal | Determining which ordinal scoring of the observation categories provides the most informative comparisons. |
| 3. Measuring | Interval | Calibrating items, measuring persons, and evaluating fit, and so defining the construct. |
| 4. Analyzing | Relational | Investigating the relationships among measures and tracking processes. |
| 5. Applying | Practical | Applying results back to the initial problems and forward to new ones. |
1. The Observing Model sets the standards for data production and establishes the first level of quality control. When Isherwood (1939) imagines, "I am a camera with its shutter open, quite passive, recording, not thinking," he supposes that we are able to observe without thinking. But to observe and record is to think about what to observe and what to record. Isherwood's camera is pointed in a direction, has a focal length, and responds to a particular wave band. We are unable merely to observe.
To make observations is to select what to attend to. Even more important, to observe is to select what not to attend to. Data does not exist of itself. It must be conceived and produced. To produce data requires deciding what to address, when, and how. Data is not found. It is anticipated in imagination and constructed in action. Like any product, its manufacture needs quality control.
Quality control over data production means continuous monitoring. The Observing Model nominates the qualities to be sought and recorded. It specifies what is to be looked for and what is to be counted.
In psychometrics, the response requirements involved in collecting data influence item difficulty, e.g., double negatives, reversals, and conditionals. Sometimes item intention is so overwhelmed by an ambiguous response process that we do not obtain a reproducible item difficulty, only a variety of local difficulties provoked by interactions between the eccentricities of respondents and the peculiarities of the item.
2. The Scoring Model addresses the observed data as though it were nothing but comparisons of ordered alternatives, like 0/1 for dichotomous responses and 0/1/2/3 ... for ranks and rating scales. The stochastic model by which these data are given inferential meaning is based on defining transition odds for successive categories in terms of a few conjointly additive parameters. To proceed in this direction, it is necessary to determine which of the various ordinal scorings that the particular categories offer is most useful for measurement, that is, which scoring format provides the most information. The Scoring Model chosen specifies the more and less comparisons used to infer measurement from the observations (Wright & Stone, 1996).
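One standard way to write such transition odds (the notation here is ours, not spelled out in the text: $F_k$ is an assumed step parameter for the transition from category $k-1$ to $k$, while $B_n$ and $D_i$ anticipate the person and item parameters of the Measuring Model below) is:

$$
\log\!\left(\frac{P_{nik}}{P_{ni(k-1)}}\right) = B_n - D_i - F_k
$$

where $P_{nik}$ is the probability that person $n$ responds in category $k$ of item $i$.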
3. All useful Measuring Models, however complicated they appear, reduce to a Rasch formulation: an equation connecting the logs of category transition odds for observable events to conjointly additive parameters designed to explain those odds (Rasch, 1960, 1980; Wright & Stone, 1979). The stochastic model interprets the data as instances of probabilities.
The simplest Rasch model identifies parameters Bn for person ability and Di for item difficulty. Their difference (Bn - Di) is defined to govern the probability of what is expected to happen when person n uses their ability Bn against the difficulty Di of item i. The data are interpreted as independent of the distribution of the other Bn, and the measures of Bn are independent of the distributions of Di. The log-odds function establishes a linear scale and the parameter separation establishes generality (Wright & Stone, 1979).
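Written out, this simplest (dichotomous) Rasch model is:

$$
\log\!\left(\frac{P_{ni}}{1 - P_{ni}}\right) = B_n - D_i ,
\qquad
P_{ni} = \frac{\exp(B_n - D_i)}{1 + \exp(B_n - D_i)} ,
$$

where $P_{ni}$ is the probability that person $n$ succeeds on item $i$. Four summary indices then monitor how well data and model cooperate: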
a) Person separation indicates the ability of the items to separate measures of these persons.
b) Item separation indicates the degree to which a variable has been defined by these persons.
c) Item fit evaluates the relevance of each item to the conjoint variable.
d) Person fit evaluates the validity of each person measure and directs response diagnosis.
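A minimal sketch of how such separation and reliability summaries are commonly computed from measures and their standard errors (the convention, function, and data below are illustrative assumptions, not taken from the paper):

```python
import math

def separation_and_reliability(measures, standard_errors):
    """Separation and reliability from measures and their standard errors.

    Assumes the common convention: true variance = observed variance minus
    mean squared error; separation = "true" SD / root mean square error;
    reliability = true variance / observed variance.
    """
    n = len(measures)
    mean = sum(measures) / n
    observed_var = sum((m - mean) ** 2 for m in measures) / (n - 1)
    error_var = sum(se ** 2 for se in standard_errors) / n  # mean squared error
    true_var = max(observed_var - error_var, 0.0)
    separation = math.sqrt(true_var / error_var)
    reliability = true_var / observed_var
    return separation, reliability

# Illustrative person measures (logits) and standard errors, not real data.
measures = [-1.2, -0.4, 0.1, 0.9, 1.6]
errors = [0.45, 0.40, 0.38, 0.41, 0.47]
print(separation_and_reliability(measures, errors))
```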
4. The Analyzing Model cannot do its work until we have satisfied the requirements of the first three models. Premature analysis of data mistaken for measures, without taking into account and using the models for observing, scoring, and measuring, only confounds results with an inextricable mass of unidentified and uncontrolled interactions.
In the Analyzing Model we study process and relation. We determine the implications of the measures we derive from our observations and investigate how these measures relate to other variables also measured by applying the observing, scoring and measuring models.
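For instance, once the Measuring Model has produced linear person measures, ordinary statistical tools can be applied to them directly; a hypothetical sketch (the variables and values are invented for illustration):

```python
# Once persons are measured on a linear (logit) scale, ordinary statistical
# tools apply directly to the measures. Data below are illustrative only.
from statistics import correlation, linear_regression  # Python 3.10+

reading_measures = [-1.1, -0.3, 0.2, 0.8, 1.5, 2.0]   # Rasch person measures (logits)
hours_of_instruction = [10, 14, 18, 22, 30, 35]        # an external variable

r = correlation(hours_of_instruction, reading_measures)
slope, intercept = linear_regression(hours_of_instruction, reading_measures)
print(f"r = {r:.2f}, slope = {slope:.3f} logits per hour")
```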
5. The Applying Model follows analysis of the measures constructed from observations. In this step, we apply the results obtained to the problems that initially provoked our investigations and also to new situations. The Applying Model brings the prior steps into focus and use. It orients the prior models to an outcome.
The scientific productivity of the five models depends on the vitality of their stepwise reconciliation of idea and experience. The models articulate a dialogue proceeding forwards and backwards as we apply what has been clarified by one model to expediting the tasks of another. Quality control and continuous monitoring are essential.
The organizer that integrates the five models is the MAP of the Variable. The MAP begins as an idea about experience, an expectation, and a plan. The results from applying the models are incorporated in the MAP. The MAP coordinates and explains the idea by illustration, conceptually and experientially. The MAP portrays the status of results achieved, pictures what has been accomplished, and identifies what remains to be done. Successful mapping brings ideas and experience together in a visual manifestation and synthesis of the dialectic process.
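A minimal sketch of such a map, assuming persons and items calibrated on a common logit scale (names and calibrations here are illustrative only, not from the paper):

```python
# A rudimentary text "map of the variable": persons and items located on
# one common logit scale. Calibrations below are illustrative only.
persons = {"Person A": -1.5, "Person B": -0.5, "Person C": 0.3, "Person D": 1.2}
items = {"easy item": -1.0, "middling item": 0.0, "hard item": 1.4}

rows = sorted(
    [(m, name, "person") for name, m in persons.items()]
    + [(m, name, "item") for name, m in items.items()],
    reverse=True,
)
print(f"{'logit':>6} | {'persons':<10} | items")
for measure, name, kind in rows:
    left = name if kind == "person" else ""
    right = name if kind == "item" else ""
    print(f"{measure:>6.1f} | {left:<10} | {right}")
```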
Benjamin D. Wright & Mark H. Stone, May 2003 (original paper 1996)
Isherwood, C. (1939). Goodbye to Berlin. New York: Random House.
Rasch, G. (1980). Probabilistic models for some intelligence and attainment tests. Chicago: The University of Chicago Press. (Original work published 1960)
Wright, B. D. & Stone, M. H. (1979). Best test design. Chicago: MESA Press.
Wright, B. D. & Stone, M. H. (1996). Measurement essentials. Wilmington, DE: Wide Range, Inc.
Five Steps to Science: Observing, Scoring, Measuring, Analyzing, and Applying. B. D. Wright & M. H. Stone. Rasch Measurement Transactions, 2003, 17:1, 912-913.