"Seeking a Balance Between the Statistical and Scientific Elements in Psychometrics", July 2012, Lincoln, NE
In his recent Presidential Address to the International Meeting of the Psychometric Society, Mark Wilson contrasted statistical and scientific themes in psychometrics through the history of his own work, with the larger goal of identifying scientific aspects of psychometrics that distinguish it from statistical modeling. A paper based on the presentation is forthcoming in Psychometrika in early 2013.
Early in his career, Mark developed the Saltus model of discontinuous development, which was and remains a highly innovative and effective guide to measuring cognitive growth. The work was done in isolation, however, and the model is fairly complex; it was not informed by input from anyone engaged with the substantive practicalities of research in cognitive development. Thus, Mark pointed out, his publications in this area are rarely cited and the model has had little or no impact.
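For readers unfamiliar with it, the Saltus model extends the Rasch model by adding shift parameters indexed by developmental stage and item class. The following is a sketch of the dichotomous case; the notation here is illustrative and may not match Wilson's published presentation exactly:

```latex
% Probability that person p, developing at stage h, succeeds on item i,
% where k(i) is the class to which item i belongs:
P(X_{pi} = 1 \mid \theta_p, h) =
  \frac{\exp\!\left(\theta_p - \beta_i + \tau_{h\,k(i)}\right)}
       {1 + \exp\!\left(\theta_p - \beta_i + \tau_{h\,k(i)}\right)}
% \theta_p : person proficiency;  \beta_i : item difficulty;
% \tau_{hk} : advantage (or deficit) that stage-h persons show on class-k
% items, with anchoring constraints such as \tau_{1k} = \tau_{h1} = 0.
```

The discontinuity of development is carried by the tau matrix: when a person crosses a stage boundary, whole classes of items shift in difficulty at once, rather than item by item.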
This situation is quite different from Mark's more recent work, in which engagement with substantive content experts is an essential ingredient. Now, the models and construct theories implicitly or explicitly included in curricular outcomes assessments are articulated, developed, and applied in collaboration with experts in the substantive area. As Mark pointed out, there is no need to dumb down model complexity in this context: many projects naturally entail multiple constructs, manifest at multiple levels of organization and/or with multiple facets, and assessed via many different types of items or performance rating schemes, some of which involve testlets or item bundles and their local dependencies. Mark illustrated the point with one of these new collaborations: a middle and high school statistics and data modeling curriculum developed at Vanderbilt University, involving a number of separate but interdependent strands. To understand just what was intended for the assessment, and to formulate a plan adequate to the needs of both instruction and accountability:
A. detailed theoretical maps of each construct's levels and sublevels were laid out;
B. items were designed to express each level of each construct, and with an eye to the interrelations between those constructs;
C. the scoring of the items, distractors and mistaken responses was set up to inform individualized instructional applications;
D. the measurement model appropriate to the overall assessment system was applied to a pilot data set; and
E. the new information on system performance was used to revise the construct map(s), the item design, the outcome space, and the measurement model for ongoing applications.

Though Wilson did not mention it, readers of his 2005 Constructing Measures text will recognize here the four building blocks of that book's systematic assessment methodology. And completing a round or two through the process certainly sets the stage for iterating through it once again, with the intention of taking the construct sublevels to a new level of specificity capable of affording predictive control over item design and scoring, as is suggested in his 2004 book with De Boeck on explanatory item response models.
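Step D, applying a measurement model to pilot data, can be sketched in miniature. The following is a minimal, self-contained illustration only, not the Vanderbilt project's actual model (which involves multiple constructs and richer item formats): it simulates dichotomous pilot responses and recovers Rasch item difficulties by a crude joint maximum-likelihood gradient ascent.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pilot data: 200 students responding to 10 dichotomous items.
n_persons, n_items = 200, 10
theta_true = rng.normal(0.0, 1.0, n_persons)       # person proficiencies
beta_true = np.linspace(-1.5, 1.5, n_items)        # item difficulties
p_true = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - beta_true[None, :])))
X = (rng.random((n_persons, n_items)) < p_true).astype(float)

def fit_rasch(X, n_iter=500, lr=0.5):
    """Crude joint maximum-likelihood fit of the dichotomous Rasch model
    via gradient ascent; adequate for a sketch, not for production use."""
    theta = np.zeros(X.shape[0])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        pred = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        resid = X - pred                   # observed minus expected
        theta += lr * resid.mean(axis=1)   # ascend person likelihoods
        beta -= lr * resid.mean(axis=0)    # ascend item likelihoods
        beta -= beta.mean()                # anchor the scale origin at zero
    return theta, beta

theta_hat, beta_hat = fit_rasch(X)
print(np.round(beta_hat, 2))  # estimates roughly track the generating difficulties
```

In practice a revision cycle like step E would also examine item and person fit statistics before the construct maps and outcome space are reworked; this sketch stops at parameter recovery.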
In conclusion, Wilson raised again the question of just how psychometrics is to be more than a specialized branch of statistics if it does not capitalize on the practical opportunities for measurement it has created for itself in education, psychology, and other fields. There seems to be great potential for integrating qualitative substantive theory and practice with quantitative methods and modeling. Perhaps a recognizable new paradigm is now in the process of forming.
William P. Fisher, Jr.
Mark Wilson's Psychometric Society Presidential Address. William P. Fisher, Jr. Rasch Measurement Transactions, 2012, 26:2, p. 1364
Rasch Measurement Transactions (free, online)
Rasch Measurement research papers (free, online)
Probabilistic Models for Some Intelligence and Attainment Tests, Georg Rasch
Applying the Rasch Model 3rd. Ed., Bond & Fox
Best Test Design, Wright & Stone
Rating Scale Analysis, Wright & Masters
Introduction to Rasch Measurement, E. Smith & R. Smith
Introduction to Many-Facet Rasch Measurement, Thomas Eckes
Invariant Measurement: Using Rasch Models in the Social, Behavioral, and Health Sciences, George Engelhard, Jr.
Statistical Analyses for Language Testers, Rita Green
Rasch Models: Foundations, Recent Developments, and Applications, Fischer & Molenaar
Journal of Applied Measurement
Rasch models for measurement, David Andrich
Constructing Measures, Mark Wilson
Rasch Analysis in the Human Sciences, Boone, Staver, Yale
Análisis de Rasch para todos, Agustín Tristán
Mediciones, Posicionamientos y Diagnósticos Competitivos, Juan Ramón Oreja Rodríguez
Rasch Measurement Forum to discuss any Rasch-related topic
The Rasch Measurement SIG (AERA) thanks the Institute for Objective Measurement for inviting the publication of Rasch Measurement Transactions on the Institute's website, www.rasch.org.
The URL of this page is www.rasch.org/rmt/rmt262c.htm