This volume, published by Ablex (Norwood, NJ, 1996), emanated from IOMW7 (Atlanta, 1993). Its Preface, by editors George Engelhard, Jr. and Mark Wilson, summarizes 22 authoritative chapters (35 authors). These chapters suggest ideas and approaches that stimulate our preparation for IOMW9.
Philosophical concerns are not central in Vol. 3, as they were in Vols. 1 and 2. It is practical concerns that dominate. One is to make the outcome of measurement more useful. Another is to make measurement more adaptable.
Chapter 7, "Judge Performance Reports: Media and Message" (J. Stahl & M. Lunz) addresses the pivotal issue in performance assessment: How can we monitor, diagnose, control and improve judge rating behavior? The first step is to treat raters as intelligent humans (rather than rating machines). The second is to give them feedback they can understand and use to modify their own behavior. Pages 120-121 show quality-control charts that raters can act on, charts W. E. Deming would be proud of. What is the third step?
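In the spirit of those Deming-style charts, a minimal sketch of the underlying idea (my illustration, not the chapter's code or data) is to flag any rater whose estimated severity falls outside control limits set around the group mean. The rater labels, severity values, and standard error below are hypothetical:

```python
import statistics

def flag_raters(severities, se, k=2.0):
    """Flag raters whose estimated severity (in logits) lies more than
    k standard errors from the group mean, i.e. outside the control limits."""
    center = statistics.mean(severities.values())
    return {r: s for r, s in severities.items() if abs(s - center) > k * se}

# Hypothetical severity estimates for five raters, with a common SE of 0.15
severities = {"R1": 0.05, "R2": -0.10, "R3": 0.12, "R4": 0.85, "R5": -0.02}
print(flag_raters(severities, se=0.15))  # → {'R4': 0.85}
```

R4 is the rater a quality-control chart would single out for feedback; the others vary only within the expected noise.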
Chapter 8, "Examining Changes in the Home Environment..." (J. Monsaas & G. Engelhard, Jr.) intrigues us with a variety of graphical devices for presenting results. The juxtaposition on p. 132 of a Table (of reliability coefficients) and a Figure (depicting time effects) convinces the reader that while a Figure is memorable, a Table is forgotten, even as it is read.
The struggle to make measurement more flexible is conducted on several fronts. Chapter 9, "... Mixed Coefficients Multinomial Logit" (R. Adams & M. Wilson) addresses "the problem of finding an appropriate model to suit the structure of the context." The challenge will be to communicate the results of this mathematical tour de force and make them useful. Van Duijn & Jansen (1995) overcome some awkward features of Poisson counts adroitly, but their solution involves gamma and Dirichlet distributions. They too are hampered by communication problems. Even a renowned expert on the Dirichlet distribution was unable to draw me a picture of it.
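For readers unfamiliar with the model under discussion, the Rasch Poisson Counts model (as I understand it; notation varies across authors) takes the count for person $v$ on test $i$ as Poisson, with an expected count that is the product of the person's ability and the test's easiness:

```latex
P(X_{vi} = x) \;=\; \frac{e^{-\lambda_{vi}}\,\lambda_{vi}^{\,x}}{x!},
\qquad
\lambda_{vi} \;=\; \theta_v\,\epsilon_i
```

The extensions discussed above place distributional assumptions (gamma, Dirichlet) on these parameters, which is precisely where the communication difficulty arises.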
Chapter 15, "Item Component Equating" (R. Smith) builds on ideas explored by Gerhard Fischer. Measuring the parts can be more useful than measuring the whole. Practicality is the problem. Can designs be developed that allow parts to be embedded in different contexts and then used to link diverse wholes? This chapter evokes a new approach to test equating based on the components of test items, situations, tasks, judges(?). Who will put it to use?
Chapter 18, "Constructing Questionnaires..." (E. Roskam & N. Broers) presents one way in which items can be designed around component parts with the intention of measuring and learning from the parts rather than the items. This suggests that instead of trying to decompose existing items into parts, it will be more fruitful to construct items from parts. Chapter 18 also demonstrates how failure of parts to predict the difficulty of wholes stimulates further investigation into the nature of the variable.
At first glance, Chapter 22, "...Selection Methods for Optimal Test Design" (M. Berger, W. Veerkamp) appears anachronistic in an age of computer-adaptive testing and performance assessment. But the test designs on p. 440 have a marked similarity to judging plans. Paper-and-pencil tests could be expensive, but performance assessment is far more so. Optimal large-scale, minimum-cost judging plans are now demanded by education administrators. Can the techniques of this and other Chapters provide these plans?
Since Ben Wright first formulated Chapter 13, "Composition Analysis", I have been struck by how often we frail humans use the wrong approach to solve problems. As p. 250 illustrates, when a problem is hard for a group to solve, the group should resort to "pack" work: everyone trying to come up with a solution independently. Instead, in difficult times the theme is always "unity": we walk in lock-step in a futile attempt to preserve what we have. On the other hand, when a problem is easy, then "team" work, consensus, is most effective. Instead, we say "That's easy! I don't need anyone else's advice." and proceed to blunder. This chapter illustrates how, as we understand measurement, we understand ourselves.
Vol. 3 is an excellent example of pack work!
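The advantage of pack work on hard problems can be illustrated with a simple independence calculation (my sketch, not the chapter's analysis): if each of n members independently solves the problem with probability p, the chance that someone in the group succeeds is 1 - (1-p)^n.

```python
# Toy illustration (mine, not the chapter's): under independence,
# "pack" work, everyone attempting a hard problem separately,
# multiplies the group's chance of finding a solution.
def pack_success(p, n):
    """Probability that at least one of n independent attempts,
    each succeeding with probability p, finds a solution."""
    return 1 - (1 - p) ** n

# Hard problem (p = 0.1): five independent attempts lift the group's
# chance of a solution from 10% to about 41%.
print(round(pack_success(0.1, 5), 2))  # → 0.41
```

For an easy problem the gain is negligible, which is consistent with the chapter's point that consensus "team" work suffices there.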
Van Duijn, M.A.J., & Jansen, M.G.H. (1995). Modeling repeated count data: some extensions of the Rasch Poisson Counts model. Journal of Educational and Behavioral Statistics, 20(3), 241-258.
Linacre, J.M. (1996). Objective Measurement: Theory into Practice, Volume 3, as a Provocation to Thought. Rasch Measurement Transactions, 10(1), 486.