The Play of Metaphor in the Theory and Practice of Educational Measurement (4.12)
William P. Fisher, Jr., Louisiana SU Medical Center
Metaphor is increasingly recognized for the crucial role it plays in science. Even counting requires that differences between similar, but unique, entities be overlooked, meaning that each new unit added is interpreted figuratively, not literally, as the same as every other. Metaphor is scientific insofar as it is mathematical, in the Academy's root metaphysical sense of the "communicable" as something that can be taught and learned, and which therefore has a meaning that remains relatively stable across speakers, listeners, readers, and writers (Gadamer, 1980, 1989). Educational measurement applications typically interpret counts of correct answers or of performance assessment categories literally, as though every unit counted were identical with every other, without checking for consistent communication or mathematical invariance. This symposium shows how measurement models based on Rasch's Separability Theorem (Rasch, 1960, 1977), in contrast, treat counts of correct answers or of rating scale steps figuratively, requiring that data be evaluated for consistency and invariance before inferences are provisionally based on them.
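The Separability Theorem invoked here can be stated compactly. For the dichotomous Rasch model (a standard formulation, not spelled out in the abstracts themselves), the comparison of any two items is independent of the person parameter:

```latex
% Dichotomous Rasch model: person n with ability \theta_n,
% item i with difficulty \delta_i
P(X_{ni} = 1 \mid \theta_n, \delta_i)
  = \frac{e^{\theta_n - \delta_i}}{1 + e^{\theta_n - \delta_i}}

% Separability: the odds ratio for items i and j is free of \theta_n,
% so item comparisons are invariant across persons (and, symmetrically,
% person comparisons are invariant across items)
\frac{P(X_{ni}=1)/P(X_{ni}=0)}{P(X_{nj}=1)/P(X_{nj}=0)}
  = \frac{e^{\theta_n - \delta_i}}{e^{\theta_n - \delta_j}}
  = e^{\delta_j - \delta_i}
```

It is this invariance that licenses treating raw counts figuratively: the counts only support measurement when the data approximate this structure.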
The first paper draws out the crucial, but rarely examined, connection between metaphor and mathematical structure, developing the analogy with parameter separation in Rasch measurement. The second paper illuminates the integration of metaphor and mathematical structure in a developmental sequence of increasing hierarchical complexity via application of a Rasch model. The third paper presents theoretical criteria for recognizing the play of metaphor in educational measurement. The fourth paper shows how the metaphorical thread drawn through the sequence of developmental stages can itself be found to exhibit mathematical structure. Finally, the discussant remarks on the implications of this work for setting data quality standards and for variable-specific, scale- and sample-free universal metrics.
Using the Rasch Model to Assess the Implications of Exemplary School Practices: A Pilot Study of Florida's Middle Schools (27.56)
Michele Gregoire, Edward W. Wolfe, University of Florida
We designed and piloted a questionnaire that measures the "exemplary middle school construct" (George & Alexander, 1993) using Rasch measurement theory. Assistant principals (N=26) participated in telephone interviews by responding to a 28-item questionnaire that contains items reflecting school scheduling practices, team teaching, teacher planning, school philosophy, tracking, and other exemplary middle school practices. Our results show that schools with block scheduling exhibited more exemplary middle school practices than did those with traditional class scheduling. In addition, 58% of the sample exhibited exemplary middle school practices based on the most distinguishing questionnaire items.
Attention Deficit Hyperactivity Disorder: Scaling and Standard Setting using Rasch Measurement (27.56)
Everett V. Smith Jr., Rita T. Drenga, University of Illinois at Chicago; Kimberly A. Lawless, University of Utah
This paper explores the dimensionality of responses to the Adult Behavior Checklist - Revised, a screening assessment for Attention Deficit Hyperactivity Disorder (ADHD) in college students. A series of Rasch rating scale analyses support the interpretation of Inattention and Impulsivity/Hyperactivity variables. Principal component analyses of residuals identified the existence of secondary variables that may have clinical implications for the treatment of ADHD. A standard-setting process was employed to establish a cut-score for significant symptomatology. Judges generally displayed less variability than expected by the model. The derived standard was found to be more stringent than previously suggested cut-scores.
Influence of Gender and Time Facets on Ratings of Extended Performance Tasks (27.56)
Cynthia K. Louden, Thomas E. Brooks, Harcourt Brace Educational Measurement; John Tanner, Delaware Department of Education
Rasch partial credit scaling and Facets analysis were used in this study to investigate the effects of rater gender and scoring sequence on extended performance task scores in the Spring 1998 Delaware State Assessment Program. No gender differences existed in mathematics ratings, but women were slightly more lenient in rating language arts tasks. Scores did not change according to scoring sequence. Raters became faster and slightly more consistent as they completed more papers.
Forum: Explaining Latent Trait Models to Non-Specialists (32.23)
The first part of the SIG Business Meeting will be an interactive forum. Five facilitators will present and discuss professional situations that illustrate some of the trials, tribulations, challenges, and joys of explaining latent trait theory to non-specialists. The presentations are intended to spark dialogue and an exchange of experiences among participants. If you would like to share a question, problem, or experience, please contact Larry Ludlow.
Examining Construct Validity of Scores/Measures using Classical and Many-facet Rasch Analyses (53.50)
Madhabi Banerji, University of South Florida
Classical and three-facet Rasch analyses were combined to make decisions on item and scale quality, rater consistency, and utility of scores and measures from a developmental mathematics assessment for 8- to 12-year-olds. Field-test data (n=280) suggested that mean proficiency scores based on nine tasks generally increased with age. Student ability measures, adjusted for task difficulty and rater severity, showed a reasonable range. The calibrated task order was found to coincide with the original difficulty order of tasks, but gaps found on the item map indicated a need for new tasks. Misfit values for raters suggested a need for further rater training.
Rasch vs. Two- and Three-Parameter Logistic Models From the Perspective of Conjoint Measurement Theory (53.50)
George Karabatsos, Louisiana SU Medical Center
To construct quantitative (interval or ratio) measurement from ordinal observations, data must approximate the structural requirements of additive conjoint measurement (ACM). Rasch models are stochastic analogs of ACM because they specify non-crossing item characteristic curves (ICCs) with equal slopes. The two-parameter (2PL) and three-parameter (3PL) logistic models, however, allow ICCs to cross, thereby violating conjoint additivity. Yet both are offered as useful alternatives to Rasch models because they can better fit problematic data. Using data simulations, this study determines the frequency with which the three models could support interval-scale measurement by producing conjointly additive matrices.
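The non-crossing property can be illustrated with a short sketch (not from the study itself; item parameters are invented for demonstration). With equal slopes, the ordering of items by success probability is the same at every ability level; once slopes differ, as in the 2PL, the ordering changes across the ability range:

```python
import math

def icc(theta, b, a=1.0):
    """Item characteristic curve: P(correct) for ability theta,
    difficulty b, discrimination (slope) a."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ordering(thetas, items):
    """Rank item indices by success probability at each ability level."""
    return [tuple(sorted(range(len(items)),
                         key=lambda i: icc(t, *items[i]), reverse=True))
            for t in thetas]

thetas = [-2.0, 0.0, 2.0]
# (difficulty, slope) pairs -- hypothetical values for illustration
rasch_items = [(-1.0, 1.0), (0.0, 1.0), (1.0, 1.0)]   # equal slopes
twopl_items = [(-1.0, 0.5), (0.0, 1.0), (0.5, 2.0)]   # unequal slopes

rasch_orders = ordering(thetas, rasch_items)
twopl_orders = ordering(thetas, twopl_items)

# Rasch: one invariant ordering at all ability levels (ICCs never cross)
print(len(set(rasch_orders)) == 1)  # True
# 2PL: the ordering reverses across the ability range (ICCs cross)
print(len(set(twopl_orders)) == 1)  # False
```

The invariant row/column ordering in the Rasch case is exactly what a conjointly additive data matrix requires; the crossing ICCs of the 2PL/3PL break it.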
An Examination of Person Misfit in Five Affective Measures (53.50)
Erica M. Johnson, American College Testing
The purpose of this study was to explore the detection and classification of misfitting response patterns using Rasch person fit statistics and a proposed taxonomy of person misfit. Five affective measures were examined, and misfitting patterns were classified into seven taxonomy categories: inattentive, overattentive, early/late blooming, misleading, eccentric, idiosyncratic, and puzzled. Many misfitting patterns were classifiable, and classification varied across the five data sets. The results suggest that systematic classification of misfit is feasible and that, as a result, unusual data can be better understood.
Appropriateness of Asymptotic Standard Errors for Rasch Item Difficulty Estimates (53.50)
Richard M. Smith, Rehabilitation Foundation Inc.
Most calibration programs designed for the family of Rasch psychometric models report the asymptotic standard errors for person and item parameter estimates resulting from the calibration process. Although these estimates are theoretically correct, they may be influenced by any number of factors, such as restrictions due to the loss of degrees of freedom in the estimation process, offset between the mean person and item measures, and the presence of misfit in the data. Previous work indicated that asymptotic person standard errors were often inappropriate due to the presence of these factors. This study reports on the effect of these factors on the observed standard deviation of estimated item measures in simulated data and compares these results to the modeled asymptotic standard errors reported by the estimation program. The results indicate that the asymptotic standard errors are very close estimates of the observed standard deviation of the estimated measures and are not influenced by the factors studied.
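For reference, the asymptotic standard error at issue is, for a dichotomous Rasch item difficulty estimate, the inverse square root of the item information summed over persons — a standard result not spelled out in the abstract:

```latex
% Asymptotic standard error of the estimated difficulty of item i,
% summing binomial information over the N persons who took the item
SE(\hat{\delta}_i) \approx
  \left[ \sum_{n=1}^{N} P_{ni}\,(1 - P_{ni}) \right]^{-1/2},
\qquad
P_{ni} = \frac{e^{\theta_n - \delta_i}}{1 + e^{\theta_n - \delta_i}}
```

The factors the study examines (lost degrees of freedom, person-item offset, misfit) are potential reasons this formula could understate or overstate the empirical dispersion of the estimates.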
Multidimensional Analysis of a Physics Achievement Test (53.50)
Claus H. Carstensen, Gunnar Friege, Gunter Lind & Juergen Rost, IPN - Institute for Science Education at the University of Kiel, Germany
A problem-solving task in the domain of physics is analyzed. Its construction was guided by a four-dimensional design: the use of two different solution strategies was required in each of two different content areas. The analyses were made using the Multidimensional Item Component Rasch Model (MULTIRA), which is a generalization of the One Parameter Logistic Model (OPLM) to several latent traits. A two-dimensional Rasch model is found to explain the data as well as a one-dimensional model with discrimination parameters (OPLM) does, which may be due to the close relation between the dimensions.
AERA, Montreal 1999, Rasch Abstracts. Rasch Measurement Transactions, 1999, 12:4 p.
The URL of this page is www.rasch.org/rmt/rmt124c.htm