Fifteen years ago, Colin Elliott was an early user of Rasch methods in developing the British Ability Scales (BAS), an individually-administered cognitive battery for children. The BAS, described by Wright and Stone in the Ninth Mental Measurements Yearbook (1985), reports the standard error of the ability estimate at each raw score level, and offers a choice of several overlapping item sets for each sub-test.
The U.S. BAS revision, Differential Ability Scales (DAS), is published by The Psychological Corporation. This new test goes considerably further than the BAS in applying Rasch techniques. Colin Elliott and I (the Project Director) developed novel methods to deal with three traditional needs of individual ability testing:
* Testing must be kept brief, because tests are administered individually by busy examiners.
* Tests must be accurate over a wide range, since many children referred for testing are at the low or high end of the ability spectrum.
* Test accuracy must be communicated to typical users in terms of familiar reliability coefficients.
The DAS provides 20 sub-tests, including nonverbal reasoning, spatial ability, verbal ability, short-term memory, and speed of information processing. We calibrated each DAS sub-test with the Rasch program MSTEPS, using data from 4,500 children aged 2 through 17 years. MSTEPS's ability to handle unadministered (missing) item data was essential: it enabled one-step (concurrent) vertical equating of the overlapping item sets (Schulz 1988 RM 1:2). We compared this method with a pair-wise equating of within-level calibrations and found that the results were statistically equivalent.
We also used Rasch methods for our bias analyses. Sub-samples stratified by race/ethnicity, sex, and region were calibrated independently and compared. Items with improbable between-sample variations were flagged for study. Results were gratifyingly interpretable; for example, the picture-vocabulary item "cactus" was biased against children from the Northeast.
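The between-sample comparison can be sketched as a standardized difference of independently calibrated item difficulties. The difficulty and standard-error values below are invented for illustration, and the z = 2 flagging threshold is an assumption; the DAS analyses were based on full Rasch calibrations of each sub-sample.

```python
import math

def flag_dif(diff_a, se_a, diff_b, se_b, z_crit=2.0):
    """Flag an item whose Rasch difficulty (in logits) differs improbably
    between two independently calibrated sub-samples, using the
    standardized difference of the two estimates."""
    z = (diff_a - diff_b) / math.sqrt(se_a**2 + se_b**2)
    return abs(z) >= z_crit, z

# Hypothetical item: much harder for sub-sample A than for sub-sample B.
flagged, z = flag_dif(1.40, 0.12, 0.55, 0.11)
print(flagged, round(z, 2))  # flagged whenever |z| >= z_crit
```

An item flagged this way is not automatically deleted; as the "cactus" example shows, the flag is a prompt for substantive review of why the samples differ.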
The most useful application of Rasch methods was to enable adaptive testing. Each DAS sub-test is divided into several overlapping item sets. The examiner administers an initial set based on the examinee's age and expected ability. The examiner decides whether to stop or continue depending on the examinee's performance on the initial set. Typically, when the examinee passes at least three items and also fails at least three items in a set, testing stops. Otherwise an additional set of easier or more difficult items is administered, and another stop/continue decision is made. At the end, the examiner can convert the total raw score on all items from all sets administered to a Rasch ability estimate.
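The typical stop/continue rule described above can be sketched as a small decision function; the parameter names are ours, and the three-pass/three-fail threshold is simply the "typical" rule the text mentions.

```python
def stop_testing(responses, min_passed=3, min_failed=3):
    """Typical DAS-style stop rule: stop when the examinee has both
    passed and failed at least three items in the current item set.
    `responses` is a list of 1 (pass) / 0 (fail)."""
    passed = sum(responses)
    failed = len(responses) - passed
    return passed >= min_passed and failed >= min_failed

print(stop_testing([1, 1, 0, 1, 0, 0]))  # three passes, three fails: stop
print(stop_testing([1, 1, 1, 1, 1, 0]))  # only one fail: continue with a harder set
```

When the rule returns "continue", the examiner moves to an easier or harder overlapping set, and the total raw score over all administered items is converted to a single Rasch ability estimate.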
Most test users expect reliability coefficients as the indicators of precision. But since different examinees take different sets of items, the usual internal-consistency estimation methods are impossible to apply. In addition, test development involved using what was learned during standardization to improve item sequences and item-selection rules, so that reliabilities calculated from standardization data would not describe the accuracy of the final version.
Our solution was to simulate item selection using the item difficulties and person abilities estimated from the standardization data. The item-set selection rules were applied to each ability level in turn. The probability of success on each administered item was determined and, from these, the probability that each item set would be administered. Since the standard error corresponding to each score on each item set is known, weighting the corresponding error variances by their probability of occurrence yields an expected error variance for each ability level. Next, the distribution of ability levels within each age group is obtained from the standardization data. The reliability coefficient for each age group is one minus the average expected error variance divided by the observed ability variance. These coefficients were compared with coefficient alpha in two sets of (complete) real data, and agreed closely. Also, in six simulated data sets of 2,000 "cases", the "adaptive" reliability differed from the conventional reliability by no more than .01.
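The final step of this calculation can be sketched as follows. All numbers here (ability levels, their frequencies, item-set administration probabilities, and standard errors) are invented toy values; in the DAS work they came from the standardization data and the item-set selection rules.

```python
def adaptive_reliability(ability_levels, level_probs, set_probs, set_ses):
    """Reliability = 1 - (average expected error variance) /
    (observed ability variance). The expected error variance at each
    ability level is a probability-weighted average over the item sets
    that could be administered at that level."""
    # Expected error variance at each ability level.
    exp_err = [
        sum(p * se**2 for p, se in zip(probs, ses))
        for probs, ses in zip(set_probs, set_ses)
    ]
    # Observed ability variance across the group.
    mean = sum(p * a for p, a in zip(level_probs, ability_levels))
    var = sum(p * (a - mean)**2 for p, a in zip(level_probs, ability_levels))
    # Average expected error variance across the group.
    avg_err = sum(p * e for p, e in zip(level_probs, exp_err))
    return 1 - avg_err / var

# Three ability levels (logits), two candidate item sets at each level.
r = adaptive_reliability(
    ability_levels=[-1.0, 0.0, 1.0],
    level_probs=[0.25, 0.50, 0.25],
    set_probs=[[0.8, 0.2], [0.5, 0.5], [0.2, 0.8]],
    set_ses=[[0.30, 0.40], [0.28, 0.28], [0.40, 0.30]],
)
print(round(r, 3))
```

Because every quantity is conditioned on ability level, the same machinery also yields the per-level accuracy figures discussed next.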
The approach provides an "accuracy" for the DAS at any given ability level within any age group. This leads to recommendations for using sub-tests "out of level" when they increase accuracy for children of extreme ability within an age group. The simulation technique was a powerful tool which allowed us to experiment with different item sets and different adaptive-testing rules and observe their effects on accuracy and efficiency.
Differential Ability Scales. M. Daniel. Rasch Measurement Transactions, 1990, 4:2, p. 108.