Differential Ability Scales

Fifteen years ago, Colin Elliott was an early user of Rasch methods in developing the British Ability Scales (BAS), an individually-administered cognitive battery for children. The BAS, described by Wright and Stone in the Ninth Mental Measurements Yearbook (1985), reports the standard error of the ability estimate at each raw score level and offers a choice of several overlapping item sets for each sub-test.

The U.S. revision of the BAS, the Differential Ability Scales (DAS), is published by The Psychological Corporation. The new test goes considerably further than the BAS in applying Rasch techniques. Colin Elliott and I (as Project Director) developed novel methods to address three traditional needs of individual ability testing:

* Testing must be kept brief because tests are administered by busy professionals.
* Tests must be accurate over a wide range since many children referred for testing are at the low or high end of the ability spectrum.
* Test accuracy must be communicated to typical users in terms of familiar reliability coefficients.

The DAS provides 20 sub-tests, including nonverbal reasoning, spatial ability, verbal ability, short-term memory, and speed of information processing. We used 4,500 children, 2 through 17 years old, to calibrate each DAS sub-test with the Rasch program MSTEPS. MSTEPS's ability to handle unadministered (missing) item data was essential, because it enabled one-step (concurrent) vertical equating of the overlapping item sets (Schulz 1988 RM 1:2). We compared this method with a pair-wise equating of within-level calibrations and found that the results were statistically equivalent.
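The reason missing responses are unproblematic in this framework is that each unadministered item simply contributes nothing to the likelihood. A minimal sketch (not the MSTEPS implementation; function names are illustrative) of the dichotomous Rasch model and a likelihood that skips unadministered items:

```python
import math

def rasch_prob(ability, difficulty):
    """Dichotomous Rasch model: probability of a correct response
    given person ability and item difficulty (both in logits)."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def log_likelihood(responses, abilities, difficulties):
    """Log-likelihood over a sparse response matrix. None marks an
    unadministered item, which is simply skipped -- the property that
    makes one-step concurrent equating of overlapping sets possible."""
    ll = 0.0
    for i, row in enumerate(responses):
        for j, x in enumerate(row):
            if x is None:
                continue  # unadministered: contributes nothing
            p = rasch_prob(abilities[i], difficulties[j])
            ll += math.log(p if x == 1 else 1.0 - p)
    return ll
```

Because the items shared between adjacent levels enter the same likelihood, all difficulties land on one common scale in a single estimation run.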

We also used Rasch methods for our bias analyses. Sub-samples stratified by race/ethnicity, sex, and region were calibrated independently and compared. Items with improbable between-sample variations were flagged for study. Results were gratifyingly interpretable; for example, the picture-vocabulary item "cactus" was biased against children from the Northeast.
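The between-sample comparison can be sketched as a standardized contrast between independently calibrated item difficulties; the threshold of 2.0 and the function name are illustrative assumptions, not the procedure the DAS team actually used.

```python
import math

def flag_dif(diff_a, se_a, diff_b, se_b, threshold=2.0):
    """Flag items whose difficulty differs improbably between two
    independently calibrated sub-samples. Returns, for each item,
    (index, standardized difference, flagged?)."""
    flags = []
    for k, (da, sa, db, sb) in enumerate(zip(diff_a, se_a, diff_b, se_b)):
        t = (da - db) / math.sqrt(sa ** 2 + sb ** 2)
        flags.append((k, round(t, 2), abs(t) > threshold))
    return flags
```

Items flagged by such a contrast are candidates for content review, as with the "cactus" item above, rather than automatic deletion.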

The most useful application of Rasch methods was to enable adaptive testing. Each DAS sub-test is divided into several overlapping item sets. The examiner administers an initial set based on the examinee's age and expected ability. The examiner decides whether to stop or continue depending on the examinee's performance on the initial set. Typically, when the examinee passes at least three items and also fails at least three items in a set, testing stops. Otherwise an additional set of easier or more difficult items is administered, and another stop/continue decision is made. At the end, the examiner can convert the total raw score on all items from all sets administered to a Rasch ability estimate.
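The stop/continue rule described above can be sketched as follows (the function name and branch labels are illustrative; the actual DAS rules vary by sub-test):

```python
def next_step(responses_in_set):
    """Decide what to do after an item set, per the typical rule:
    stop once the examinee has both >= 3 passes and >= 3 failures
    in the set; otherwise branch to an easier or harder set.
    responses_in_set is a list of 1 (pass) / 0 (fail)."""
    passes = sum(responses_in_set)
    fails = len(responses_in_set) - passes
    if passes >= 3 and fails >= 3:
        return "stop"
    # Too few failures: the set was too easy, so move up; too few
    # passes: too hard, so move down.
    return "easier set" if passes < 3 else "harder set"
```

Because all administered items are on one Rasch scale, the total raw score over whichever sets were given still converts to a single ability estimate.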

Most test users expect reliability coefficients as the indicators of precision. But since different examinees take different sets of items, the usual internal-consistency estimation methods are impossible to apply. In addition, test development involved using what was learned during standardization to improve item sequences and item-selection rules, so that reliabilities calculated from standardization data would not describe the accuracy of the final version.

Our solution was to simulate item selection using the item difficulties and person abilities estimated from the standardization data. The item-set selection rules were applied to each ability level in turn. The probability of success on each administered item was determined, and consequently the administration probability of each item-set. Since the standard error corresponding to each score on each item set is known, weighting the standard errors by their probability of occurrence yields an expected standard error for each ability level. Next the distribution of ability levels within each age group is obtained from the standardization data. The reliability coefficient for each age group is one minus the average expected error variance divided by the observed ability variance. These coefficients were compared with coefficient Alpha in two sets of (complete) real data, and agreed closely. Also, in six simulated data sets of 2,000 "cases", the "adaptive" reliability differed from the conventional reliability by no more than .01.
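The two key computations above, the probability-weighted expected error variance at an ability level and the resulting group reliability, can be sketched directly from the description (names are illustrative; the real computation used the known score-level standard errors within each item set):

```python
def expected_error_variance(set_probs, set_error_vars):
    """Expected error variance at one ability level: the error
    variance of each possible item set, weighted by the probability
    that the selection rules administer that set."""
    return sum(p * v for p, v in zip(set_probs, set_error_vars))

def adaptive_reliability(abilities, error_vars):
    """Reliability for an age group: one minus the average expected
    error variance divided by the observed ability variance."""
    n = len(abilities)
    mean = sum(abilities) / n
    obs_var = sum((a - mean) ** 2 for a in abilities) / n
    avg_err = sum(error_vars) / n
    return 1.0 - avg_err / obs_var
```

Running this over the standardization ability distribution for each age group yields the familiar-looking coefficients reported in the manual.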

The approach provides an "accuracy" for the DAS at any given ability level within any age group. This leads to recommendations for using sub-tests "out of level" when they increase accuracy for children of extreme ability within an age group. The simulation technique was a powerful tool which allowed us to experiment with different item sets and different adaptive-testing rules and observe their effects on accuracy and efficiency.



Differential Ability Scales, M Daniel … Rasch Measurement Transactions, 1990, 4:2 p. 108



