Differential Ability Scales

Fifteen years ago, Colin Elliott was an early user of Rasch methods in developing the British Ability Scales (BAS), an individually-administered cognitive battery for children. The BAS, described by Wright and Stone in the Ninth Mental Measurements Yearbook (1985), reports the standard error of the ability estimate at each raw score level, and offers a choice of several overlapping item sets for each sub-test.

The U.S. BAS revision, Differential Ability Scales (DAS), is published by The Psychological Corporation. This new test goes considerably further than the BAS in applying Rasch techniques. Colin Elliott and I (Project Director) developed novel methods to address three traditional needs of individual ability testing:

* Testing must be kept brief because tests are administered by busy professionals.
* Tests must be accurate over a wide range since many children referred for testing are at the low or high end of the ability spectrum.
* Test accuracy must be communicated to typical users in terms of familiar reliability coefficients.

The DAS provides 20 sub-tests, including nonverbal reasoning, spatial ability, verbal ability, short-term memory, and speed of information processing. We calibrated each DAS sub-test on 4,500 children, 2 through 17 years old, with the Rasch program MSTEPS. MSTEPS's ability to handle unadministered (missing) item data was essential, because it enabled one-step (concurrent) vertical equating of the overlapping item sets (Schulz 1988 RM 1:2). We compared this method with a pair-wise equating of within-level calibrations and found that the results were statistically equivalent.
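The structure that makes concurrent calibration possible can be sketched as a sparse response matrix: each child answers only the item set for their level, and the items shared between adjacent levels link everything onto one scale in a single calibration. This is an illustrative sketch only; the item names, levels, and overlap pattern below are invented, not the DAS design.

```python
# Illustrative sparse response matrix for one-step (concurrent) vertical
# equating. Unadministered items are recorded as None (missing), which a
# program like MSTEPS can skip over during calibration. Items i3 and i4
# appear in both levels and serve as the link.

ALL_ITEMS = ["i1", "i2", "i3", "i4", "i5", "i6"]

LEVEL_ITEMS = {
    "lower": ["i1", "i2", "i3", "i4"],
    "upper": ["i3", "i4", "i5", "i6"],  # i3, i4 overlap with "lower"
}

def response_row(level: str, answers: dict) -> list:
    """Build one child's row over all items, None where the item was
    not administered at that child's level."""
    given = set(LEVEL_ITEMS[level])
    return [answers.get(item) if item in given else None
            for item in ALL_ITEMS]
```

Because the linking items are answered by children from both levels, a single calibration run places every item on one common difficulty scale, instead of equating separate within-level calibrations afterwards.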

We also used Rasch methods for our bias analyses. Sub-samples stratified by race/ethnicity, sex, and region were calibrated independently and compared. Items with improbable between-sample variations were flagged for study. Results were gratifyingly interpretable; for example, the picture-vocabulary item "cactus" was biased against children from the Northeast.

The most useful application of Rasch methods was to enable adaptive testing. Each DAS sub-test is divided into several overlapping item sets. The examiner administers an initial set based on the examinee's age and expected ability. The examiner decides whether to stop or continue depending on the examinee's performance on the initial set. Typically, when the examinee passes at least three items and also fails at least three items in a set, testing stops. Otherwise an additional set of easier or more difficult items is administered, and another stop/continue decision is made. At the end, the examiner can convert the total raw score on all items from all sets administered to a Rasch ability estimate.
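The stop/continue decision described above can be sketched as a small decision rule. This is a simplified sketch of the typical rule stated in the text (stop once the examinee has both passed and failed at least three items in a set); the function name and branch labels are illustrative, not the published DAS procedure.

```python
# Sketch of the typical DAS stop/continue rule: testing stops when the
# examinee's performance is bracketed (at least 3 passes AND at least
# 3 failures) within the current item set; otherwise an easier or more
# difficult set is administered next.

def next_action(passes: int, fails: int) -> str:
    """Decide what to do after scoring the current item set."""
    if passes >= 3 and fails >= 3:
        return "stop"          # ability is bracketed within this set
    if fails < 3:
        return "harder_set"    # too few failures: items were too easy
    return "easier_set"        # too few passes: items were too hard
```

At the end, the total raw score over all administered sets is converted to a single Rasch ability estimate, so the estimates are comparable regardless of which sets a particular child happened to take.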

Most test users expect reliability coefficients as the indicators of precision. But since different examinees take different sets of items, the usual internal-consistency estimation methods cannot be applied. In addition, test development involved using what was learned during standardization to improve item sequences and item-selection rules, so reliabilities calculated from the standardization data would not describe the accuracy of the final version.

Our solution was to simulate item selection using the item difficulties and person abilities estimated from the standardization data. The item-set selection rules were applied to each ability level in turn. The probability of success on each administered item was computed, and from these probabilities, the probability that each item set would be administered. Since the standard error corresponding to each score on each item set is known, weighting the standard errors by their probabilities of occurrence yields an expected standard error for each ability level. Next, the distribution of ability levels within each age group is obtained from the standardization data. The reliability coefficient for each age group is one minus the ratio of the average expected error variance to the observed ability variance. These coefficients agreed closely with coefficient Alpha in two sets of (complete) real data. Also, in six simulated data sets of 2,000 "cases", the "adaptive" reliability differed from the conventional reliability by no more than .01.
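The two arithmetic steps above can be sketched in a few lines. The standard errors, administration probabilities, and variance below are invented numbers chosen only to make the computation concrete; this is not the DAS simulation itself.

```python
# Sketch of the reliability computation described above.
# Step 1: for one ability level, weight each item set's error variance
#         (SE squared) by the probability that the set is administered.
# Step 2: average the expected error variances over the ability
#         distribution of an age group, then convert to reliability.

def expected_error_variance(sets):
    """sets: list of (administration_probability, standard_error)
    pairs for one ability level."""
    return sum(p * se ** 2 for p, se in sets)

def reliability(error_variances, ability_weights, observed_variance):
    """Reliability = 1 - (average expected error variance) /
    (observed ability variance) for one age group."""
    avg_err = sum(w * v for w, v in zip(ability_weights, error_variances))
    return 1.0 - avg_err / observed_variance
```

For example, an ability level with a 70% chance of taking a set whose SE is 0.3 and a 30% chance of a set whose SE is 0.5 has an expected error variance of 0.7(0.09) + 0.3(0.25) = 0.138.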

The approach provides an "accuracy" for the DAS at any given ability level within any age group. This leads to recommendations for using sub-tests "out of level" when they increase accuracy for children of extreme ability within an age group. The simulation technique was a powerful tool which allowed us to experiment with different item sets and different adaptive-testing rules and observe their effects on accuracy and efficiency.



Differential Ability Scales, M Daniel … Rasch Measurement Transactions, 1990, 4:2 p. 108



