Local Achievement Testing

Building Local Capacity for High Quality Achievement Testing: Functional Level Tests

Level tests are short tests that increase in difficulty from one level to the next. This makes it possible to give each student a test at his or her current performance level and also to measure growth from year to year.

Level tests are curriculum-referenced to increase their usefulness for improving instruction. Northwest Evaluation Association (NWEA) develops a district "blueprint" based on local outcomes. The tests include a balance of goals of increasing difficulty and guarantee enough coverage to report reliable goal information for each student, so teachers can help students on an individual basis. The questions are selected from thousands of questions written by trained teacher teams to cover a wide range of goals in reading, general mathematics, and language usage. These questions were field-tested with tens of thousands of students to identify and eliminate items not of the highest quality and fairness.

Level tests are calibrated to the curriculum. Calibration makes it possible to calculate Rasch unit (RIT, from "Rasch unIT"; 1 RIT = 0.1 logits) measures that relate students' achievement to the curriculum. RIT measures are superior to percentile, NCE, and grade-equivalent scores because they are tied directly to the curriculum, rather than being based on the performance of specified groups of students. Since test information is anchored in the curriculum, it is possible to track student progress accurately from year to year, to change the tests to keep pace with the curriculum, and to maintain consistent norms. The calibration process required 10 years of research, development, and field testing in classrooms.
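
To make the rescaling concrete, here is a minimal sketch in Python of the RIT/logit conversion. The factor of 10 RITs per logit follows from the definition above; the anchor point (0 logits at RIT 200) is an illustrative assumption, not a value stated in the article.

```python
def logit_to_rit(theta_logits, anchor_rit=200.0, anchor_logit=0.0):
    """Convert a Rasch measure in logits to RIT units (1 RIT = 0.1 logits).

    The anchor (0 logits <-> RIT 200) is an illustrative assumption,
    not a value given in the article.
    """
    return anchor_rit + 10.0 * (theta_logits - anchor_logit)


def rit_to_logit(rit, anchor_rit=200.0, anchor_logit=0.0):
    """Inverse conversion: RIT units back to logits."""
    return anchor_logit + (rit - anchor_rit) / 10.0


# A student measured 1.5 logits above the anchor point:
print(logit_to_rit(1.5))    # 215.0 RITs
print(rit_to_logit(215.0))  # 1.5 logits
```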

Level tests are focused at specific achievement levels. This ensures that every student has the chance to succeed and maintain a positive attitude towards testing. Focusing the tests also makes it possible to use shorter tests and less class time and still maintain high reliability and validity.

All tests in a subject are built to the same basic blueprint. The difficulty range of the items in each test overlaps the difficulty ranges of the tests immediately above and below it in the series, so that any student may be assigned a test whose mid-range of item difficulty approaches his or her current achievement. The range of difficulty for each test within the series is relatively narrow to avoid frustration (caused by items too difficult for the student) or boredom (caused by items too easy). This means there are few "wasted" items, because a student can productively attempt all items in a given test.
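
The assignment rule just described can be sketched in a few lines: choose the level whose mid-range of item difficulty is nearest the student's current measure. The level boundaries below are hypothetical, invented only to illustrate the overlap; actual NWEA blueprints define their own ranges.

```python
# Hypothetical difficulty ranges (in RITs) for a series of overlapping level tests.
# The real blueprint ranges are not given in the article; these are illustrative.
LEVEL_RANGES = {
    1: (155, 180),
    2: (170, 195),
    3: (185, 210),
    4: (200, 225),
    5: (215, 240),
}


def assign_level(student_rit):
    """Return the test level whose item-difficulty midpoint is nearest
    the student's current RIT measure."""
    return min(
        LEVEL_RANGES,
        key=lambda lvl: abs(sum(LEVEL_RANGES[lvl]) / 2.0 - student_rit),
    )


print(assign_level(188))  # -> 2 (midpoint 182.5 is closest to a measure of 188)
```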

The appropriate test level for a student is determined by placement tests or guidelines. Once this level has been determined, the testing program automatically assigns an appropriate level to the student whenever answer sheets for later tests are pre-printed.

Level tests provide several measures to support instructional and placement decisions. (An example Parent report is shown.) The RIT measure relates directly to the curriculum scale in each subject. It is an equal-interval linear measure, like feet and inches, so RITs can be added together to calculate accurate class or school averages. RITs range from 160 to 240. [One RIT = 0.1 logits.] Students typically start at the 160-170 level in the third grade and progress to the 230-240 level by high school [growing 5-6 RITs per grade]. (Of course, many students start at a higher RIT level and many low-achieving students never reach the top level.) A RIT measure of 200 represents typical performance of students in the Fall of grade 5.

RIT measures make it possible to follow a student's educational growth from year to year. This is ideal for "non-graded" instructional programs.

The POP [population] index relates the RIT to the average for all students in the same grade. The POP index is the RIT measure standardized within grade and reported so that 50 corresponds to the grade average, 40 (one S.D. down) or below indicates that the student's performance is low for that grade, and 60 (one S.D. up) or above indicates that the student's performance is high for that grade. These indices can be converted to percentile scores.
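
A minimal sketch of how such a within-grade index might be computed, assuming the POP index is a mean-50, S.D.-10 standardized score as described above and that the percentile conversion treats the within-grade distribution as approximately normal. The grade mean and S.D. used here are illustrative assumptions, not NWEA norms.

```python
from statistics import NormalDist


def pop_index(student_rit, grade_mean_rit, grade_sd_rit):
    """Standardize a RIT measure within grade on a mean-50, S.D.-10 scale."""
    z = (student_rit - grade_mean_rit) / grade_sd_rit
    return 50.0 + 10.0 * z


def pop_to_percentile(pop):
    """Convert a POP index to a percentile, assuming an approximately
    normal within-grade distribution (an illustrative assumption)."""
    return 100.0 * NormalDist(mu=50.0, sigma=10.0).cdf(pop)


# Illustrative grade norms: Fall grade 5 mean of 200 RITs, S.D. of 12 RITs (assumed).
print(pop_index(212, 200, 12))         # 60.0 -> one S.D. above the grade average
print(round(pop_to_percentile(60.0)))  # about the 84th percentile
```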

Level test information is useful at student and program levels. Our purpose is to help students learn, to help them grow. RIT scales show us whether a student or group of students has grown. While we expect student RIT measures to increase across time, we expect student POP indices to remain the same. Since the POP index represents performance relative to the other students, a change in POP score indicates that the student is growing either faster or more slowly than other students in the same grade.
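
A small worked example makes the contrast concrete. The grade norms below are illustrative assumptions, not published NWEA values: a student who grows at the typical rate keeps the same POP index, while a student who grows more slowly sees the POP index fall even though the RIT measure still rises.

```python
# Illustrative within-grade norms (mean RIT, S.D. RIT); actual NWEA norms differ.
GRADE_NORMS = {5: (200, 12), 6: (206, 12)}


def pop(rit, grade):
    """POP index for a RIT measure, relative to the assumed norms above."""
    mean, sd = GRADE_NORMS[grade]
    return 50.0 + 10.0 * (rit - mean) / sd


# Student A grows 6 RITs, matching the typical grade-to-grade gain:
print(pop(206, 5), pop(212, 6))  # 55.0 -> 55.0 (POP unchanged: typical growth)

# Student B grows only 2 RITs over the same period:
print(pop(206, 5), pop(208, 6))  # 55.0 -> about 51.7 (POP falls: slower growth)
```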

Condensed (with permission) from NWEA Assessment Alternatives Newsletter, December 1992. Allan Olson, Executive Director; Susan Smoyer, Project Manager. (503) 624-1951, FAX (503) 624-9132





Local Achievement Testing, A Olson & S Smoyer … Rasch Measurement Transactions, 1993, 6:4 p. 258-9



