Local Achievement Testing

Building Local Capacity for High Quality Achievement Testing: Functional Level Tests

Level tests are short tests that increase in difficulty from one level to the next. This makes it possible to give each student a test at his or her current performance level and also to measure growth from year to year.

Level tests are curriculum-referenced to increase their usefulness for improving instruction. Northwest Evaluation Association (NWEA) develops a district "blueprint" based on local outcomes. The tests include a balance of goals of increasing difficulty and guarantee enough coverage to report reliable goal information for each student, so teachers can help students individually. The questions are selected from thousands written by trained teacher teams to cover a wide range of goals in reading, general mathematics, and language usage. These questions were field-tested with tens of thousands of students to identify and eliminate items that fell short of the highest standards of quality and fairness.

Level tests are calibrated to the curriculum. Calibration makes it possible to calculate Rasch unit (Rasch unIT, RIT; 1 RIT = 0.1 logit) measures that relate students' achievement to the curriculum. RIT measures are superior to percentile, NCE, and grade-equivalent scores because they are tied directly to the curriculum rather than being based on the performance of specified groups of students. Since test information is anchored in the curriculum, it is possible to track student progress accurately from year to year, to revise the tests to keep pace with the curriculum, and to maintain consistent norms. The calibration process required 10 years of research, development, and field testing in classrooms.
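As a minimal sketch of the scale relationship stated above (1 RIT = 0.1 logit), a Rasch logit measure can be rescaled to RITs by multiplying by 10 and adding an anchor constant. The anchor of 200 RITs at 0 logits used below is an illustrative assumption, not NWEA's operational anchoring.

```python
def logit_to_rit(theta, anchor=200.0):
    """Rescale a Rasch logit measure to RITs.

    1 RIT = 0.1 logit, so 1 logit = 10 RITs.  The anchor (the RIT
    assigned to a 0-logit measure) is a hypothetical choice here;
    the operational anchoring of the NWEA scale may differ.
    """
    return anchor + 10.0 * theta

def rit_to_logit(rit, anchor=200.0):
    """Inverse rescaling: RITs back to logits."""
    return (rit - anchor) / 10.0

print(logit_to_rit(1.5))    # 215.0
print(rit_to_logit(215.0))  # 1.5
```

The linear form preserves the equal-interval property of the logit scale, which is what justifies the arithmetic on RITs described later in the article.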

Level tests are focused at specific achievement levels. This ensures that every student has the chance to succeed and maintain a positive attitude towards testing. Focusing the tests also makes it possible to use shorter tests and less class time and still maintain high reliability and validity.

All tests in a subject are built to the same basic blueprint. The difficulty range of the items in each test overlaps the difficulty ranges of the tests immediately above and below it in the series, so that any student may be assigned a test whose mid-range of item difficulty approaches his or her current achievement. The range of difficulty within each test is relatively narrow, to avoid frustration (caused by items too difficult for the student) or boredom (items too easy). This means there are few "wasted" items, because a student can productively attempt every item in a given test.
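The assignment rule described above can be sketched as follows; the level numbers and mid-range item difficulties are hypothetical values chosen for illustration, not NWEA's actual series.

```python
# Hypothetical mid-range item difficulties (in RITs) for a test series.
LEVEL_MIDRANGE = {1: 170, 2: 180, 3: 190, 4: 200, 5: 210, 6: 220, 7: 230}

def assign_level(student_rit):
    """Assign the level whose mid-range difficulty is closest to the
    student's current achievement, so few items are 'wasted'."""
    return min(LEVEL_MIDRANGE,
               key=lambda lvl: abs(LEVEL_MIDRANGE[lvl] - student_rit))

print(assign_level(192))  # 3  (mid-range 190 is nearest)
```

Because adjacent levels' difficulty ranges overlap, a student near a boundary is well served by either neighboring level.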

The appropriate test level for a student is determined by placement tests or guidelines. Once this level has been established, the testing program automatically assigns an appropriate level to the student whenever answer sheets for later tests are pre-printed.

Level tests provide several measures to support instructional and placement decisions. (An example parent report is shown.) The RIT measure relates directly to the curriculum scale in each subject. It is an equal-interval linear measure, like feet and inches, so RITs can be added together to calculate accurate class or school averages. RITs range from 160 to 240. Students typically start at the 160-170 level in third grade and progress to the 230-240 level by high school, growing 5-6 RITs per grade. (Of course, many students start at a higher RIT level, and many low-achieving students never reach the top level.) A RIT measure of 200 represents typical performance of students in the Fall of grade 5.
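Because RITs are equal-interval, simple arithmetic on them is legitimate; a minimal sketch with hypothetical class measures:

```python
# Hypothetical RIT measures for one class; because the scale is
# equal-interval, the arithmetic mean is a meaningful summary
# (unlike the mean of percentile ranks or grade equivalents).
class_rits = [187, 203, 195, 210, 192]
class_mean = sum(class_rits) / len(class_rits)
print(class_mean)  # 197.4
```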

RIT measures make it possible to follow a student's educational growth from year to year. This is ideal for "non-graded" instructional programs.

The POP [population] index relates the RIT measure to the average for all students in the same grade. The POP index is the RIT measure standardized within grade and reported so that 50 corresponds to the grade average, 40 (one S.D. down) or below indicates that the student's performance is low for that grade, and 60 (one S.D. up) or above indicates that it is high for that grade. These indices can be converted to percentile scores.
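The within-grade standardization described above (grade average at 50, 10 index points per standard deviation) can be sketched as follows; the grade mean and S.D. in the example are hypothetical values.

```python
def pop_index(rit, grade_mean_rit, grade_sd_rit):
    """Standardize a RIT measure within grade: the grade average maps
    to 50 and each standard deviation moves the index 10 points,
    matching the 40/50/60 interpretation described in the text."""
    return 50.0 + 10.0 * (rit - grade_mean_rit) / grade_sd_rit

# Hypothetical grade statistics: mean 200 RITs, S.D. 10 RITs.
print(pop_index(210, 200, 10))  # 60.0 -> one S.D. above grade average
print(pop_index(195, 200, 10))  # 45.0
```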

Level test information is useful at both student and program levels. Our purpose is to help students learn, to help them grow. RIT scales show us whether a student, or group of students, has grown. While we expect student RIT measures to increase across time, we expect student POP indices to remain the same. Since the POP index represents performance relative to other students, a change in POP index indicates that the student is growing either faster or more slowly than other students in the same grade.

Condensed (with permission) from NWEA Assessment Alternatives Newsletter, December 1992. Allan Olson, Executive Director; Susan Smoyer, Project Manager. (503) 624-1951, FAX (503) 624-9132





Local Achievement Testing, A Olson & S Smoyer … Rasch Measurement Transactions, 1993, 6:4 p. 258-9









 

The URL of this page is www.rasch.org/rmt/rmt64i.htm
