Linking Tests with the Rasch Model

Equal-interval scaling provides the theoretical basis for linking tests, but linking is not a mechanical process. Careful thought is required. We need to consider five "C"s when undertaking the process of linking: Clean, Close, Consistent, Control and Constant.

Clean data are necessary. Item analysis quickly identifies improperly scored items as well as those with unsatisfactory distractors. Rescore or drop such items. Off-target items provoke off-dimension responses: guessing, carelessness, response sets. Remove off-target responses. For typical multiple-choice tests, begin by discarding all test records with raw scores less than 1/3 of the maximum possible raw score. This is a conservative practice, especially when the same test is given to a whole grade.
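As a rough illustration of this raw-score screen, here is a minimal sketch in Python; the data layout, the one-point-per-item scoring, and the cutoff fraction are assumptions for illustration, not part of the original procedure.

```python
# Minimal sketch (assumed layout): each row holds one person's scored 0/1 responses.
import pandas as pd

def screen_records(responses: pd.DataFrame, cutoff_fraction: float = 1/3) -> pd.DataFrame:
    """Drop test records whose raw score falls below the cutoff fraction of the maximum."""
    max_score = responses.shape[1]        # assumes one point per multiple-choice item
    raw_scores = responses.sum(axis=1)
    return responses[raw_scores >= cutoff_fraction * max_score]
```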

Close-together tests have the most consistent links. Close tests have item difficulty ranges of two logits or less and a mean item difficulty difference of about 3/10 of a logit. Greater separation between linked tests results in more items showing noticeable difficulty changes when plotted on a linking scattergram (see Figure). Linking tests that are too far apart is a common error. It has been shown experimentally that a one-logit difference in mean difficulty between two tests is too great for stable calibration of the two tests onto one scale.
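A quick numerical check of "closeness" might look like the sketch below; the thresholds simply restate the guidelines above, and the arrays of item calibrations (in logits) are assumed inputs.

```python
# Sketch of a closeness check between two calibrated tests (item difficulties in logits).
import numpy as np

def closeness_summary(difficulties_a, difficulties_b):
    """Difficulty ranges and mean-difficulty difference for two tests."""
    a = np.asarray(difficulties_a, float)
    b = np.asarray(difficulties_b, float)
    return {
        "range_a": a.max() - a.min(),                 # guideline: about 2 logits or less
        "range_b": b.max() - b.min(),
        "mean_difference": abs(a.mean() - b.mean()),  # guideline: roughly 0.3 logits
    }
```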

Consistent adherence by the items to the variable measured is necessary. Test items must lie along the same variable. Linking two tests that measure slightly different variables is a problem whose severity increases with the difference between the variables. Examine the item hierarchy. Does each item harmonize reasonably well with a common construct? Weed out discordant items that are inconsistent with the intended variable. It takes at least 40 items to get a reasonable confluence of item effects for measuring a variable.

Control over the linking items is necessary because the purpose of linking is to adjust all items in a test, not just those in the link. We need the linking items to be representative of the other items in each of the two tests being linked. We do not always have control over the selection of the linking set, but when we do, use items close to the center of difficulty of the two tests rather than items of extreme difficulty. The practice of using a set of linking items that are the hardest items of one test and the easiest items of the other is counter-productive, because off-target items are also the most unstable.
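When we can choose the linking set, a sketch like the following selects common items near the joint center of difficulty; the half-logit window and the dictionary of averaged calibrations are illustrative assumptions.

```python
# Sketch: pick linking items near the center of difficulty (calibrations in logits).
import numpy as np

def central_linking_items(common_difficulties: dict, window: float = 0.5) -> list:
    """Return common items whose calibration lies within `window` logits of the mean."""
    center = np.mean(list(common_difficulties.values()))
    return [item for item, d in common_difficulties.items() if abs(d - center) <= window]
```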

An item can be calibrated acceptably in each of two tests and yet act differently across them. Item interaction or learning specific to one group could be the cause. Drop these items from the linking set, and code them as different items in each test.
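One way to spot such items is to compare each common item's cross-test shift against the typical shift; the half-logit tolerance below is an assumed value, not a published criterion.

```python
# Sketch: flag linking items whose cross-test difficulty shift departs from the typical shift.
import numpy as np

def flag_unstable_links(diff_test1: dict, diff_test2: dict, tolerance: float = 0.5) -> list:
    """Common items whose shift differs from the median shift by more than `tolerance`
    logits; drop these from the link or code them as separate items."""
    items = sorted(set(diff_test1) & set(diff_test2))
    shifts = np.array([diff_test2[i] - diff_test1[i] for i in items])
    typical = np.median(shifts)
    return [i for i, s in zip(items, shifts) if abs(s - typical) > tolerance]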

The linking constant must be sturdy. Rasch practitioners have often followed the rather crude practice of defining a set of items a priori, then averaging the calibrations of that set as they appeared in the two different tests, in order to establish a calibration difference that becomes the linking value to equate the two tests. This is too chancy. A link can almost always be "cleaned up" by excluding one or more of the items from the set of linking items, and this is often necessary to make the link defensible.
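The averaging step itself is simple; the sketch below computes the linking constant from the retained common items, with the calibration dictionaries and the `exclude` set as assumptions for illustration.

```python
# Sketch: linking constant = mean calibration difference over the retained common items.
import numpy as np

def linking_constant(diff_test1: dict, diff_test2: dict, exclude: set = frozenset()) -> float:
    """Add this constant to test-1 measures to place them on the test-2 scale."""
    items = [i for i in diff_test1 if i in diff_test2 and i not in exclude]
    return float(np.mean([diff_test2[i] - diff_test1[i] for i in items]))
```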

Replicating the linking value is helpful in deciding which linking items to omit. A third "linking" test, containing different linking items, is linked independently with each of the two tests originally linked. Ideally, this third test is close to the two original tests in difficulty. The algebraic sum of the links between the third test and the two original tests is compared with the direct link. This comparison should confirm the accuracy of the direct link.
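A tiny worked sketch of that replication check follows; the item names, calibrations (in logits), and the tolerance are made-up illustration values.

```python
# Sketch: the indirect link A -> C -> B should roughly reproduce the direct link A -> B.
import numpy as np

def link(d1: dict, d2: dict) -> float:
    """Mean calibration difference (d2 - d1) over the items common to both tests."""
    common = sorted(set(d1) & set(d2))
    return float(np.mean([d2[i] - d1[i] for i in common]))

test_a = {"q1": -0.4, "q2": 0.1, "q3": 0.6}
test_b = {"q2": 0.4, "q3": 0.9, "q4": 1.2}
test_c = {"q1": -0.2, "q2": 0.3, "q4": 1.0}

direct = link(test_a, test_b)                               # A -> B directly
indirect = link(test_a, test_c) + link(test_c, test_b)      # A -> C -> B
print(f"direct {direct:+.2f} vs indirect {indirect:+.2f}")  # should agree within ~0.1 logit
```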

George Ingebo


Linking Tests with the Rasch Model. Ingebo G. … Rasch Measurement Transactions, 1997, 11:1 p. 549.
