Equal-interval scaling provides the theoretical basis for linking tests, but linking tests is not a mechanical process; it requires careful thought. Five "C"s must be considered when undertaking a link: Clean, Close, Consistent, Control and Constant.
Clean data is necessary. Item analysis quickly identifies improperly scored items as well as items with unsatisfactory distractors. Rescore or drop such items. Off-target items provoke off-dimension responses: guessing, carelessness, response sets. Remove off-target responses. For typical multiple-choice tests, begin by discarding all test records with raw scores less than one-third of the maximum possible raw score. This is a conservative practice, especially when the same test is given to an entire grade.
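The screening rule above can be sketched as follows. This is a minimal illustration with a hypothetical data layout and hypothetical raw scores, not a procedure from the article:

```python
# Screen out multiple-choice records whose raw score falls below
# one-third of the maximum possible raw score (hypothetical data).

def screen_records(raw_scores, max_score):
    """Keep only records scoring at least 1/3 of the maximum raw score."""
    cutoff = max_score / 3
    return [s for s in raw_scores if s >= cutoff]

scores = [5, 12, 19, 28, 33, 41]              # hypothetical raw scores
kept = screen_records(scores, max_score=45)   # cutoff = 15
# kept is [19, 28, 33, 41]
```

Records below the cutoff are dropped before calibration, so their off-dimension responses cannot distort the item difficulties used in the link.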
Close-together tests have the most consistent links. Close tests have item difficulty ranges of two logits or less and a mean item difficulty difference of about 3/10 of a logit. Greater separation between linked tests results in more items showing noticeable difficulty changes when plotted on a linking scattergram (see Figure). Linking tests that are too far apart is a common error. It has been shown experimentally that a one-logit difference in mean difficulty between two tests is too great for stable calibration of the two tests onto one scale.
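These closeness guidelines can be checked directly from the item calibrations. A minimal sketch, with hypothetical logit difficulties and with the thresholds (two-logit range, one-logit mean gap) taken from the guidelines above:

```python
# Check the "Close" criterion for two tests: each test's item difficulty
# range should be two logits or less, and the tests' mean difficulties
# should differ by well under one logit (hypothetical values).

def close_enough(difficulties_a, difficulties_b,
                 max_range=2.0, max_mean_gap=1.0):
    """Return True if both tests satisfy the closeness guidelines."""
    range_a = max(difficulties_a) - min(difficulties_a)
    range_b = max(difficulties_b) - min(difficulties_b)
    mean_gap = abs(sum(difficulties_a) / len(difficulties_a)
                   - sum(difficulties_b) / len(difficulties_b))
    return range_a <= max_range and range_b <= max_range and mean_gap < max_mean_gap

a = [-0.9, -0.3, 0.2, 0.7]   # range 1.6 logits
b = [-0.6, 0.0, 0.4, 1.0]    # range 1.6 logits, mean gap under 0.3 logits
```

Here `close_enough(a, b)` is True, while two tests whose mean difficulties differ by a full logit or more would fail the check.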
Consistent adherence by the items to the variable measured is necessary. Test items must lie along the same variable. Linking two tests that measure slightly different variables is a problem whose severity increases with the difference between the variables. Examine the item hierarchy. Does each item harmonize reasonably well with a common construct? Weed out discordant items that are inconsistent with the intended variable. It takes at least 40 items to get a reasonable confluence of item effects for measuring a variable.
Control over the linking items is necessary because the purpose of linking is to adjust all items in a test, not just those in the link. The linking items must be representative of the other items in each of the two tests being linked. We do not always have control over the selection of the linking set, but when we do, use items close to the center of difficulty of the two tests rather than items of extreme difficulty. The practice of using a linking set composed of the hardest items of one test and the easiest items of the other is counter-productive, because off-target items are also the most unstable.
An item can be calibrated acceptably in each of two tests and yet function differently in those tests. Item interaction or learning specific to one group could be the cause. Drop such items from the linking set, and code them as different items in each test.
The linking constant must be sturdy. Rasch practitioners have often followed the rather crude practice of defining a set of items a priori, then averaging the calibrations of that set as they appeared in the two different tests, in order to establish a calibration difference that becomes the linking value equating the two tests. This is too chancy. A link can almost always be "cleaned up" by excluding one or more items from the linking set, and this is often necessary to make the link defensible.
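The averaging practice, and one way of "cleaning up" a link, can be sketched as follows. This is an illustration, not the author's procedure: the item names, calibrations, and the 0.5-logit tolerance are all hypothetical, and the clean-up rule shown (iteratively dropping the item whose difficulty shift deviates most from the provisional constant) is one plausible choice among several:

```python
# Compute a linking constant as the mean calibration difference of the
# common items, then exclude unstable items until the link is defensible.
# All item names, logit values, and the tolerance are hypothetical.

def linking_constant(cal_a, cal_b):
    """Mean calibration difference (test A minus test B) over common items."""
    common = set(cal_a) & set(cal_b)
    diffs = [cal_a[i] - cal_b[i] for i in common]
    return sum(diffs) / len(diffs)

def clean_link(cal_a, cal_b, tolerance=0.5):
    """Drop the item whose shift deviates most from the provisional
    constant, repeating until all shifts lie within tolerance."""
    cal_a, cal_b = dict(cal_a), dict(cal_b)
    while True:
        const = linking_constant(cal_a, cal_b)
        common = set(cal_a) & set(cal_b)
        worst = max(common, key=lambda i: abs(cal_a[i] - cal_b[i] - const))
        if abs(cal_a[worst] - cal_b[worst] - const) <= tolerance or len(common) <= 2:
            return const, sorted(common)
        del cal_a[worst]   # exclude the unstable item from the link

# Hypothetical logit calibrations of five common items in two tests;
# item i5 shifts sharply between the tests and should be excluded.
a = {"i1": -1.0, "i2": -0.2, "i3": 0.1, "i4": 0.8, "i5": 2.0}
b = {"i1": -1.5, "i2": -0.7, "i3": -0.4, "i4": 0.3, "i5": 0.2}
const, kept_items = clean_link(a, b)
# const is 0.5 logits, computed from i1-i4 with i5 excluded
```

After i5 is dropped, the remaining four items all show the same 0.5-logit shift, so the linking constant is stable.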
Replicating the linking value is helpful in deciding which linking items to omit. A third "linking" test, containing different linking items, is linked independently with each of the two tests originally linked. Ideally this third test is close to the two original tests in difficulty. The algebraic sum of the links between the third test and the two original tests is compared with the direct link; agreement confirms the accuracy of the direct link.
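The replication check above amounts to a simple triangulation. A minimal sketch, with hypothetical link values and a hypothetical 0.3-logit tolerance for agreement:

```python
# Compare the direct A-B link with the indirect path through a third
# test C: link(A->C) + link(C->B) should agree with link(A->B).
# All link constants (in logits) and the tolerance are hypothetical.

def check_link(direct_ab, link_ac, link_cb, tolerance=0.3):
    """Return (confirmed, discrepancy) for the triangulation check."""
    indirect_ab = link_ac + link_cb
    discrepancy = abs(direct_ab - indirect_ab)
    return discrepancy <= tolerance, discrepancy

ok, gap = check_link(direct_ab=0.50, link_ac=0.20, link_cb=0.28)
# gap is about 0.02 logits, so the direct link is confirmed
```

A large discrepancy would point to unstable items in one of the linking sets, and the item-by-item shifts would then show which items to omit.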
Linking Tests with the Rasch Model. Ingebo G. Rasch Measurement Transactions, 1997, 11:1 p. 549.