1. If you are tracking the persons, then you need to measure them in the same frame of reference at Time 1 and Time 2. Usually one of the two times is more decisive. In healthcare it is Time 1, because that is when treatment decisions are made. In education it is usually Time 2, because that is when success/failure decisions are made. Analyze the data from the decisive time point to obtain the person measures. Anchor the item difficulties, then analyze the other time point to obtain a comparable set of person measures.
2. If you are tracking the differential impact of the intervention on the items, for instance which items are learned and which aren't, then rack the data.
3. If you are tracking the differential impact of the intervention on the persons, for instance which persons benefited and which didn't, then stack the data.
4. If you are tracking how the instrument changes its functioning (Differential Test Functioning) between time 1 and time 2, then perform separate analyses.
There is more at Racking and Stacking.
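Racking and stacking differ only in how the data file is arranged. A minimal sketch in Python of the two layouts, with invented patients, items, and ratings:

```python
# Toy data: two patients answering three items at Time 1 and Time 2.
# All names and ratings are invented for illustration.
time1 = {"P1": [2, 3, 1], "P2": [4, 2, 2]}
time2 = {"P1": [4, 4, 2], "P2": [5, 3, 4]}

# Stacking: each patient enters twice as a separate case (row);
# the three items (columns) are unchanged -> 4 rows x 3 item columns.
stacked = [["P1_t1"] + time1["P1"],
           ["P2_t1"] + time1["P2"],
           ["P1_t2"] + time2["P1"],
           ["P2_t2"] + time2["P2"]]

# Racking: each patient enters once, but every item enters twice,
# once per time point -> 2 rows x 6 item columns.
racked = [["P1"] + time1["P1"] + time2["P1"],
          ["P2"] + time1["P2"] + time2["P2"]]
```

Stacking doubles the cases and keeps the items; racking doubles the items and keeps the cases.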
Measures of the same persons are often obtained at two time-points, or under two conditions, with the intention of investigating changes.
Consider patients being assessed for level of independent functioning on entering (Admission) and leaving (Discharge) rehabilitation. Each patient has two sets of observations. A useful approach is to convert each set of observations for each patient into a measure, with all measures in the same frame of reference. This would be exactly the same as measuring all their heights at admission and discharge. Then a patient's change in level of functioning would simply be the difference between the admission and discharge measures.
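Once all measures are in one frame of reference, the change computation is a simple subtraction. A sketch with invented logit values:

```python
# Invented admission and discharge measures (in logits) for three patients,
# all in the same frame of reference.
admission = {"P1": -0.8, "P2": 0.2, "P3": 1.1}
discharge = {"P1": 0.4, "P2": 0.9, "P3": 1.0}

# Change in level of functioning = discharge measure - admission measure.
change = {p: discharge[p] - admission[p] for p in admission}
# P1 improved by 1.2 logits; P3 regressed slightly.
```

The subtraction is trivial; the substantive work is putting all the measures into the same frame of reference in the first place.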
This can be done by "stacking" the data. Each set of observations for each patient is appended to the data file as a further case, so that the data file contains twice as many cases as there are patients. Measures are constructed on all cases simultaneously.
Stacking or Anchoring?
The person samples at both time-points must be measured on the same "ruler" so that Time 1-to-Time 2 changes can be measured. The choice is:
1) Measure at Time 1, anchor at Time 2.
This emphasizes that decisions are made at Time 1 (Admission, Pre-Test).
2) Measure at Time 2, anchor at Time 1.
This emphasizes the goal toward which the intervention is aiming (Discharge, Post-Test).
3) Combined analysis of the Time 1 and Time 2 data.
This takes the more "overall" position that all data are equally important.
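Option 1) can be sketched, for example, in the control-file syntax of one widely used Rasch program, Winsteps (the file names here are invented):

```
; Run 1: analyze the Time 1 data and save the calibrations.
IFILE = items_time1.txt   ; write out the item measures
SFILE = steps_time1.txt   ; write out the rating-scale structure

; Run 2: analyze the Time 2 data, anchored at the Time 1 calibrations,
; so the Time 2 person measures are in the Time 1 frame of reference.
IAFILE = items_time1.txt  ; anchor the item difficulties
SAFILE = steps_time1.txt  ; anchor the rating-scale structure
```

Option 2) is the same procedure with the roles of the two time points exchanged.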
Patient Dependency? (Time Series, Repeated Measures)
A question often raised is "Doesn't putting the same patients in twice introduce dependency?" It probably does in a small way. But let's think about the situation:
1. The patients at Time 2 are not identical to the patients at Time 1: they have changed.
2. There are many sources of dependency within the data. The dependency among patients with similar diagnoses at Time 1 may be greater than the dependency between the same patients at Time 1 and Time 2.
What is the effect of dependency on Rasch measurement? The data are no longer as random in the way that the Rasch model predicts. Lack of randomness can increase misfit (if the dependency is generally in observations that are unexpected according to model predictions). It can lessen misfit (if the dependency is generally in observations close to model-predictions). Increased misfit reduces sample reliability and separation, making differences smaller in terms of logits. Decreased misfit increases sample reliability and separation, making differences larger in terms of logits.
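Reliability and separation move together here. For reference (this relationship is standard, not derived in the original), the separation index $G$ and the reliability $R$ are linked by:

$$G = \sqrt{\frac{R}{1-R}}, \qquad R = \frac{G^2}{1+G^2}$$

so anything that lowers reliability necessarily lowers separation, and vice versa.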
If there is a large correlation between individuals' off-dimensional (= Rasch residual) performances at Time 1 and Time 2, an approach is:
1. Select for each person, at random, one of the Time 1 and Time 2 data records.
2. Analyze these records with Rasch. Since there is only one record for each person, there is no Time 1 vs. Time 2 person dependency.
3. Save the item measures and rating-scale structures.
4. Analyze all the data (stacked: Time 1 + Time 2), anchored with the item measures and rating-scale structures from step 3.
The unbiased item difficulties are used to measure all the Time 1 and Time 2 performances in the same measurement frame-of-reference.
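The data-selection part of this procedure (steps 1 and 4) can be sketched in Python, with invented response records:

```python
import random

# Invented records: records[person] = (Time 1 ratings, Time 2 ratings).
records = {
    "P1": ([2, 3, 1], [4, 4, 2]),
    "P2": ([4, 2, 2], [5, 3, 4]),
    "P3": ([1, 1, 0], [3, 2, 1]),
}

rng = random.Random(17)  # fixed seed, so the selection is reproducible

# Step 1: pick one time point per person at random, so the calibration
# sample contains no Time 1 vs. Time 2 person dependency.
calibration = {p: rng.choice(pair) for p, pair in records.items()}

# Step 4 then analyzes the full stacked file (both time points for every
# person), anchored at the item and rating-scale calibrations estimated
# from the `calibration` sample.
stacked = [r for pair in records.values() for r in pair]
```

The Rasch estimation itself (steps 2-3) is done by the Rasch software; this sketch only prepares the two data sets.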
See also Repeated Measure Designs (Time Series) and Rasch.
In practice, dependency between Time 1 and Time 2 is difficult to identify for individuals, except those grossly misfitting at both time points. Dependencies across time points often are present, but they cluster across patients within items. This brings us to ...
With current Rasch software, the same information about relative changes in item difficulty can be obtained by doing an item DIF (differential item functioning) analysis, with the Time 1 and Time 2 person samples as the two classification groups.
In the physical sciences, great effort is exerted to prevent the measuring device from changing. But often, in the social sciences this is not possible. The construct hierarchy changes between Time 1 and Time 2. Physicists would despair, but social scientists can rejoice because this provides a special insight into what has changed.
Between Time 1 and Time 2, some intervention has occurred or some other change has happened. It is unlikely to affect the responses to all items equally. Some items will relate directly to the therapy, teaching, or intervention; others will not.
Imagine that the patients have not changed, but the effect of the intervention is to change the items. "I'm still the same person, but now climbing stairs is easier!" Then each person is entered once into the data, but each item twice: once for Time 1 and once for Time 2. This is "racking" the data.
In this racked analysis, the item difficulties are of greater interest than the person measures. Items with the biggest change in measure are those on which the intervention has had the greatest effect.
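Ranking the items by their change in difficulty is straightforward once the racked analysis has produced two estimates per item. A sketch with invented item names and logit values:

```python
# Invented item difficulties (logits) from a racked analysis:
# each item has one estimate per time point.
item_measures = {
    "climb_stairs": (1.20, 0.10),   # (Time 1, Time 2)
    "dressing":     (0.50, 0.00),
    "memory":       (-0.30, -0.35),
}

# Impact of the intervention on each item = drop in difficulty
# from Time 1 to Time 2 (positive = the item became easier).
impact = {item: t1 - t2 for item, (t1, t2) in item_measures.items()}

# Items ranked by impact; the top items are those on which the
# intervention has had its greatest effect.
ranked = sorted(impact, key=impact.get, reverse=True)
```

Here "climbing stairs" heads the list, echoing the patient's "now climbing stairs is easier!"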
A Practical Example
The Functional Independence Measure (FIM) was administered to 500 stroke patients at admission to, and discharge from, rehabilitation. The stacked data, 1000 FIM administrations, were analyzed. The Figure shows each person's ability at discharge plotted against that person's ability at admission. The patients improved by 0.75 logits on average. Some, toward the top left, have gained more than 2 logits (above the dashed line). A few have regressed (below the solid identity line).
In the racked data, the 500 patients were administered the 18-item FIM twice, so there are 36 items. The plot shows how the change in patient status is reflected in the measures of the items at the two time points. Rehabilitation has had its biggest impact, 0.75 logits, on the motor items, A-M. There has been less impact, 0.35 logits, on the mental items, N-R.
Stacking the data, we see who has changed. Racking the data, we see what has changed.
Benjamin D. Wright
Rack and Stack: Time 1 vs. Time 2 or Pre-Test vs. Post-Test. B.D. Wright. Rasch Measurement Transactions, 2003, 17:1, 905-906
The URL of this page is www.rasch.org/rmt/rmt171a.htm