Comparing Measures: Scatterplots

Comparison of measures is frequently required. Item calibrations are compared when tests are equated to investigate stability. Person measures are compared to investigate change. Here are some techniques of comparison.

A necessary step before final conclusions can be drawn is to establish and maintain the frame of reference. Suppose the pre-test is Form E and the post-test is Form H. Simply subtracting a mean ability measure at pre-test from a mean ability at post-test is misleading as a measure of change for many reasons. The most obvious is that pre-test measures are relative to Form E and post-test measures are relative to Form H. The two Forms must be equated into a single frame of reference, i.e., with a shared origin and scale. Only then can valid comparisons between pre- and post-test measures be made. This equating may be an iterative procedure in which the following techniques can help.

A. Numerical comparisons.

A.1 Direct comparisons:
If Bn is the measure of subject n, and Bm the measure of subject m, the obvious comparison is Bn-Bm. If SEn and SEm are the standard errors of Bn and Bm, then the SE of Bn-Bm = sqrt(SEn^2+SEm^2). A two-tailed test of the null hypothesis "Bn-Bm=0" is rejected at the .05 level when
|Bn-Bm| >= 1.96*sqrt(SEn^2+SEm^2)
        >= 2.8*SEn when SEn = SEm,
and at the .01 level when
|Bn-Bm| >= 2.58*sqrt(SEn^2+SEm^2)
        >= 3.7*SEn when SEn = SEm.
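The A.1 comparison can be sketched in Python (the function name and example values are illustrative, not from the article):

```python
import math

def compare_measures(bn, se_n, bm, se_m, z=1.96):
    """Compare two measures Bn and Bm with standard errors SEn and SEm.

    Returns (difference, joint SE, significant?).
    Use z = 1.96 for the .05 level, z = 2.58 for the .01 level.
    """
    diff = bn - bm
    joint_se = math.sqrt(se_n**2 + se_m**2)  # SE of Bn-Bm
    return diff, joint_se, abs(diff) >= z * joint_se

# Example: measures 1.5 and 0.3 logits, each with SE 0.4
diff, joint_se, significant = compare_measures(1.5, 0.4, 0.3, 0.4)
```

Here the difference is 1.2 logits against a joint SE of about 0.57, so the null hypothesis of no difference is rejected at the .05 level.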

A.2 Normative comparisons:
It is often of interest to discover whether, as a group changes, individuals are changing faster or slower than their group. Suppose B1 is the measure of a subject at time 1, when the mean of the N measures is G1, and B2 is the measure of the same subject at time 2, when the group mean is G2. Then the gain of the subject relative to the group is (B2-G2)-(B1-G1). Each bracketed term has an error variance of the form
SEBG1^2 = SEB1^2 + SDG1^2/(N-1)
where SDG1 is the standard deviation of the N measures at time 1.
The SE of the gain = sqrt(SEBG1^2+SEBG2^2).
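The A.2 computation can be sketched as follows (function and argument names are illustrative):

```python
import math

def relative_gain(b1, se_b1, b2, se_b2, g1, sd_g1, g2, sd_g2, n):
    """Gain of a subject relative to the group, with its standard error.

    b1, b2: subject measures at times 1 and 2, with SEs se_b1, se_b2.
    g1, g2: group means of the n measures; sd_g1, sd_g2: group SDs.
    """
    gain = (b2 - g2) - (b1 - g1)
    var1 = se_b1**2 + sd_g1**2 / (n - 1)  # error variance of (B1 - G1)
    var2 = se_b2**2 + sd_g2**2 / (n - 1)  # error variance of (B2 - G2)
    return gain, math.sqrt(var1 + var2)
```

For example, a subject moving from 0.0 to 1.0 logits while the group mean moves from 0.2 to 0.9 has gained 0.3 logits relative to the group.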

A.3 Comparisons requiring rescaling:
Tests X and Y contain some equivalent items, but, due to different candidate behavior, rating scale specifications or analytical decisions, the logit units of the measurement scales estimated for the two tests have different lengths.

Suppose there are C items common to both tests. Their mean and variance in Test X are MX and VX, and in Test Y, MY and VY. Then the rescaling factor to place measures on Test Y into the scale of Test X is
SCALEYX = sqrt(VX/VY)
so that a measure BY on Test Y becomes
B'Y = MX + SCALEYX*(BY-MY)
with standard error SE'Y = SCALEYX*SEY.
Comparisons can now be made between BX and B'Y in the metric of Test X.

A.4 Evaluation of group stability:
Suppose there are C common elements in tests X and Y. Whether to attribute the inevitable differences in pairs of measures solely to measurement error is a statistical question. Let di = BXi - BYi be the difference between the two measures of common element i (after any rescaling), with error variance SEi^2 = SEXi^2 + SEYi^2. A test of the null hypothesis that the differences are attributable to measurement error is the homogeneity chi-square with C-1 degrees of freedom:
chi-square = sum(di^2/SEi^2) - (sum(di/SEi^2))^2 / sum(1/SEi^2)
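A minimal sketch of the homogeneity chi-square (assuming the standard information-weighted form, with pairwise differences di = BXi - BYi and error variances SEXi^2 + SEYi^2):

```python
def homogeneity_chisq(bx, se_x, by, se_y):
    """Homogeneity chi-square for C pairs of measures of common elements.

    Information-weighted form (an assumption here, the standard choice for
    such tests); C-1 degrees of freedom under the null hypothesis that the
    pairwise differences reflect only measurement error.
    """
    d = [x - y for x, y in zip(bx, by)]
    w = [1.0 / (sx**2 + sy**2) for sx, sy in zip(se_x, se_y)]  # 1/SEi^2
    sum_wd = sum(wi * di for wi, di in zip(w, d))
    return sum(wi * di * di for wi, di in zip(w, d)) - sum_wd**2 / sum(w)
```

When every pairwise difference is identical the statistic is zero, as it should be: a uniform shift is absorbed by the equating constant, not flagged as misfit.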
B. Graphical comparisons.
B.1 Graphing without common scaling:
The numerical calculations can be summarized in a scatterplot of measures and confidence bands. Each element has two measures, BX and BY, and two standard errors, SEX and SEY.

Plotting confidence intervals is a two-stage process. For conventional 95% two-tailed confidence bands, the pairs of points are located at {UX, UY} for the upper confidence band and {LX, LY} for the lower band, where, for each element, Avge = (BX+BY)/2 and the joint standard error is SEJ = sqrt(SEX^2+SEY^2):

       Avge        X                       Y
Upper: (BX+BY)/2   UX = Avge - 1.96*SEJ/2  UY = Avge + 1.96*SEJ/2
Lower: (BX+BY)/2   LX = Avge + 1.96*SEJ/2  LY = Avge - 1.96*SEJ/2

Drawing confidence bands

When you plot the {UX, UY} points, they will form a curved cloud (Figure 1). The points are exact, but provide too much detail. To expedite conceptualization, draw a smooth curve through these points with a thick felt-tip pen or computer drawing tool. Then erase the points. Repeat this for the {LX, LY} points. This draws the 95% confidence bands. Now plot your {BX, BY} data points (see Figure 2 in the printed text). The identity line goes through {MEANX+AVGEX, MEANY+AVGEY}.
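The band points for each element can be computed as below (a sketch, assuming the bands lie half the joint 95% interval on either side of the identity line):

```python
import math

def band_points(bx, se_x, by, se_y, z=1.96):
    """Confidence-band points for one element, about the identity line.

    Returns ((UX, UY), (LX, LY)): the upper- and lower-band points
    offset from the element's average measure by half the joint
    confidence interval.
    """
    avge = (bx + by) / 2.0
    half = z * math.sqrt(se_x**2 + se_y**2) / 2.0
    return (avge - half, avge + half), (avge + half, avge - half)
```

Feeding every element through this function and smoothing the two resulting point clouds reproduces the felt-tip-pen procedure described above.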

Identifying statistical outliers

Data points falling outside the confidence bands are the statistical outliers: the differences between their two measures are larger than can reasonably be attributed to measurement error alone.

B.2 Graphing with local scaling:
In Figure 2, the best fit line goes through points {MEANX+AVGEX-SDX, MEANY+AVGEY-SDY} and {MEANX+AVGEX+SDX, MEANY+AVGEY+SDY}. If this forms a noticeable angle with an identity line, then plot the confidence bands with the X and Y axes separately scaled. The locally scaled confidence bands are plotted in the same way as in B.1, but using these values:

(The table of locally scaled plotting values, with columns Mean, X and Y, is given in the printed text.)

The best fit line now falls evenly between the bands.
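The best-fit line of B.2 can be sketched as follows (a sketch only: the AVGEX and AVGEY plotting offsets from the printed table are assumed to be zero here, and the line is taken through the points one SD either side of the means):

```python
from statistics import mean, stdev

def best_fit_endpoints(bx, by):
    """Endpoints of the B.2 best-fit line, one SD either side of the
    mean point on each axis. AVGEX/AVGEY offsets assumed zero.

    A noticeable angle between this line and the identity line signals
    that locally scaled confidence bands should be plotted.
    """
    mx, my = mean(bx), mean(by)
    sx, sy = stdev(bx), stdev(by)
    return (mx - sx, my - sy), (mx + sx, my + sy)
```

When the SDs of the two sets of measures differ (here the line's slope is SDY/SDX), the best-fit line departs from the identity line and local scaling is warranted.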

Comparing measures: scatterplots. Luppescu S. … Rasch Measurement Transactions, 1995, 9:1 p.410
