Dichotomous Rasch Model derived from Specific Objectivity

Specific Objectivity [1] is the requirement that the measures produced by a measurement model be sample-free for the agents (test items) and test-free for the objects (people). Sample-free measurement means "item difficulty estimates are as independent as is statistically possible of whichever persons, and whatever distribution of person abilities, happen to be included in the sample." Test-free measurement means "person ability estimates are as independent as is statistically possible of whichever items, and whatever distribution of item difficulties, happen to be included in the test." In particular, the familiar statistical assumption of a normal (or any known) distribution of model parameters is not required.

This also implies that Rasch point-estimates are invariant when the data fit the Rasch model. "The argument for invariance may be stated rather loosely as follows. Irrelevancies in the data should not make a fundamental difference in the results obtained from the analysis of the data." (International Encyclopedia of Statistics, art. Estimation: point estimation). For Rasch measurement, irrelevancies include the person and item distributions.

Comparison of performances.

Essential to the concept of measurement is that of comparison. A measurement is the quantification of a specifically defined comparison. Consequently, it is necessary that we define the nature of the comparison for which we intend to obtain measures.

In examining the performance of a person on a test, we can expect that the longer the test, the greater the numerical difference between the count of right answers and the count of wrong answers. But for a test consisting of homogeneous items, we do expect the ratio of the count of right answers to the count of wrong answers to remain approximately constant, i.e., if the test were doubled in length, the ratio between successes and failures would remain about the same. Consequently, a ratio is the type of comparison for which we desire to construct measures.
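As an illustration only (not part of the original argument), the following Python sketch simulates one person answering homogeneous items, each with an assumed success probability of 0.7 (an arbitrary value). Doubling the number of items roughly doubles both counts, but the right/wrong ratio stays near its expected value of 0.7/0.3:

  # Sketch: the right/wrong ratio is roughly invariant to test length,
  # assuming homogeneous items with an arbitrary success probability of 0.7.
  import random

  random.seed(1)
  p_success = 0.7                    # assumed constant probability per item

  for n_items in (50, 100, 200):     # doubling the test length each time
      right = sum(random.random() < p_success for _ in range(n_items))
      wrong = n_items - right
      print(n_items, right, wrong, round(right / wrong, 2))
  # The printed ratio stays near 0.7/0.3 = 2.33 as the test length doubles.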

Objective measures from paired observations.

This derivation is based on the hypothetical administration of numerous replications of the same item to two people. After such administrations, we have the following contingency table:

                    Person n
                 right    wrong
                --------------
      right     RnRm     WnRm
 Person m
      wrong     RnWm     WnWm

where RnRm is the count of times when both persons n and m answer correctly etc.

In those instances when both n and m answer correctly, or incorrectly, we detect no difference in their performance. Consequently the only informative contrast of their performance is the comparison between RnWm and WnRm. The ratio of these terms is the comparison we want.

Let us then consider RnWm/WnRm. This ratio is a comparison of the frequencies of success of the two people on the item in question. In the limit, each frequency becomes the corresponding probability multiplied by the number of replications. Since the number of replications is the same for both cells, it cancels out. Thus:

  RnWm      (Pni)*(1-Pmi)
  ----  =   -------------
  WnRm      (1-Pni)*(Pmi)

where we have used i to indicate the particular item being replicated, and Pni to indicate the probability of success of person n on this item i. Thus 1-Pni is the corresponding probability of failure.
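As a hedged numerical sketch of this thought experiment (the probabilities 0.8 and 0.5 are arbitrary, not from the original), the following Python code simulates many replications of the same item administered to persons n and m, and shows that the observed ratio RnWm/WnRm approaches Pni*(1-Pmi) / ((1-Pni)*Pmi):

  # Sketch: the cross-ratio RnWm/WnRm converges to Pni(1-Pmi) / ((1-Pni)Pmi).
  import random

  random.seed(2)
  P_ni, P_mi = 0.8, 0.5              # assumed success probabilities on item i

  RnWm = WnRm = 0
  for _ in range(100_000):           # hypothetical replications of item i
      n_right = random.random() < P_ni
      m_right = random.random() < P_mi
      if n_right and not m_right:
          RnWm += 1
      elif m_right and not n_right:
          WnRm += 1

  print(RnWm / WnRm)                                # observed cross-ratio
  print(P_ni * (1 - P_mi) / ((1 - P_ni) * P_mi))    # model value: 4.0

The replications in which both persons answer alike are simply never counted, mirroring the argument that they carry no information about the difference between the two persons.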

Use of objectivity.

What happens when we require this comparison to maintain objectivity? Then the comparison of the performance of persons n and m must not depend on which particular item we use to compare them, i.e., the parameters must be "separate". If we choose to use item j we must obtain the same result. Expressing this algebraically:

  (Pni)*(1-Pmi)    (Pnj)*(1-Pmj)
  -------------  = -------------  for all i,j
  (1-Pni)*(Pmi)    (1-Pnj)*(Pmj)

Rewriting:

   (Pni)              (Pnj)*(1-Pmj)     (Pmi)
   -------        =   -------------  *  ---------
   (1-Pni)            (1-Pnj)*(Pmj)     (1-Pmi)
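A quick numeric check may help here (the values are hypothetical, not in the original): if every probability's odds factor into a person part times an item part, then the equality required above holds for any pair of items, and the rearranged expression just shown reproduces person n's odds on item i:

  # Sketch: probabilities built from factoring odds (hypothetical values)
  # satisfy the objectivity equality and its rearranged form.
  def p(o):                          # convert odds to a probability
      return o / (1 + o)

  f_n, f_m = 3.0, 1.5                # assumed person odds factors
  g_i, g_j = 0.8, 2.0                # assumed item odds factors

  P_ni, P_mi = p(f_n * g_i), p(f_m * g_i)
  P_nj, P_mj = p(f_n * g_j), p(f_m * g_j)

  lhs = P_ni * (1 - P_mi) / ((1 - P_ni) * P_mi)
  rhs = P_nj * (1 - P_mj) / ((1 - P_nj) * P_mj)
  print(round(lhs, 6), round(rhs, 6))        # both 2.0 = f_n / f_m

  print(round(P_ni / (1 - P_ni), 6))         # person n's odds on item i: 2.4
  print(round(rhs * P_mi / (1 - P_mi), 6))   # rearranged form: also 2.4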

However, again by objectivity, the interaction of person n and item i must not depend on which person m and which item j are used for comparison in the measuring process. Consequently, we can choose the (ability) measure of person 0 to define the frame of reference for the persons, and the calibration (measure) of item 0 to define the frame of reference for the items, thereby providing fixed reference points on the person and item measurement scales. Thus

(Pni)            (Pn0)*(1-P00)      (P0i)
-------      =   -------------  *  -------
(1-Pni)          (1-Pn0)*(P00)     (1-P0i)

then

(Pni)             (Pn0)     (P0i)      (1-P00)
-------      =   ------- * -------  *  -------
(1-Pni)          (1-Pn0)   (1-P0i)      (P00)

             =     f(n)  *   g(i)   * constant

This is a multiplicative model, in which we can bring the frames of reference for persons and items into conjunction by choosing the reference item and person such that P00 = 0.5 which makes the constant term 1.
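To make the choice of reference points concrete, here is a hedged sketch with hypothetical numbers: person 0 and item 0 are taken as the references, their odds factors are fixed at 1 so that P00 = 0.5, and every cell's odds then reproduce f(n)*g(i) with the constant equal to 1:

  # Sketch: with P00 = 0.5 the constant is 1, so odds(n,i) = f(n) * g(i).
  import itertools

  def p(o):                          # odds -> probability
      return o / (1 + o)

  def odds(prob):                    # probability -> odds
      return prob / (1 - prob)

  # Hypothetical odds factors; person 0 and item 0 are the references,
  # both fixed at 1 so that P00 = p(1 * 1) = 0.5.
  f = {0: 1.0, 1: 2.5, 2: 0.6}       # persons
  g = {0: 1.0, 1: 0.4, 2: 3.0}       # items

  P = {(n, i): p(f[n] * g[i]) for n, i in itertools.product(f, g)}
  assert P[0, 0] == 0.5

  for n, i in P:
      lhs = odds(P[n, i])
      rhs = odds(P[n, 0]) * odds(P[0, i]) * (1 - P[0, 0]) / P[0, 0]
      assert abs(lhs - rhs) < 1e-12  # multiplicative model holds in every cell
  print("odds(n,i) = f(n) * g(i) for every person-item pair")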

The measurement scale now defined has the properties of a ratio scale, in which zero corresponds both to the measure of a person who has no chance of success on any item with a non-zero calibration, and to the calibration of an item on which no person with a non-zero measure has any chance of success.

Within the frame of reference now specified, (Pn0)/(1-Pn0) has a value between 0 and infinity depending only on person n, and (P0i)/(1-P0i) has a value between 0 and infinity depending only on item i.

The ratio scale defined by Pni/(1-Pni) can be transformed into an equal-interval linear scale with a logarithmic function, so that

  loge(Pn0/(1-Pn0)) = Bn
  loge(P0i/(1-P0i)) = -Di

and so

  loge(Pni/(1-Pni)) = Bn - Di

or, equivalently,

  Pni = exp(Bn - Di) / (1 + exp(Bn - Di))

where the item calibration Di depends only on the attributes of item i, which we can now call its difficulty, and the person measure Bn depends only on the attributes of person n, which we can call his ability. The choice of P00 = 0.5 makes the constant 1, whose logarithm is 0.

This model, relating the ability of person n and the difficulty of item i to the performance of person n on item i, is the objective model of measurement known as the Rasch model.
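A minimal Python sketch of the resulting model (the parameter values are arbitrary illustrations, not estimates from any data): the function below computes Pni from Bn and Di, and the difference in log-odds between two persons on the same item recovers Bn - Bm no matter which item is used, which is exactly the specific objectivity the derivation demanded:

  # Sketch: the dichotomous Rasch model and its item-free person comparison.
  import math

  def rasch_probability(b, d):
      """Probability of success for ability b and difficulty d (in logits)."""
      return math.exp(b - d) / (1 + math.exp(b - d))

  def logit(prob):
      return math.log(prob / (1 - prob))

  B_n, B_m = 1.2, -0.3               # assumed person abilities (logits)
  for D_i in (-1.0, 0.0, 2.5):       # any item difficulty gives the same contrast
      contrast = logit(rasch_probability(B_n, D_i)) - logit(rasch_probability(B_m, D_i))
      print(D_i, round(contrast, 6)) # always 1.5 = B_n - B_m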


1 "On Specific Objectivity: An Attempt at Formalizing the Request for Generality and Validity of Scientific Statements." Georg Rasch, 1977 Memo 18.

5. However, if this globality within A holds for any two objects O1 and O2 in O, we shall characterize pairwise comparisons of objects as defined by (V:4) as specifically objective within the frame of reference F.

The term "objectivity" refers to the fact that the result of any comparison of two objects within O is independent of the choice of the agent A within A and also of the other elements in the collection of objects O; in other words: independent of everything else within the frame of reference than the two objects which are to be compared and their observed reactions.

And the qualification "specific" is added because the objectivity of these comparisons is restricted to the frame of reference F defined in (V:1). This is therefore denoted as the frame of reference for the specifically objective comparisons in question.

This also makes clear that the specific objectivity is not an absolute concept, it is related to the specified frame of reference.

It also deserves mention that this definition concerns only comparisons of objects, but within the same frame of reference it can be applied to comparisons of agents as well.


Dichotomous Rasch model derived from specific objectivity. Wright BD, Linacre JM. … Rasch Measurement Transactions, 1987, 1:1 p.5-6


  1. The Rasch Model derived from E. L. Thorndike's 1904 Criteria, Thorndike, E.L.; Linacre, J.M. … 2000, 14:3 p.763
  2. Rasch model derived from consistent stochastic Guttman ordering, Roskam EE, Jansen PGW. … 6:3 p.232
  3. Rasch model derived from Counts of Right and Wrong Answers, Wright BD. … 6:2 p.219
  4. Rasch model derived from counting right answers: raw Scores as sufficient statistics, Wright BD. … 1989, 3:2 p.62
  5. Rasch model derived from Thurstone's scaling requirements, Wright B.D. … 1988, 2:1 p. 13-4.
  6. Rasch model derived from Campbell concatenation: additivity, interval scaling, Wright B.D. … 1988, 2:1 p. 16.
  7. Dichotomous Rasch model derived from specific objectivity, Wright BD, Linacre JM. … 1987, 1:1 p.5-6


