Specific Objectivity^{1} is the requirement that the measures produced by a measurement model be sample-free for the agents (test items) and test-free for the objects (people). Sample-free measurement means "item difficulty estimates are as independent as is statistically possible of whichever persons, and whatever distribution of person abilities, happen to be included in the sample." Test-free measurement means "person ability estimates are as independent as is statistically possible of whichever items, and whatever distribution of item difficulties, happen to be included in the test." In particular, the familiar statistical assumption of a normal (or any known) distribution of model parameters is not required.
This also implies that Rasch point-estimates are invariant when the data fit the Rasch model. "The argument for invariance may be stated rather loosely as follows. Irrelevancies in the data should not make a fundamental difference in the results obtained from the analysis of the data." (International Encyclopedia of Statistics, art. Estimation: point estimation). For Rasch measurement, irrelevancies include the person and item distributions.
Comparison of performances.
Essential to the concept of measurement is that of comparison. A measurement is the quantification of a specifically defined comparison. Consequently, it is necessary that we define the nature of the comparison for which we intend to obtain measures.
In examining the performance of a person on a test, we can expect that the greater the length of the test, the greater will be the numerical difference between the count of right answers and the count of wrong answers. But for a test consisting of homogeneous items, we do expect the ratio of the count of right answers to the count of wrong answers to remain approximately constant, i.e., if the test were doubled in length, the ratio between successes and failures would remain about the same. Consequently, a ratio is the type of comparison for which we desire to construct measures.
Objective measures from paired observations.
This derivation is based on the hypothetical administration of numerous replications of the same item to two people. After such administrations, we have the following contingency table:
                    Person n
                 right    wrong
Person m  right  RnRm     WnRm
          wrong  RnWm     WnWm
where RnRm is the count of times when both persons n and m answer correctly etc.
In those instances when both n and m answer correctly, or incorrectly, we detect no difference in their performance. Consequently the only informative contrast of their performance is the comparison between RnWm and WnRm. The ratio of these terms is the comparison we want.
Let us then consider RnWm/WnRm. This ratio compares the frequencies of success of the two people on the item in question. In the limit, each frequency approaches the corresponding cell probability multiplied by the number of replications. Since the number of replications is the same for both cells, it cancels out. Thus:
RnWm      Pni * (1 - Pmi)
----  =  -----------------
WnRm     (1 - Pni) * Pmi
where we have used i to indicate the particular item being replicated, and Pni to indicate the probability of success of person n on this item i. Thus 1-Pni is the corresponding probability of failure.
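The limiting argument above can be checked by simulation. The probabilities 0.7 and 0.4 below are hypothetical values for Pni and Pmi, chosen only for illustration:

```python
import random

# Sketch: simulate many replications of one item i given to persons n and m,
# assuming hypothetical success probabilities Pni = 0.7 and Pmi = 0.4.
random.seed(1)
p_ni, p_mi = 0.7, 0.4
rn_wm = wn_rm = 0                      # counts for the two informative cells
for _ in range(200_000):
    n_right = random.random() < p_ni
    m_right = random.random() < p_mi
    if n_right and not m_right:
        rn_wm += 1                     # n right, m wrong
    elif m_right and not n_right:
        wn_rm += 1                     # n wrong, m right

observed = rn_wm / wn_rm
expected = (p_ni * (1 - p_mi)) / ((1 - p_ni) * p_mi)   # = 3.5
print(observed, expected)
```

With many replications the observed ratio of the two informative cells settles near Pni(1-Pmi)/((1-Pni)Pmi), as the derivation claims; the cells where both succeed or both fail never enter the comparison.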
Use of Objectivity:
What happens when we require this comparison to maintain objectivity? Then the comparison of the performance of persons n and m must not depend on which particular item we use to compare them, i.e., the parameters must be "separate". If we choose to use item j we must obtain the same result. Expressing this algebraically:
 Pni * (1 - Pmi)     Pnj * (1 - Pmj)
----------------- = -----------------   for all i, j
(1 - Pni) * Pmi     (1 - Pnj) * Pmj
Rewriting
  Pni       Pnj * (1 - Pmj)       Pmi
-------  =  ----------------  *  -------
1 - Pni     (1 - Pnj) * Pmj      1 - Pmi
However, again by objectivity, the interaction of person n and item i must not depend on which person m and which item j is used for comparison in the measuring process. Consequently we can choose the (ability) measure of person 0 to define the frame of reference for the persons and the calibration (measure) of item 0 to define the frame of reference for the items and so provide fixed reference points on the person and item measurement scales. Thus
  Pni       Pn0 * (1 - P00)       P0i
-------  =  ----------------  *  -------
1 - Pni     (1 - Pn0) * P00      1 - P0i

then

  Pni        Pn0        P0i      1 - P00
-------  =  -------  * -------  * -------
1 - Pni     1 - Pn0    1 - P0i      P00

         =  f(n) * g(i) * constant
This is a multiplicative model, in which we can bring the frames of reference for persons and items into conjunction by choosing the reference item and person such that P00 = 0.5 which makes the constant term 1.
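The separability that objectivity demands can be verified directly on this multiplicative form. The person terms f(n), f(m) and item terms g(i), g(j) below are hypothetical values, with the constant set to 1 by the choice P00 = 0.5:

```python
# Sketch: under the multiplicative model, the comparison of persons n and m
# is the same whichever item is used. Hypothetical person and item terms:
f = {"n": 4.0, "m": 1.5}               # person terms f(n), f(m)
g = {"i": 0.5, "j": 2.0}               # item terms g(i), g(j)

def p(person, item):
    odds = f[person] * g[item]         # constant = 1, since P00 = 0.5
    return odds / (1 + odds)

def comparison(item):
    """The objective comparison Pni(1-Pmi) / ((1-Pni)Pmi) for a given item."""
    return (p("n", item) * (1 - p("m", item))) / ((1 - p("n", item)) * p("m", item))

print(comparison("i"), comparison("j"))   # identical: both equal f(n)/f(m)
```

The item term cancels from the comparison, leaving f(n)/f(m) = 4.0/1.5 whichever item is chosen; this is the sample-free and test-free property in miniature.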
The measurement scale now defined has the properties of a ratio scale, in which a zero corresponds to the measure for a person having no chance of success on any item with a non-zero measure and the calibration of an item on which there is no chance of success by any person with a non-zero measure.
Within the frame of reference now specified, (Pn0)/(1-Pn0) has a value between 0 and infinity depending only on person n, and (P0i)/(1-P0i) has a value between 0 and infinity depending only on item i.
The ratio scale defined by Pni/(1-Pni) can be transformed into an equal-interval linear scale with a logarithmic function, so that

log_e(Pn0/(1-Pn0)) = Bn

log_e(P0i/(1-P0i)) = -Di
and
log_{e}(Pni/(1-Pni)) = Bn - Di
or
Pni = exp(Bn - Di) / (1 + exp(Bn - Di))
where the item calibration Di is dependent only on the attributes of item i, which we can now call its difficulty, and Bn is the measure dependent only on the attributes person n, which we can call his ability, and the choice of P00 as 0.5 produced a constant of value 1 with logarithm of 0.
This model relating the ability of person n and the difficulty of item i to the performance of person n on item i is the objective model of measurement known as the Rasch model.
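The final model is simple enough to state as a one-line function. The ability and difficulty values passed in below are hypothetical logit values, used only to illustrate the formula:

```python
import math

# Sketch of the final model: Pni = exp(Bn - Di) / (1 + exp(Bn - Di)),
# with hypothetical ability Bn and difficulty Di in logits.
def rasch_probability(ability, difficulty):
    """Probability that a person succeeds on a dichotomous item."""
    return math.exp(ability - difficulty) / (1 + math.exp(ability - difficulty))

print(rasch_probability(1.0, 1.0))   # 0.5 -- ability equals difficulty
print(rasch_probability(2.0, 0.0))   # ~0.88 -- able person, easy item
```

When Bn = Di the log-odds are 0 and the success probability is exactly 0.5, which is the meaning given above to the reference choice P00 = 0.5.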
^{1} "On Specific Objectivity: An Attempt at Formalizing the Request for Generality and Validity of Scientific Statements." Georg Rasch, 1977
5. However, if this globality within A holds for any two objects O1 and O2 in O, we shall characterize pairwise comparisons of objects as defined by (V:4) as specifically objective within the frame of reference F.
The term "objectivity" refers to the fact that the result of any comparison of two objects within O is independent of the choice of the agent A within A and also of the other elements in the collection of objects O; in other words: independent of everything else within the frame of reference other than the two objects which are to be compared and their observed reactions.
And the qualification "specific" is added because the objectivity of these comparisons is restricted to the frame of reference F defined in (V:1). This is therefore denoted as the frame of reference for the specifically objective comparisons in question.
This also makes clear that the specific objectivity is not an absolute concept, it is related to the specified frame of reference.
It also deserves mention that this definition concerns only comparisons of objects, but within the same frame of reference it can be applied to comparisons of agents as well.
Dichotomous Rasch model derived from specific objectivity. Wright BD, Linacre JM. … Rasch Measurement Transactions, 1987, 1:1 p.5-6
The URL of this page is www.rasch.org/rmt/rmt11a.htm