These letters illustrate how Lord's and Wright's explorations intersected and then diverged: Lord grounded his thinking in the normal ogive and the frailties of empirical data, while Wright grounded his in objective measurement and the demands of an ideal model.
Letter from Frederic M. Lord to Benjamin D. Wright, November 18, 1965: "Rasch's model for unspeeded tests [the Rasch dichotomous model] can be considered as a special case of the normal-ogive model, as Rasch himself points out extremely briefly at the end of Section 8 of his Chapter VII. The usual normal-ogive model has two parameters for each item, whereas Rasch uses only one of these. Rasch's model is thus a somewhat special case. Birnbaum's [2-PL] logistic model seems to provide a very satisfactory approximation to the normal-ogive model with two parameters per item. Altogether, we are devoting six chapters to the normal-ogive model and to Birnbaum's logistic model in our book ["Statistical Theories of Mental Test Scores"]."
Ben Wright to Fred Lord, November 23, 1965: "About Rasch's item analysis model as described in the latter part of his book, I think he would be horrified to learn that you regard his model as a special case of the normal-ogive model. The special feature of his model is that it allows for separating parameters of objects and agents, that is of children and test items. This is not possible with the normal-ogive model, and, in fact, if one sets down a few reasonable characteristics of objectivity, it can be proven that in the special case where observations are limited to ones and zeros, that the Rasch item analysis model is the only model which retains parameter separability. From Rasch's point of view this separability is a sine qua non for objective measurement."
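Wright's separability claim can be checked numerically: under the dichotomous Rasch model, the probability of a response pattern conditional on its raw score does not involve the person parameter at all, which is what lets item parameters be estimated free of the persons. A minimal sketch (function names and the item difficulties are my own, chosen for illustration):

```python
import math
from itertools import product

def rasch_p(theta, b):
    """Rasch probability of a correct response: 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pattern_prob(theta, pattern, bs):
    """Joint probability of a 0/1 response pattern across the items."""
    p = 1.0
    for x, b in zip(pattern, bs):
        pr = rasch_p(theta, b)
        p *= pr if x == 1 else (1.0 - pr)
    return p

def conditional_prob(theta, pattern, bs):
    """Probability of the pattern given its raw score r = sum(pattern)."""
    r = sum(pattern)
    denom = sum(pattern_prob(theta, pat, bs)
                for pat in product([0, 1], repeat=len(bs))
                if sum(pat) == r)
    return pattern_prob(theta, pattern, bs) / denom

bs = [-1.0, 0.0, 1.5]        # illustrative item difficulties
pattern = (1, 0, 1)          # a response pattern with raw score r = 2
# Conditional on the raw score, the pattern probability is the same
# at every ability level -- the person parameter has dropped out:
vals = [conditional_prob(theta, pattern, bs) for theta in (-2.0, 0.0, 2.0)]
print(vals)
```

The same computation with a second item parameter (varying discrimination) would leave the conditional probabilities dependent on theta, which is the sense in which separability singles out the Rasch form.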
Fred Lord to Ben Wright, November 26, 1965: "I am aware of the virtue of Rasch's model, which he elucidates very well in his chapter in the Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability. On the other hand, it is quite clear that his model cannot really apply to the types of test items usually used in our tests. We all know that test items can have the same difficulty level and still differ very much in discriminating power -- some items have high discriminating power and some have none at all. This means that the item characteristic curves of typical test items frequently cross each other. In Rasch's model, it is impossible for the characteristic curves to intersect (except, of course, at the extreme ends where all curves meet in the same points).
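Lord's point can be verified directly: with a second (discrimination) parameter, item characteristic curves can intersect, whereas with a common discrimination (the Rasch case) the easier item's curve lies above the harder one's everywhere. A sketch using the 2-PL logistic form, with parameter values chosen purely for illustration:

```python
import math

def icc(theta, a, b):
    """2-PL logistic item characteristic curve."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

thetas = [t / 10.0 for t in range(-40, 41)]      # ability grid from -4 to 4

# Two 2-PL items with different discriminations: their curves cross.
p1 = [icc(t, 2.0, 0.5) for t in thetas]          # highly discriminating
p2 = [icc(t, 0.5, -0.5) for t in thetas]         # weakly discriminating
diffs = [x - y for x, y in zip(p1, p2)]
crosses = any(d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:]))

# Two Rasch items (a = 1 for both): the curves never intersect.
q1 = [icc(t, 1.0, -0.5) for t in thetas]         # easier item
q2 = [icc(t, 1.0, 0.5) for t in thetas]          # harder item
never_cross = all(x > y for x, y in zip(q1, q2))
print(crosses, never_cross)   # True True
```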
"This leaves us with a dilemma. Shall we have objective measurement, which does not really hold for the test items we use? Or shall we allow the term measurement to include what we get from actual test items? I suppose one possible solution would be to discard all of those items that violate Rasch's assumptions. This possibility would certainly be an interesting one to explore."
Melvin R. Novick to Ben Wright, November 30, 1965: "I enjoyed reading your comment [in the letter to Lord] on Rasch's work as I too share a certain enthusiasm for it despite certain reservations and qualifications. ... More to the point, however, is Birnbaum's demonstration (see page 15 of part V of our text) that the third Rasch model is a special case of his more general logistic model which obtains when all items have the same discriminating power. Since few tests are composed of items all having the same discriminating power, the practical utility of the third Rasch model would seem to be limited."
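Birnbaum's demonstration can be sketched in a few lines: when every item shares one discrimination a, the common a can be absorbed into the scale of the metric, since a(theta - b) = (a·theta) - (a·b), leaving exactly the Rasch form. The function names below are my own:

```python
import math

def two_pl(theta, a, b):
    """Birnbaum's 2-PL logistic model: P = 1 / (1 + exp(-a(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def rasch(theta, b):
    """Rasch dichotomous model: the 2-PL with discrimination fixed at 1."""
    return two_pl(theta, 1.0, b)

# A common discrimination rescales the metric but changes nothing else:
theta, a, b = 0.7, 1.3, -0.4
same = abs(two_pl(theta, a, b) - rasch(a * theta, a * b)) < 1e-12
print(same)   # True
```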
Ben Wright to Fred Lord, December 3, 1965: "If you write out Rasch's model for the binary case, that is where the alternative answers are right and wrong, and introduce a second item parameter, ... you can then take account of the variation in discriminating power of the items. This puts the model into the situation of there being one person parameter and two item parameters. The situation has a slightly unfortunate consequence as far as the estimation of item parameters is concerned. At least at present it seems to me that they now cannot be estimated entirely independently of the standardizing population.
"The other line, of course, is the one that you end up with and that is to only to accept items which conform to the simpler model, that is where the second parameter ... are all the same, let us say all one. Rasch believes that this is the only case where full objectivity can be reached. He has developed a proof which shows that only models of his kind, or models which reduce in a trivial way to his kind, allow for the specific objectivity in which he is interested.
"Should this proof stand the test of other people's scrutiny, well then I think the solution to discard all items that violate the Rasch assumptions may be the most attractive one and may even come to define the domain in which objective measurement is possible."
Ben Wright to Fred Lord, June 12, 1967: "Is there any reason for working for a normal ogive rather than a logistic ogive, or to put it in another way, is there a reason worth the added computing difficulty of working with the normal ogive?"
Fred Lord to Ben Wright, June 20, 1967: "You asked about the relative merits of the normal-ogive and logistic models. It is true that there is better a priori reason to use the normal ogive than the logistic; on the other hand, the difference between the two is so small that it would be very difficult to prove that one model was better than the other. The real answer to the dilemma is surely both models are wrong. Since they are so much alike, it seems futile to wonder whether one is slightly more wrong than the other. For this reason, I would use whichever is most convenient, until such time as we know a better model to use."
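The closeness Lord describes is conventionally quantified with the scaling constant D = 1.702: the logistic ogive with that factor differs from the normal ogive by less than 0.01 at every point. A quick numerical check (the 0.01 bound is the standard result; the evaluation grid is illustrative):

```python
import math

def normal_ogive(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logistic_ogive(x, D=1.702):
    """Logistic ogive with the conventional scaling constant D."""
    return 1.0 / (1.0 + math.exp(-D * x))

# Maximum absolute gap between the two ogives on [-6, 6]:
max_gap = max(abs(normal_ogive(x / 100.0) - logistic_ogive(x / 100.0))
              for x in range(-600, 601))
print(max_gap < 0.01)   # True
```

With the two curves this close, any data set large enough to distinguish them would almost certainly reject both, which is the substance of Lord's "both models are wrong" remark.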
Fred Lord and Ben Wright discuss Rasch and IRT Models, F. Lord & B.D. Wright ... Rasch Measurement Transactions, 2010, 24:3 p. 1289-90
The URL of this page is www.rasch.org/rmt/rmt243a.htm