How to Assign Item Weights: Item Replication or Rating Scales?

Recommendation: If the additional weight is intended to indicate a higher level of performance, then use a rating scale.
If the additional weight is intended to indicate replications of the same level of performance, then use item weighting.
If the additional weight is merely to make the scores look nicer, then use unit weighting.

Examples: a dichotomous item is scored 0-4 instead of 0-1:
1. Score levels 1, 2, 3 exist conceptually, but are not observed in these data. Analyze 0-4 as a rating scale or partial-credit item. (In Winsteps, STKEEP=Yes, IWEIGHT=1)
2. 0-4 is specified because this item is considered to be 4 times as important as a 0-1 item. Analyze as 0-1, but give the item a weight of 4, or 4 replications in the data. (In Winsteps, STKEEP=No, IWEIGHT=4)
3. 0-4 is specified because there are 25 items and we want the raw score range to be 0-100. Analyze as 0-1, but report the raw scores as 0-4. (In Winsteps, STKEEP=No, IWEIGHT=1)
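Example 2's claim that a weight of 4 is equivalent to 4 replications can be checked on the raw score with a small sketch (plain Python, not Winsteps; the responses are hypothetical):

```python
# A 0-1 item weighted 4 vs. the same responses entered 4 times:
responses = [1, 0, 1, 1]   # hypothetical dichotomous responses
weight = 4

weighted_total   = sum(weight * r for r in responses)
replicated_total = sum(r for r in responses for _ in range(weight))

print(weighted_total, replicated_total)   # 12 12 -- identical raw scores
```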

In general, each observation is expected to be an independent and equal witness to examinee ability. The scientific motivation for this expectation is comparable to the motivations for random sampling and randomization. The introduction of arbitrary emphases, such as item weights, degrades the inferential stability of results and biases conclusions in an unreproducible way.

In the political world of examinations, however, some observations are decreed more important than others. For instance, if a pass-fail decision is to be made on the composite outcome of a 100-item MCQ test and one essay graded from 0 to 10, then the examination board may decide to weight the essay rating 10 times more heavily in order to give the essay and the MCQ items supposedly "equal" weight in the final decision.

Should you fall victim to such a decree, there are several ways the weights can be implemented with Rasch computer programs. Since each method has its drawbacks, initial data screening and quality control should proceed as though no weights existed. Once the measurement process has been validated, the following assignment methods may help:

1. The essay ratings and the MCQ items are analyzed separately, yielding two ability measures for each examinee. If there is insufficient overlap among the essay ratings, then additional constraints are required, such as modelling the ratings as binomial trials and asserting that each grader is equally severe, in order for a coherent set of essay measures to be produced. For the pass-fail decision, a weighted sum of the pair of ability measures is used; the precise formula will be complicated by the different logit ranges of the two variables. The way to see what to do is to plot the MCQ measures against the essay measures, and then to draw on this plot the line that best expresses the conjoint judgment of the standard-setting committee. This method is the most comprehensible.
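The pass-fail rule in method 1 amounts to a weighted sum of the two measures. A minimal sketch follows; the weights and cut-point are hypothetical placeholders for what the standard-setting committee's plotted line would actually supply:

```python
def composite(mcq_logit, essay_logit, w_mcq=0.5, w_essay=0.5):
    """Weighted sum of the two ability measures. The weights here are
    hypothetical; in practice they must allow for the different logit
    ranges of the MCQ and essay variables."""
    return w_mcq * mcq_logit + w_essay * essay_logit

CUT = 0.0  # hypothetical pass-fail point on the composite scale

def passes(mcq_logit, essay_logit):
    """Apply the committee's pass-fail rule to one examinee."""
    return composite(mcq_logit, essay_logit) >= CUT

print(passes(1.2, -0.4))   # True: composite is about 0.4 logits, above the cut
```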

2. Each essay rating is entered 10 times (or each essay is given a weight of 10), and then the MCQ items and the essay ratings are analyzed together. This diminishes local independence among the observations but avoids the complication of two measurement scales. The replicated data will make the reported standard errors too small; in this example, they should be inflated by about 75%. The 10 essay difficulties will be reported at about the same location on the variable as the one original essay difficulty.
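Method 2's data layout can be sketched as follows; the response values are hypothetical:

```python
# Method 2: each examinee's essay rating is entered 10 times
# alongside the 100 MCQ responses (all values here are hypothetical).

mcq   = [1, 0, 1] + [1] * 97   # 100 dichotomous MCQ responses
essay = 7                      # one essay rating on the 0-10 scale

record = mcq + [essay] * 10    # the essay rating replicated 10 times
print(len(record))             # 110 observations per examinee

# Note: standard errors estimated from this replicated file are too
# small; per the text above, they need inflating by about 75%.
```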

3. Use explicit item weights, e.g., IWEIGHT= in Winsteps, but adjust the item weights to maintain approximately correct standard errors and the original score range. The original score range is 0-110. The essay is to be up-weighted 10 times, which would give a score range of 0-200. So, to keep the meaningful score range, the weights need to be adjusted by 110/200 = 0.55: each MCQ item is weighted 0.55, and the essay item is weighted 5.50. This method is operationally the simplest.
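The weight-rescaling arithmetic of method 3 can be written out directly:

```python
# Method 3's arithmetic: rescale the weights so that the weighted
# raw-score range stays at the original 0-110.

n_mcq     = 100   # 100 dichotomous MCQ items, raw range 0-100
essay_max = 10    # one essay rated 0-10
essay_up  = 10    # decreed up-weighting of the essay

original_range = n_mcq + essay_max             # 110
inflated_range = n_mcq + essay_up * essay_max  # 200

adjust  = original_range / inflated_range      # 110/200 = 0.55
w_mcq   = 1.0 * adjust                         # weight 0.55 per MCQ item
w_essay = essay_up * adjust                    # weight 5.50 for the essay

# The adjusted weights preserve the meaningful 0-110 score range:
total = n_mcq * w_mcq + essay_max * w_essay
print(round(total, 6))                         # 110.0
```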

4. Each essay rating is multiplied by 10, and then the rescaled 0-100 essay ratings are analyzed with the MCQ items. Since only every 10th category of the 0-100 essay rating scale is observed, the analysis must allow for structurally present, but empirically absent, categories (Wilson, RMT 5:1 p. 128). Again, standard errors will need to be inflated by about 75% due to the effect of the fictitious categories. Only one essay difficulty will be reported, but it will not be at the same location on the variable as the 0-10 essay's would have been. By convention, the difficulty of a rating-scale item is chosen so that the sum of the step difficulties is zero, i.e., it is at the location on the variable where the highest and lowest possible ratings on the item are equally probable. If the difficulty of the 0-10 essay item is D logits from the center of the person ability distribution, the difficulty of the 0-100 essay item will be much closer to the mean ability, only about D/10 logits away. This makes the construct harder to understand, and can be confusing if the assigned weights are changed.
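Method 4's rescaling, and the resulting empty categories, can be sketched like this (the ratings are hypothetical; keeping the empty intermediate categories would correspond to STKEEP=Yes in Winsteps, as in example 1 above):

```python
# Method 4: each 0-10 essay rating is multiplied by 10, so on the
# nominal 0-100 scale only every 10th category can ever occur.

ratings  = [0, 3, 7, 10]              # hypothetical 0-10 essay ratings
rescaled = [10 * r for r in ratings]  # now on the 0-100 scale

observed = sorted(set(rescaled))
print(observed)                       # [0, 30, 70, 100]

# Categories that are structurally present but can never be observed:
absent = [c for c in range(101) if c % 10 != 0]
print(len(absent))                    # 90 fictitious categories
```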


Assigning item weights: Item Replication or Rating Scales? Linacre JM, Wright BD. Rasch Measurement Transactions, 1995, 8:4 p.403



The URL of this page is www.rasch.org/rmt/rmt84p.htm
