Evaluating Bookmark Judgments

There has been a great deal of work done on how to evaluate standard-setting procedures. Hambleton and Pitoniak (2006) suggested procedural, internal, and external criteria for evaluating standard-setting methods. Procedural criteria focus on implementation issues and documentation, internal criteria stress inter-panelist and intra-panelist consistency, and external criteria address comparisons to other methods and the reasonableness of the performance levels.

The two most popular methods for collecting judgments from standard-setting panelists are the modified-Angoff and bookmark procedures (Cizek & Bunch, 2007). The IRT-based bookmark procedure (Lewis, Mitzel, & Green, 1996; Mitzel, Lewis, Patz, & Green, 2001) is becoming the standard-setting method of choice in many statewide assessment programs, even though less research has been conducted on bookmark methods than on modified-Angoff methods (Plake, 2007).

In a series of articles with my colleagues, I proposed using Rasch measurement theory to evaluate the quality of judgments obtained from standard-setting panelists (Engelhard & Anderson, 1998; Engelhard & Cramer, 1997; Engelhard & Gordon, 2000; Engelhard & Stone, 1998). A summary of this approach is forthcoming (Engelhard, in press). The approach is based on the many-facet Rasch measurement (MFRM) model, and it incorporates many of the internal criteria described by Hambleton and Pitoniak (2006). The MFRM model can be used to evaluate the quality of standard-setting judgments obtained from bookmark panelists. The MFRM model for bookmark judgments is:

log_e [P_nijk / P_nij(k-1)] = b_n - d_i - w_j - t_k   (1)

where:
P_nijk = probability of panelist n giving a bookmark rating of k on item i for round j,
P_nij(k-1) = probability of panelist n giving a bookmark rating of k-1 on item i for round j,
b_n = judged performance level for panelist n,
d_i = judged difficulty for item i,
w_j = judged performance level for round j, and
t_k = judged performance standard for bookmark rating category k relative to category k-1.

The rating category coefficients, t_k, define the performance standards or cut scores.
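The adjacent-category structure of equation (1) can be sketched in Python. All parameter values below are hypothetical illustrations (in logits), not estimates from the Table 1 data:

```python
import math

def category_probabilities(b_n, d_i, w_j, tau):
    """Category probabilities under the adjacent-category model in
    equation (1): log[P_k / P_(k-1)] = b_n - d_i - w_j - t_k.
    `tau` holds the thresholds t_2..t_K; category 1 is the base category."""
    logits = [0.0]  # cumulative adjacent-category logit for category 1
    for t_k in tau:
        logits.append(logits[-1] + (b_n - d_i - w_j - t_k))
    denom = sum(math.exp(x) for x in logits)
    return [math.exp(x) / denom for x in logits]

# Hypothetical values: panelist b = 0.5, item d = -0.2, single round w = 0.0,
# thresholds t_2..t_4 = -1.0, 0.0, 1.0 for rating categories 1-4.
probs = category_probabilities(0.5, -0.2, 0.0, [-1.0, 0.0, 1.0])
```

Each probability in `probs` is the model's chance of a panelist with measure b assigning one of the four bookmark rating categories; the log-odds of adjacent categories recover equation (1) directly.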

To illustrate the MFRM model, an example from Ferdous and Plake (2007) is presented in Table 1. Six panelists provided bookmark ratings (performance levels from 1 to 4) for five items. The cell entries represent panelist judgments regarding the performance level of each item. The observed means for the items range from 1.00 to 3.67, reflecting the ordering of the items as they would be listed in the ordered item booklet. The observed means for the panelists range from 1.60 to 2.80, with Panelists 2 and 5 holding the lowest view of the performance needed to succeed on these five items and Panelist 4 the most severe. This ordering is reflected in the estimated values of the b's and the d's.
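The observed means are simple row and column averages of the judgment matrix. A sketch with a made-up rating matrix (an illustration, not the actual Table 1 judgments) shows the computation:

```python
# Hypothetical 6-panelist x 5-item matrix of bookmark ratings (1-4);
# the actual judgments appear in Table 1, from Ferdous and Plake (2007).
ratings = [
    [1, 2, 2, 3, 3],  # Panelist 1
    [1, 1, 2, 2, 2],  # Panelist 2
    [1, 2, 2, 3, 4],  # Panelist 3
    [1, 2, 3, 4, 4],  # Panelist 4 (most severe in this illustration)
    [1, 1, 2, 2, 2],  # Panelist 5
    [1, 2, 2, 3, 3],  # Panelist 6
]

# Panelist means (rows) order the panelists by severity of judgment.
panelist_means = [sum(row) / len(row) for row in ratings]
# Item means (columns) reflect the ordering in the ordered item booklet.
item_means = [sum(col) / len(col) for col in zip(*ratings)]
```

In this illustrative matrix the panelist means run from 1.60 (Panelists 2 and 5) to 2.80 (Panelist 4), and the item means increase monotonically, mirroring the ordered item booklet.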

This information is presented in the variable map in Figure 1. Both panelists and items are centered at zero, and round (only one round in the example) is not centered. The panelists range in interjudge agreement from 40.0% to 56.0%. The overall observed agreement is 48.0% with an expected agreement of 39.6% based on the model. Item 1 is not included in the agreement statistics because all of the panelists agreed to rate it in category 1.
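Observed interpanelist agreement of this kind can be computed as the percentage of panelist-pair comparisons with identical ratings; one convention, assumed in this sketch, is to drop any item on which every panelist agrees (as with Item 1 above), since such items carry no information about disagreement:

```python
from itertools import combinations

def observed_agreement(ratings):
    """Percent of panelist-pair comparisons giving identical ratings,
    excluding items on which every panelist agrees (as with Item 1,
    which all panelists rated in category 1)."""
    columns = list(zip(*ratings))                      # one column per item
    kept = [col for col in columns if len(set(col)) > 1]
    agree = total = 0
    for col in kept:
        for a, b in combinations(col, 2):              # all panelist pairs
            total += 1
            agree += (a == b)
    return 100.0 * agree / total

# Hypothetical ratings for 6 panelists x 5 items (not the Table 1 data).
ratings = [
    [1, 2, 2, 3, 3],
    [1, 1, 2, 2, 2],
    [1, 2, 2, 3, 4],
    [1, 2, 3, 4, 4],
    [1, 1, 2, 2, 2],
    [1, 2, 2, 3, 3],
]
pct = observed_agreement(ratings)
```

The expected agreement reported above (39.6%) comes from the MFRM model itself, by summing each pair's probability of matching under equation (1); the observed figure here is the purely descriptive counterpart.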

Table 2 presents the category statistics. Within the framework described here, the measures for the category coefficients are defined as the performance standards or cut scores. This definition provides the opportunity to use several graphical displays for practitioners to understand panelist judgments. Figure 2 shows the category probability curves.

Ferdous and Plake (2007) report an interjudge inconsistency index of 36%. Expressed instead as a consistency or agreement index, this value is 64%. It is higher than the Rasch estimate of 48.0% because Item 1 is included in their estimate of interjudge consistency. The MFRM model provides the opportunity to go beyond a single index of interjudge consistency. It also makes available an array of model-data fit indices and graphical displays for exploring more deeply the judgments of panelists using the bookmark procedure. Additional work is currently underway to explore the utility of this approach for evaluating bookmark ratings in a variety of standard-setting situations. Experience is still needed to determine whether or not the MFRM model can provide a suite of internal criteria for examining bookmark judgments obtained from standard-setting panelists.

George Engelhard, Jr.
Emory University

Cizek, G.J., & Bunch, M.B. (2007). Standard setting: A guide to establishing and evaluating performance standards on tests. Thousand Oaks, CA: Sage.

Engelhard, G. (in press). Evaluating the judgments of standard-setting panelists using Rasch measurement theory. In E.V. Smith, Jr., & G.E. Stone (Eds.), Applications of Rasch measurement in criterion-referenced testing. JAM Press.

Engelhard, G., & Anderson, D.W. (1998). A binomial trials model for examining the ratings of standard-setting judges. Applied Measurement in Education, 11(3), 209-230.

Engelhard, G., & Cramer, S. (1997). Using Rasch measurement to evaluate the ratings of standard-setting judges. In M. Wilson, G. Engelhard, & K. Draney (Eds.), Objective measurement: Theory into practice, Volume 4 (pp. 97-112). Norwood, NJ: Ablex.

Engelhard, G., & Gordon, B. (2000). Setting and evaluating performance standards for high stakes writing assessments. In M. Wilson & G. Engelhard (Eds.), Objective Measurement: Theory into Practice, Volume 5 (pp. 3-14). Stamford, CT: Ablex.

Engelhard, G., & Stone, G.E. (1998). Evaluating the quality of ratings obtained from standard-setting judges. Educational and Psychological Measurement, 58(2), 179-196.

Ferdous, A., & Plake, B. (2007). Interjudge inconsistency index for body of work, yes/no, and bookmark standard setting procedures. Retrieved September 2, 2007, from www.unl.edu/buros/biaco/pdf/pres07ferdous01.pdf

Hambleton, R.K., & Pitoniak, M.J. (2006). Setting performance standards. In R.L. Brennan (Ed.), Educational measurement (4th ed., pp. 433-470). Westport, CT: Praeger Publishers.

Lewis, D.M., Mitzel, H.C., & Green, D.R. (1996). Standard setting: A bookmark approach. In D.R. Green (Chair), IRT-based standard-setting procedures utilizing behavioral anchoring. Symposium presented at the Council of Chief State School Officers National Conference on Large Scale Assessment, Phoenix, AZ.

Mitzel, H.C., Lewis, D.M., Patz, R.J., & Green, D.R. (2001). The bookmark procedure: Psychological perspectives. In G.J. Cizek (Ed.), Setting performance standards: Concepts, methods and perspectives (pp. 249-281). Mahwah, NJ: Lawrence Erlbaum Associates.

Plake, B.S. (2007, April). Standard setters: Stand up and take a stand! 2006 career award address presented at the annual NCME meeting, Chicago, IL.

Evaluating Bookmark Judgments. George Engelhard, Jr. … Rasch Measurement Transactions, 2007, 21:2 p. 1097-1098
