Does objective expertise in judging acting ability exist? What distinguishes experts' judgments from the judgments of others? To investigate this, I compared the ratings given by experts, theater buffs, and novices to high school students' videotaped performances of Shakespearean monologues. The experts were casting directors from Chicago theaters or high school drama teachers who had spent many hours evaluating actors' abilities. The theater buffs were not formally trained in drama but frequented professional theater, read acting reviews, and enjoyed talking about drama. Novices seldom attended the theater, rarely read reviews, and had little experience with drama.
These judges rated videotapes with the Judging Acting Ability Inventory, a 36-item rating instrument designed to measure the technical aspects of acting, such as vocal technique and body movement, and the emotional and creative aspects of acting needed to build an effective characterization. The judges repeated the rating task one month later so that the stability of aesthetic judgments over time could also be investigated. I hypothesized that there would be significant differences between the judge groups in their item calibrations, measures of actors' abilities, and judge severities.
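The three quantities named in the hypothesis (actor measures, item calibrations, and judge severities) are the facets of a many-facet Rasch model for rating-scale data, which in log-odds form is typically written:

```latex
\log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k
```

where \(B_n\) is the ability of actor \(n\), \(D_i\) the difficulty (calibration) of item \(i\), \(C_j\) the severity of judge \(j\), \(F_k\) the difficulty of rating-scale step \(k\), and \(P_{nijk}\) the probability that judge \(j\) awards actor \(n\) category \(k\) on item \(i\).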
The data were analyzed with the FACETS program. Ray Adams devised a chi-square test for rating consistency, analogous to the homogeneity test of Hedges and Olkin (1985). The advantage of this technique over ANOVA is that the chi-square takes into account not only each calibration but also its standard error.
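Such a precision-weighted homogeneity test can be sketched as follows. Each group's calibration is weighted by its precision (1 / SE²), so an imprecisely estimated calibration counts for less. This is a minimal illustration of the general technique; the function name and interface are hypothetical, not the actual routine Adams used.

```python
def homogeneity_chi_square(calibrations, standard_errors):
    """Test whether several estimates of the same item calibration
    (e.g., one per judge group) are statistically equivalent.

    Returns a chi-square statistic and its degrees of freedom
    (number of estimates - 1). Weighting each estimate by 1/SE^2
    is what distinguishes this test from a plain ANOVA on the
    calibrations alone.
    """
    weights = [1.0 / se ** 2 for se in standard_errors]
    # Precision-weighted mean of the calibrations
    mean = sum(w * d for w, d in zip(weights, calibrations)) / sum(weights)
    # Sum of squared deviations from the weighted mean, each
    # scaled by the precision of its estimate
    chi_sq = sum(w * (d - mean) ** 2 for w, d in zip(weights, calibrations))
    return chi_sq, len(calibrations) - 1
```

For example, calibrations of 0.5, 0.7, and 0.4 logits with standard errors of 0.1, 0.1, and 0.2 yield a chi-square of about 2.89 on 2 degrees of freedom, well short of conventional significance, so the three groups' calibrations would be judged consistent.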
To my surprise, I found that the three groups shared a common understanding of nearly all the items and employed those items consistently when judging performances. Only four items had significantly different calibrations across the three groups. Additionally, the items performed in a stable manner for each of the groups across rating occasions. Buffs and novices used the rating criteria in the same way as experts when those criteria were explicit and couched in understandable language.
Yet there were also some noticeable differences between the three groups. First, experts were the most severe while novices were the most lenient. Second, the groups rated certain performances differently. Experts and buffs gave three actors significantly lower measures than novices did. Those three actors portrayed characters in mourning, and their characterizations were emotionally charged. Novices seemed to base their judgments upon a single criterion - the actor's ability to display intense emotion - and were unaware of the technical shortcomings of the performance. By contrast, experts and buffs seemed to view an actor from a number of perspectives and were not overwhelmed by the emotionalism displayed. Third, experts were better able to replicate their ratings one month later than buffs and novices. All three groups showed some change across time, but the amount of change for buffs was nearly twice that for experts, while the amount of change for novices was nearly twice that again.
This study breaks new ground by examining aesthetic judgment in the performing arts. It is a step towards the construction of an objective measurement system which drama teachers can employ to assess student growth in acting ability. Through the behavior of an intermediate group of judges, the theater buffs, it gives insight into the transition from novice to expert judge.
Hedges LV, Olkin I (1985) Statistical methods for meta-analysis. New York: Academic Press.
Myford CM (1989) The nature of expertise in aesthetic judgment. Ph.D. dissertation, University of Chicago. Dissertation Abstracts International, 50, 3562A.
Rasch Measures Hamlet. C. Myford. Rasch Measurement Transactions, 1990, 4:2, p. 105.
The URL of this page is www.rasch.org/rmt/rmt42c.htm