Does objective expertise in judging acting ability exist? What distinguishes experts' judgments from the judgments of others? To investigate this, I compared the ratings given by experts, theater buffs, and novices to high school students' videotaped performances of Shakespearean monologues. The experts were casting directors from Chicago theaters or high school drama teachers who had spent many hours evaluating actors' abilities. The theater buffs were not formally trained in drama but frequented professional theater, read acting reviews, and enjoyed talking about drama. Novices seldom attended the theater, rarely read reviews, and had little experience with drama.
These judges rated the videotapes with the Judging Acting Ability Inventory, a 36-item rating instrument designed to measure both the technical aspects of acting, such as vocal technique and body movement, and the emotional and creative aspects needed to build an effective characterization. The judges repeated the rating task one month later so that the stability of aesthetic judgments over time could also be investigated. I hypothesized that the judge groups would differ significantly in their item calibrations, in their measures of the actors' abilities, and in their severities.
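The article does not write out the measurement model, but the item calibrations, actor measures, and judge severities referred to here are the parameters of a many-facet Rasch model, the model that the FACETS analysis described next estimates. In conventional notation, for actor n rated on item i by judge j in rating category k,

\log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k

where B_n is the actor's ability, D_i the item's calibration, C_j the judge's severity, and F_k the step calibration of category k of the rating scale.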
The data were analyzed with the FACETS program. Ray Adams devised a chi-square test of rating consistency, analogous to that of Hedges and Olkin (1985). The advantage of this technique over ANOVA is that the chi-square takes into account not only each calibration but also its standard error.
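The article gives no formulas, but a chi-square of this kind can be written down directly from inverse-variance weighting, in the spirit of the homogeneity statistics in Hedges and Olkin (1985). A minimal sketch in Python, assuming that each group's calibration of an item and its standard error are available from separate analyses; the function name and the numbers are hypothetical, not taken from the study:

```python
# A sketch of an inverse-variance-weighted homogeneity chi-square,
# in the spirit of Hedges and Olkin (1985); not the original analysis code.
from scipy.stats import chi2

def homogeneity_chi_square(estimates, standard_errors):
    """Test whether several calibrations, each with its own standard
    error, are statistically equivalent. Returns (Q, df, p)."""
    weights = [1.0 / se ** 2 for se in standard_errors]   # inverse-variance weights
    pooled = sum(w * d for w, d in zip(weights, estimates)) / sum(weights)
    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, estimates))
    df = len(estimates) - 1
    return q, df, chi2.sf(q, df)

# Hypothetical calibrations (logits) of one item for experts, buffs, novices:
q, df, p = homogeneity_chi_square([0.42, 0.35, 0.10], [0.08, 0.09, 0.11])
print(f"Q = {q:.2f}, df = {df}, p = {p:.3f}")
```

A large Q relative to chi-square with (number of groups - 1) degrees of freedom flags an item whose group calibrations cannot plausibly share one value; ANOVA on the raw calibrations alone would ignore how precisely each calibration is estimated.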
To my surprise, I found that the three groups shared a common understanding of nearly all the items and employed those items consistently when judging performances. Only four items had significantly different calibrations across the three groups. The items also performed stably for each group across the two rating occasions. Buffs and novices used the rating criteria in the same way as experts when those criteria were explicit and couched in understandable language.
Yet there were also some noticeable differences among the three groups. First, experts were the most severe judges and novices the most lenient. Second, the groups rated certain performances differently. Experts and buffs gave three actors significantly lower measures than novices did. Those three actors portrayed characters in mourning, and their characterizations were emotionally charged. Novices seemed to base their judgments on a single criterion - the actor's ability to display intense emotion - and were unaware of the technical shortcomings of the performances. By contrast, experts and buffs seemed to view an actor from a number of perspectives and were not overwhelmed by the emotionalism displayed. Third, experts were better able to replicate their ratings one month later than buffs and novices were. All three groups showed some change across time, but the amount of change for buffs was nearly twice that for experts, and the amount of change for novices was nearly twice that again.
This study breaks new ground by examining aesthetic judgment in the performing arts. It is a step towards the construction of an objective measurement system which drama teachers can employ to assess student growth in acting ability. Through the behavior of an intermediate group of judges, the theater buffs, it gives insight into the transition from novice to expert judge.
Hedges LV, Olkin I (1985) Statistical methods for meta-analysis. New York: Academic Press.
Myford CM (1989) The nature of expertise in aesthetic judgment. Ph.D. dissertation, University of Chicago. Dissertation Abstracts International, 50, 3562A.
Rasch Measures Hamlet. C Myford. Rasch Measurement Transactions, 1990, 4:2 p.105