April 2008
The reliability of medically related oral certification examinations is critical because pass/fail decisions are made from their scores. Over the past 20 years, research and practice have demonstrated that the reliability of oral examinations is influenced by the structure of the examination and by the use of the multi-facet model to produce accurate candidate results.
Surintorn Suanthong, Ph.D.
Manager, Test Analysis and Research

Reliability of Oral Certification Examinations

Oral certification examinations are often noted for their validity rather than their reliability. However, when oral certification examinations are used to make pass/fail decisions, those decisions should be as accurate as possible.


The structure of the oral certification examination is critical to ensure reliability. It is important to structure the examination scoring so that examiners record as much information as possible about each candidate's performance. When examiners independently give analytic ratings for standardized clinical tasks within cases, detailed information about the candidate's performance is recorded. If only summative holistic ratings are recorded, however, examiners mentally combine their impressions of the candidate's performance into a single holistic rating, which provides little scoring information. When analytic scores are used, there is enough information to calculate candidate means, standard deviations, and measurement errors, as well as a candidate separation reliability estimate, (SD² − SE²) / SD², where SD is the standard deviation of the candidate measures and SE is the root-mean-square measurement error. This estimate documents the accuracy of the measured differences among candidate performances; its calculation requires that multiple ratings be awarded to each candidate. Most oral certification examinations use a rating scale to measure candidate performance.
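The separation reliability computation described above can be sketched in a few lines. The candidate measures and standard errors below are illustrative values, not data from any examination discussed in this note:

```python
def separation_reliability(measures, errors):
    """Candidate separation reliability, (SD^2 - SE^2) / SD^2, where SD is the
    standard deviation of the candidate measures and SE is the root-mean-square
    of the candidates' measurement errors."""
    n = len(measures)
    mean = sum(measures) / n
    sd2 = sum((m - mean) ** 2 for m in measures) / n   # observed variance of measures
    se2 = sum(e ** 2 for e in errors) / n              # mean error variance
    return (sd2 - se2) / sd2

# Illustrative candidate measures (e.g., in logits) and their standard errors
measures = [1.2, 0.4, -0.3, 0.9, -1.1, 0.0]
errors = [0.30, 0.28, 0.35, 0.31, 0.40, 0.29]
print(round(separation_reliability(measures, errors), 2))  # prints 0.82
```

The estimate approaches 1.0 as the spread of candidate measures grows large relative to their measurement errors, and falls toward 0 when error variance swamps the observed differences among candidates.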


The table below shows the structure of several medically related oral certification examinations and their candidate separation reliabilities. These oral certification examinations include standardized cases and require the examiner to give analytic ratings for standardized tasks within each case. Tasks such as diagnosis, treatment, outcome, or ethics may be rated. The content of the cases covers pertinent subjects for the specialty. Depending on the number of examiners, cases, and tasks, a great deal of information can be collected about the candidate's performance. Oral exams are never completely without subjectivity; however, analytic scoring and the use of the multi-facet analysis model provide enough information about the candidate that there is measurable accuracy with regard to candidate outcomes.
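With this examination structure, the total number of ratings a candidate receives is simply the product of cases, tasks per case, and examiners per case. A minimal sketch, using illustrative numbers rather than values from the table:

```python
def total_ratings(cases, tasks_per_case, examiners_per_case):
    """Total analytic ratings awarded to one candidate when every examiner
    rates every task in every case."""
    return cases * tasks_per_case * examiners_per_case

# Illustrative structure: 6 cases, 4 rated tasks per case, 2 examiners per case
print(total_ratings(6, 4, 2))  # prints 48
```

More ratings per candidate generally shrink the measurement error, which in turn raises the candidate separation reliability.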


[Table: structure and candidate-separation reliability of five medically related oral certification examinations. Columns: oral certification exam; number of cases per candidate; number of tasks within cases; number of examiners per case; total number of ratings given to a candidate; candidate-separation reliability. Only the row labels (Exam 1 through Exam 5) and one cell value ("7 or 9") survive in this copy of the table.]

Measurement Research Associates, Inc.
505 North Lake Shore Dr., Suite 1304
Chicago, IL  60611
Phone: (312) 822-9648     Fax: (312) 822-9650
