MEASUREMENT RESEARCH ASSOCIATES
TEST INSIGHTS
December 2008
Greetings!
 
After working with many tests over the years, I have seen very clearly how measurement error affects test reliability. It is also my observation that the quality of the items, rather than their number, is the key to reducing measurement error and improving test reliability.
 
Surintorn Suanthong, Ph.D.
Manager, Test Analysis and Research
 
Reliability for Multiple Choice Examinations

There are several ways to calculate the reliability of a multiple choice examination. Each formula says something slightly different about the precision with which the candidates are measured.
 
KR-20, Cronbach Alpha, and Rasch separation reliability are all estimates of "true person variance / observed person variance" for the sample. KR-20 is based on split halves, while Cronbach Alpha is based on analysis of variance; the two are identical for complete, dichotomous tests, and both are based on raw scores. Rasch separation reliability is computed from the Rasch measures and their respective standard errors. The biggest practical difference is that KR-20 and Cronbach Alpha include extreme items, which are calculated to have small error variance and so may increase the reliability. For Rasch candidate separation reliability, extreme item measures are estimated with large standard errors, which are likely to lower the reliability.
 
Regardless of the formula used, the impact of the error of measurement on the variance of the candidates' scores is the key to understanding reliability. Generally, the larger the error of measurement, the lower the reliability. Measurement error is affected by 1) the number of items, 2) the statistical performance of the items, and 3) the quality, clarity, and relevance of the items. Therefore, the number of items on the test is not the sole factor that contributes to the reliability estimate.
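In Rasch terms this relationship can be written directly (a sketch, taking the error of measurement to be the root-mean-square standard error of the candidate measures):

    Reliability = true person variance / observed person variance
                ≈ (SD^2 - RMSE^2) / SD^2

where SD is the standard deviation of the candidate measures and RMSE is the root mean square of their standard errors, both in logits.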
 
The table shows results from five different multiple choice examinations. The number of items on the exam relates, in part, to the error of measurement. However, the key is the size of the standard deviation relative to the error of measurement. When the standard deviation is large relative to the error of measurement, the estimated reliability is high; when the error of measurement is large relative to the standard deviation, as in Exam 5, the estimated reliability is low. The goal is to reduce the error of measurement so that differences among the candidates are accurately measured. In the end, it all depends on the quality of the items, rather than the number of items on the exam.

 

Exam      Number of Items   Error of Measurement*   Standard Deviation*   Reliability
Exam 1          211                  .17                    .65               .93
Exam 2          288                  .14                    .51               .92
Exam 3          171                  .17                    .56               .90
Exam 4          236                  .18                    .47               .85
Exam 5          235                  .15                    .31               .76

*presented in logits
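As a rough check (assuming the Error of Measurement column is the root-mean-square standard error of the candidate measures), the Reliability column can be reproduced from the two logit columns with a few lines of Python:

# Reproduce the table's reliabilities from its Error and SD columns (logits),
# using reliability = (SD^2 - Error^2) / SD^2
exams = {
    "Exam 1": (0.17, 0.65),
    "Exam 2": (0.14, 0.51),
    "Exam 3": (0.17, 0.56),
    "Exam 4": (0.18, 0.47),
    "Exam 5": (0.15, 0.31),
}
for name, (error, sd) in exams.items():
    reliability = (sd**2 - error**2) / sd**2
    print(f"{name}: {reliability:.2f}")
# Prints 0.93, 0.92, 0.91, 0.85, 0.77 -- matching the reported values within rounding

Exam 5 makes the point: its error of measurement is not unusually large in absolute terms, but it is large relative to the spread of the candidates, so the reliability drops.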

 
Measurement Research Associates, Inc.
505 North Lake Shore Dr., Suite 1304
Chicago, IL  60611
Phone: (312) 822-9648     Fax: (312) 822-9650
 

