Test Length and Test Reliability for Multiple Choice Examinations
A common belief is that longer multiple-choice tests are more reliable because additional items automatically reduce the error of measurement. A sufficient number of items must indeed be included to cover the content areas being tested; however, other factors also contribute to how efficiently a test measures and separates candidate ability.
The reliability index used here is candidate separation reliability, calculated from Rasch logit measures as (SD² − SE²) / SD², where SD² is the observed variance of the candidate measures and SE² is the average error variance of those measures. This index is appropriate because the goal of any certification examination is to distinguish candidates who are worthy of passing from those who are not. The better the test items distinguish among candidate abilities, the less measurement error there is in the examination and the higher the candidate separation reliability, regardless of the number of items. For example, deleting poorly performing items makes the test shorter but also increases the accuracy of measurement, because the items producing the most error are removed from the examination.
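As a rough illustration, the sketch below computes candidate separation reliability from a small set of hypothetical Rasch person measures and standard errors; the measure values, the standard errors, and the use of NumPy are assumptions for the example, not data from the study.

```python
import numpy as np

# Hypothetical Rasch person measures (in logits) and their standard errors,
# as would be produced by a Rasch calibration of a certification exam.
measures = np.array([-1.2, -0.4, 0.1, 0.3, 0.8, 1.5, 2.0])
std_errors = np.array([0.45, 0.40, 0.38, 0.38, 0.40, 0.44, 0.52])

# SD^2: observed variance of the candidate measures (true variance + error).
sd_sq = np.var(measures, ddof=1)

# SE^2: average error variance of the measures.
se_sq = np.mean(std_errors ** 2)

# Candidate separation reliability: the proportion of observed variance
# attributable to true differences among candidates.
reliability = (sd_sq - se_sq) / sd_sq
print(f"Candidate separation reliability: {reliability:.2f}")
```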
In this simple study, we compared test length with test reliability across six different exams. The table shows the tests in order of reliability, along with the number of items in each test. The number of items and the reliability do not appear to be related: a test of 211 items had a reliability of .93, while another test of 233 items had a reliability of .78. It appears that the quality of the items is as important as, or even more important than, the absolute number of items on the test. Of course, a reasonable number of items must be included to ensure accurate measurement and content coverage.
Table of Test Length and Reliability

Exam      N of items    Reliability
Exam 1    211           .93
Exam 2    133           .91
Exam 3    193           .90
Exam 4    129           .86
Exam 5    236           .85
Exam 6    233           .78
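As a quick numeric check of the "not related" observation, the sketch below computes the Pearson correlation between the number of items and the reliability figures in the table; the choice of a simple correlation (and the use of NumPy) is an assumption for illustration, not an analysis reported in the study.

```python
import numpy as np

# Number of items and candidate separation reliability for the six exams
# listed in the table above.
n_items = np.array([211, 133, 193, 129, 236, 233])
reliability = np.array([0.93, 0.91, 0.90, 0.86, 0.85, 0.78])

# Pearson correlation between test length and reliability; a small or
# negative value is consistent with the observation that more items do not
# by themselves produce higher reliability.
r = np.corrcoef(n_items, reliability)[0, 1]
print(f"Correlation between test length and reliability: r = {r:.2f}")
```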