The VF-14 is a self-report instrument covering 14 vision-dependent activities, designed to measure the need for, and outcomes of, cataract surgery (Steinberg et al., 1994). Each item is rated on a 1-5 scale (1 = unable, 5 = no difficulty). The developers reported a Cronbach's alpha of .85. In our study of 53 patients, the original 14 items produced a reliability coefficient of .89.
Recently, Uusitalo et al. (1999) proposed reducing the VF-14 to seven items in order to shorten administration time. The Figure shows a map of the 14 items of the VF-14 in hierarchical order and the distribution of our patients. The seven items chosen by Uusitalo et al. (1999) are indicated with an asterisk (*). The Spearman-Brown prophecy formula predicts a reliability of .82 for their test on our sample, which is what we obtained (see Table).
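For reference, the prediction comes from the standard form of the Spearman-Brown prophecy formula:

\rho_{new} = \frac{n\,\rho_{old}}{1 + (n - 1)\,\rho_{old}}

where \rho_{old} is the reliability of the original test, \rho_{new} is the predicted reliability of the lengthened or shortened test, and n is the ratio of new to original test length (n = 7/14 when halving the VF-14).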
Patients in our sample, as in the sample reported by Steinberg et al. (1994), have very good visual functioning. Many of the items are not targeted to the visual function of these patients. When a set of items is not matched to the sample, reliability is degraded. When patients fail all or most items (i.e., the test is too hard), or when patients pass all or most items (i.e., the test is too easy), "it is as if one had shortened the test, since all differentiation is based on just a few items that some can do and some cannot" (Thorndike & Hagen, 1977, p. 89).
For a different subset of seven VF-14 items, better targeted to the visual abilities of the patients in the sample (indicated with a hash mark, #), we obtained a higher reliability of .86. In contrast, the seven items most off-target from patient abilities (indicated with a tilde, ~) yielded a coefficient of .61, substantially below the predicted .82.
The Figure also shows that some of the rating scale categories were largely irrelevant to the functioning of our patients. For our best-targeted seven items, we tried combining categories 2 (great deal of difficulty) and 3 (moderate difficulty) in the analysis. The Spearman-Brown prophecy formula predicted a reduced reliability of .76, but the obtained reliability remained .86. The extra off-target category was producing as much noise as information!
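As an illustration of the collapsing step, here is a minimal sketch in Python. The data, the 1-5 coding, and the use of Cronbach's alpha are assumptions for the example; the coefficients in the Table may come from a Rasch person-reliability analysis rather than raw alpha, so the sketch shows only the recoding logic, not the reported values.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a persons-by-items matrix of scores."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def collapse_categories_2_and_3(scores: np.ndarray) -> np.ndarray:
    """Merge categories 2 (great deal of difficulty) and 3 (moderate difficulty):
    ratings of 3 and above shift down by one, giving a 4-category (1-4) scale."""
    return np.where(scores >= 3, scores - 1, scores)

# Hypothetical data: 53 patients rating 7 items on the original 1-5 scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(53, 7))

print(cronbach_alpha(ratings))                               # 5-category scale
print(cronbach_alpha(collapse_categories_2_and_3(ratings)))  # collapsed 4-category scale
```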
How, then, can one coherently determine the measurement capability of a test?
Wright's separation index is a better guide to decision-making than a reliability coefficient. How many distinguishable strata are needed for clinical purposes with our sample? A reliability of .8 gives a separation of 2, so that two strata can be usefully distinguished. We achieved this with seven items and a shorter rating scale.
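The correspondence between reliability and separation used here follows from the standard relationship between a reliability coefficient R and Wright's separation index G:

G = \sqrt{\frac{R}{1 - R}}

so that R = .8 gives G = \sqrt{.8/.2} = 2.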
Note: in RUMM2020 documentation, the "Separation Index" is the Rasch reliability.
A reliability of .9 gives a separation of 3, and so 3 strata. We came close (2.92 with the full 14 items), but none of the versions achieved it. The maximum separation in the last line of the Table is an estimate of the highest separation each test can produce with any sample.
Trudy Mallinson
Joan Stelmack
Steinberg EP, Tielsch JM, et al. (1994). The VF-14. An index of functional impairment in patients with cataract. Archives of Ophthalmology, 112(5), 630-638.
Thorndike RL, Hagen EP. (1977). Measurement and evaluation in psychology and education (4th ed.). New York, NY: Wiley.
Uusitalo RJ, Brans T, et al. (1999). Evaluating cataract surgery gains by assessing patients' quality of life using the VF-7. Journal of Cataract & Refractive Surgery, 25(7), 989-994.
Table. Reliability and separation for five versions of the VF-14 in our sample of 53 patients.

| Number of Items | 14 (Steinberg) | 7 (Uusitalo) | 7 (on target) | 7 (off target) | 7 (on target) |
|---|---|---|---|---|---|
| Number of Rating Scale Categories | 5 | 5 | 5 | 5 | 4 (collapsed) |
| Predicted Reliability | (.89) | .82 | .82 | .82 | .76 |
| Obtained Reliability | .89 | .82 | .86 | .61 | .86 |
| Wright's Separation | 2.92 | 2.01 | 2.38 | 1.26 | 2.35 |
| Max. Separation (Estimated) | 6 | 5 | 5 | 4 | 4 |
Going beyond Unreliable Reliabilities. Mallinson T., Stelmack J. Rasch Measurement Transactions, 2001, 14:4 p.787-8