In Rasch analysis, multiple significance tests abound. Commonly, there is a significance test associated with every item, every person, every rating scale category, every differential-item-functioning (DIF) effect, and more.
1. Conceptualized as single significance tests.
Significance tests are usually reported as the probabilities of single tests. On seeing that the response string associated with item 23 has a very low probability of being generated in accord with a Rasch model, we are prone to say to ourselves, "The purpose of this experiment was to test a hypothesis regarding the fit of the response string for Item 23. Consequently, the single-test probability is the relevant one."
2. Conceptualized as multiple independent tests of the same process.
Consider a 100-item test with responses that accord with the Rasch model. Then the expectation is that 5 or so item response strings will be reported with p ≤ .05, i.e., flagged as misfitting, purely by chance. So how unlikely must a response string be for it to be significantly unexpected? A technique attributed to Carlo Bonferroni employs the following logic for testing "the universal null hypothesis":
α is the Type I error for a single test (the probability of incorrectly rejecting a true null hypothesis). This is .05 for a single test of p ≤ .05. So, when the data fit the model, the probability of a correct finding for one test is (1-α), for two independent tests (1-α)^2, and for n tests (1-α)^n. Consequently the Type I error for n independent tests is 1-(1-α)^n. Thus, if we intend the Type I error for the multiple test to be α, then the level for each single test must be α' = 1-(1-α)^(1/n) ≈ α/n. So, for a 100-item test, the universal null hypothesis that "the entire set of items fits the Rasch model" is rejected at the .05 level only if at least one item is reported with p ≤ .05/100 = .0005 on its single-item test.
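As a quick numerical check, here is a minimal Python sketch (ours, not from any Rasch software) of the per-test level this logic implies:

```python
# Minimal sketch: the per-test level implied by the Bonferroni logic
# for n = 100 independent tests at a family-wise Type I error of alpha = .05.
alpha = 0.05
n = 100

exact = 1 - (1 - alpha) ** (1 / n)  # alpha' = 1 - (1 - alpha)^(1/n)
approx = alpha / n                  # the usual approximation alpha/n

print(f"exact per-test level:  {exact:.6f}")  # ~ 0.000513
print(f"approximation alpha/n: {approx:.6f}")  # 0.000500
```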
An obvious problem with adopting this technique routinely in Rasch work is that a set of items may be accepted even though it includes obviously bad items. For the universal null hypothesis to be rejected at p ≤ .05 across 100 items, at least one item would need to be reported with p ≤ .0005 (t ≥ 3.4) on its single-item test. This degree of misfit generally requires a sample size of about 1,000 to be observable. Twenty items reported with .005 < p < .01 (2.6 < t < 2.8) would not be deemed sufficient to reject the null hypothesis that the data fit the Rasch model. It can be seen that the Bonferroni logic controls Type I error, but ignores Type II error (failing to reject a false null hypothesis), especially for individual "bad" items.
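To make this concrete, here is a minimal sketch with hypothetical p-values (assumed for illustration) for the 20 bad items, showing that none of them reach the Bonferroni level:

```python
# Minimal sketch of the Type II problem: 20 "bad" items with hypothetical
# p-values spread across .005 <= p < .01 are all missed at the Bonferroni
# per-test level of .05/100 = .0005.
bonferroni_level = 0.05 / 100
bad_item_ps = [0.005 + 0.00025 * i for i in range(20)]  # .005, .00525, ..., .00975

flagged = [p for p in bad_item_ps if p <= bonferroni_level]
print(len(flagged))  # 0 -- every bad item survives the correction
```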
So, when does the Bonferroni correction work? Apparently in decision-making situations in which a production batch is to be accepted or rejected based on testing the quality of a sample from the batch. See Perneger T.V. (1998) What's wrong with Bonferroni adjustments? British Medical Journal, 316, 1236-1238.
3. Multiple tests conceptualized as accumulating individual tests.
Benjamini and Hochberg (1995) suggest that an incremental application of Bonferroni correction overcomes its drawbacks. Here is their procedure:
i) Perform the n single significance tests.
ii) Order them by ascending probability, P(i), where i = 1, ..., n.
iii) Identify k, the largest value of i for which P(i) ≤ (i/n)α.
iv) Reject the null hypothesis for tests i = 1, ..., k.
In our example of a 100-item test with 20 bad items with .005 < p < .01, the cut-off thresholds (i/n)α with α = .05 would be: .0005 for the 1st item, .005 for the 10th item, .01 for the 20th item, .015 for the 30th item. So k in our example would be at least 20, and perhaps more. All our bad items have been flagged for rejection, as the sketch below confirms.
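Here is a minimal Python sketch of the procedure applied to this example (the p-values and the helper benjamini_hochberg are illustrative assumptions, not from any statistics package):

```python
# Minimal sketch of the Benjamini-Hochberg procedure for the 100-item example:
# 20 bad items with .005 <= p < .01 and 80 well-fitting items (hypothetical p-values).
def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices of hypotheses rejected at false-discovery rate alpha."""
    n = len(p_values)
    order = sorted(range(n), key=lambda j: p_values[j])  # ranks 1..n by ascending p
    k = 0
    for rank, j in enumerate(order, start=1):
        if p_values[j] <= alpha * rank / n:  # step iii): P(i) <= (i/n) * alpha
            k = rank                         # keep the largest qualifying rank
    return order[:k]                         # step iv): reject tests 1..k

bad = [0.005 + 0.00025 * i for i in range(20)]  # the 20 bad items
good = [0.05 + 0.009 * i for i in range(80)]    # the 80 acceptable items
rejected = benjamini_hochberg(bad + good)
print(len(rejected))  # 20 -- all the bad items are flagged for rejection
```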
There are other techniques for multiple significance tests. Please contact Rasch Measurement Transactions if you have found any to be useful.
Benjamini Y. & Hochberg Y. (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society B, 57, 1, 289-300.
Fred Wolfe, Randy MacIntosh, Svend Kreiner, Rense Lange, Roger Graves, John Michael Linacre contributed to the Rasch Listserv discussion on this topic.
Wolfe F. et al. (2006) Multiple Significance Tests. Rasch Measurement Transactions, 19:3, p. 1033-44.