A construct is an underlying latent trait that cannot be directly observed or measured (e.g., a mental property). The goal of measurement, particularly in social science research, is to develop questionnaire or test items that assess such unobservable constructs indirectly. The objective is to have items that cover as much of the construct's continuum as possible, so that information can be collected across a wide range of person performance.
In order to estimate a person's location on a construct correctly, it is imperative to define that construct well (Wright & Stone, 1979). Items are developed with the intention of covering the full spectrum of the construct being defined. When this is not achieved, the result is insufficient or redundant coverage. These two situations are referred to as 1) construct deficiency (insufficient coverage) and 2) construct saturation (redundancy). Each has implications for item bank development, which, in turn, impacts the development of computer-based and computer-adaptive tests.
An item bank is a comprehensive catalog of items for use in creating psychometrically sound fixed-length, brief-form and/or adaptive tests. These items should span the various construct dimensions and function along their respective continua at various difficulty levels. "The idea is that the test user can select test items as required to make up a particular test" (Choppin, 1978). The flexibility provided by an item bank allows the researcher to use well-calibrated, validated items without having to re-calibrate them each time they are used. The items selected can differ from test to test, allowing optimized use of individual items.
Construct Deficiency: Under-representation of content area
Construct deficiency (CD) manifests as "gaps" on the construct continuum: points at which the construct is poorly defined by the items (Schulz, 1995). In this situation, the goal is to develop items that fill these gaps at the specified logit values. Two specific types of CD are of interest:
1) statistically meaningful construct deficiency (SMCD), and 2) clinically meaningful construct deficiency (CMCD).
SMCD is a flexible index assigned by the principal investigator and the item-banking team; a gap of 0.30 to 0.50 logits between adjacent item difficulties is recommended as evidence of SMCD. CMCD is conceptualized on two levels: 1) an important content area is not covered, and 2) the overall content area is not covered fully. If an item is deemed clinically meaningful by consensus, it is kept in the bank regardless of fit.
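As a minimal sketch of the SMCD criterion, the following scans a calibrated bank for gaps between adjacent item difficulties wider than a chosen logit distance. The item difficulties and the exact 0.5-logit threshold are illustrative, not taken from any particular bank:

```python
# Sketch: flag statistically meaningful gaps (SMCD) in a calibrated
# item bank. The 0.30-0.50 logit range is the recommended evidence
# criterion; the threshold chosen here (0.5) is illustrative.

def find_gaps(item_difficulties, threshold=0.5):
    """Return (lower, upper, size) for each adjacent pair of item
    difficulties (in logits) separated by more than `threshold`."""
    d = sorted(item_difficulties)
    gaps = []
    for lo, hi in zip(d, d[1:]):
        if hi - lo > threshold:
            gaps.append((lo, hi, round(hi - lo, 2)))
    return gaps

# Hypothetical bank with one gap, between -0.4 and 0.9 logits:
bank = [-1.5, -1.1, -0.8, -0.4, 0.9, 1.2, 1.6]
print(find_gaps(bank))  # [(-0.4, 0.9, 1.3)]
```

In practice each flagged interval would then be reviewed by the item-banking team and labeled statistical, clinical, or both.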
Implications for Item Bank
The goal of an item bank is to cover the full spectrum of a construct, thus producing a reliable measure. When a construct is poorly defined, the implications for future use are: 1) floor and ceiling effects will affect individuals whose ability levels fall outside the range of item difficulties, providing inadequate information; and 2) individuals whose ability levels fall at a "gap" will be given items that poorly target their ability. A poorly defined construct has two specific ramifications: 1) for the development of computer-based tests, and 2) for the development of computer-adaptive tests.
Impact on Development of Computer-Based Tests
Construct deficiency degrades the results of a computer-based test because a poorly defined construct reduces the amount of information obtained about each individual. This is problematic on two levels: 1) items are not targeted at the person's ability level, and 2) the error estimate for the person's ability is higher, lowering precision and interpretability.
Impact on Development of Computer-Adaptive Tests
Construct deficiency impacts computer-adaptive tests in much the same way as it impacts computer-based tests. Maximum-information computer-adaptive tests select items whose difficulty matches the person's current ability estimate. If no item is located near that ability level, the test is forced to administer an item further away, increasing the error of the ability estimate. Because each item is selected on the basis of responses to the preceding items, the construct must be fully defined along the continuum before this type of test is attempted. An item bank limited by construct deficiency cannot measure individuals along the entire ability continuum with high precision (Halkitis, 1996).
Setting up a computer-adaptive test requires thresholds for item selection (i.e., a logit range) and for precision (i.e., stopping rules based on the individual's standard error). When the construct is poorly defined, the individual is forced to take more items in order to achieve a reliable estimate.
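A toy illustration of why this happens, not any operational CAT engine: under the Rasch model, an item of difficulty b gives a person at ability theta information p(1-p), where p = 1/(1+exp(-(theta-b))), and the standard error after a set of items is 1/sqrt(total information). The two hypothetical banks below show a well-targeted bank reaching a precision target while a bank with a gap at the person's ability never does:

```python
import math

def item_info(theta, b):
    """Rasch item information p*(1-p) for ability theta, difficulty b."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def items_to_reach_se(theta, bank, se_target):
    """Greedy maximum-information selection: count items administered
    before the standard error of the ability estimate drops below
    `se_target`. Returns None if the bank is exhausted first."""
    remaining = sorted(bank, key=lambda b: -item_info(theta, b))
    info = 0.0
    for n, b in enumerate(remaining, start=1):
        info += item_info(theta, b)
        if 1.0 / math.sqrt(info) < se_target:
            return n
    return None  # bank exhausted before reaching target precision

# Hypothetical banks; the person's ability is 0.0 logits.
well_targeted = [0.0, 0.1, -0.1, 0.2, -0.2, 0.3, -0.3, 0.4, -0.4, 0.5]
gapped = [2.0, 2.1, -2.0, -2.1, 2.2, -2.2, 2.3, -2.3, 2.4, -2.4]
print(items_to_reach_se(0.0, well_targeted, 0.7))  # 9
print(items_to_reach_se(0.0, gapped, 0.7))         # None
```

With a 1.3-logit gap around the person's ability, every administered item is off-target, each contributes little information, and the stopping rule is never satisfied.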
Construct Saturation: Over-representation of content area
Construct saturation is over-representation by similar items at a specific logit value: the point on the construct continuum where several items measure the same thing in almost the same way. The overall goal is for all of the items to measure the same construct, but each item should produce new information at its level of the continuum. A useful item is "as similar as possible, but as different as possible" (Linacre, 2000). An item bank may have many items at the same difficulty level; over-representation occurs when some of those items are so similar that they are no longer independent. The redundancy incurred by administering two almost identical items slightly distorts the person ability measures, but does not noticeably affect the overall measures.
Implications for Item Banks
The implications of construct saturation for an item bank are more positive than negative. Incorporating items that measure the same point on a construct extends the choices available to the test developer during item selection. Overly similar items, however, should be identified as alternatives when any particular test is constructed.
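One way such alternative sets might be flagged is sketched below: two items are treated as candidate alternatives when their calibrations are within a small logit tolerance and they share a content tag. The item names, content tags, and 0.15-logit tolerance are all hypothetical; in practice the grouping would be confirmed by content-expert review:

```python
# Sketch: group calibrated items into candidate alternative sets.

def alternative_sets(items, tol=0.15):
    """items: list of (name, difficulty_in_logits, content_tag).
    Returns lists of item names flagged as candidate alternatives."""
    groups = []
    for name, b, tag in sorted(items, key=lambda it: it[1]):
        for g in groups:
            _, g_b, g_tag = g[-1]
            # same content area and nearly the same difficulty
            if tag == g_tag and abs(b - g_b) <= tol:
                g.append((name, b, tag))
                break
        else:
            groups.append([(name, b, tag)])
    return [[n for n, _, _ in g] for g in groups if len(g) > 1]

bank = [
    ("walk_block", -0.50, "mobility"),
    ("walk_street", -0.42, "mobility"),  # near-duplicate of walk_block
    ("climb_stairs", 0.80, "mobility"),
    ("worry_often", -0.45, "anxiety"),   # close in logits, different content
]
print(alternative_sets(bank))  # [['walk_block', 'walk_street']]
```

Note that closeness in logits alone is not enough: "worry_often" sits between the two walking items but measures different content, so it is not flagged.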
Impact on Development of Computer-Based Tests
The impact of construct saturation on a computer-based test is negative if more than one alternative item is included. Respondents may become frustrated when presented with several items that ask essentially the same thing. Further, statistical estimation usually treats the items as independent, and it is difficult to adjust for non-independent items.
Impact on Development of Computer-Adaptive Tests
Construct saturation benefits the developer of a computer-adaptive test because alternative items with similar logit values can be presented to different individuals as they proceed through the test. This overcomes the problem of "tracking", which occurs when all persons of similar ability are administered essentially the same test. Redundant alternative items are therefore actually beneficial: they avoid both over-exposure of individual items and "tracking".
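A common way to exploit such alternatives is "randomesque" selection: instead of always administering the single most informative item, the test picks at random from the k best items at the current ability estimate. The sketch below (bank values and k=3 are illustrative) shows how different examinees of the same ability then receive different, equally well-targeted items:

```python
import math
import random

def item_info(theta, b):
    """Rasch item information p*(1-p) for ability theta, difficulty b."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def next_item(theta, available, k=3, rng=random):
    """Randomesque exposure control: choose at random among the k
    most informative remaining items at ability theta."""
    best = sorted(available, key=lambda b: -item_info(theta, b))[:k]
    return rng.choice(best)

bank = [-1.0, -0.3, -0.1, 0.0, 0.1, 0.4, 1.2]
# For a person at theta = 0.0, the selection rotates among the three
# items nearest that ability (difficulties -0.1, 0.0, 0.1):
chosen = sorted({next_item(0.0, bank) for _ in range(200)})
print(chosen)
```

Because the k near-optimal items carry almost identical information, this spreads item exposure at essentially no cost in measurement precision.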
Construct Coverage Protocol: Methods for Gap-Filling
When SMCDs or CMCDs are present, the following seven steps are recommended as a possible solution:
Step 1: Identification of any clinically or statistically meaningful gaps or redundancies in the continuum. This requires labeling the gaps as statistical, clinical, or both, and identifying sets of alternative items.
Step 2: Determination of the number of items needed to fill each gap (e.g., 5-10 items, depending on the gap size).
Step 3: Formulation of new items by a committee comprised of clinical and statistical experts.
Step 4: Review by oversight committee. Reasons for rejection of items recorded in hard copy.
Step 5: Testing of new and revised items with clinical collaborators and a selected group of patients.
Step 6: Patient testing utilizing computer-based-testing procedures that incorporate old and new items.
Step 7: Calibration of new items along the anchored continuum of the previous items.
Stacie Hudgens, Kelly Dineen, Kimberly Webster, Jin-Shei Lai, David Cella on behalf of the CORE Item Banking Team
Choppin, B. H. (1978) Item Banking and the Monitoring of Achievement. Research in Progress Series, 1. Slough: NFER.
Halkitis, P. N. (1996) CAT with a Limited Item Bank. RMT 9:4, p. 471.
Linacre, J. M. (2000) Redundant Items, Overfit and Measure Bias. RMT 14:3, p. 755.
Schulz, E. M. (1995) Construct deficiency? RMT 9:3, p. 447.
Wright, B. D., & Stone, M. H. (1979) Best Test Design. Chicago: MESA Press.
Assessing Statistically and Clinically Meaningful Construct Deficiency/Saturation: Recommended Criteria for Content Coverage and Item Writing, Stacie Hudgens, Kelly Dineen, Kimberly Webster, Jin-Shei Lai, David Cella on behalf of the CORE Item Banking Team, Rasch Measurement Transactions, 2004, 17:4 p.954-955