Automotive "aftermarket" paint is used to repaint damaged cars, and customers must be assured that the paint matches their car. Spectroscopic analysis and expert judgment are both used to obtain good matches, but judges are expensive. The Sherwin-Williams Co. therefore investigated how successfully spectroscopic analysis predicts customer perception of color matching: for each test color, the Delta E spectral index was compared with the Rasch color-match difference measure.
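The Delta E index is a single-number summary of the spectral difference between two colors. The study does not state which Delta E variant was used; the simplest, CIE76, is the Euclidean distance between the colors in CIELAB space, sketched below with hypothetical color coordinates.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 Delta E: Euclidean distance between two colors in
    CIELAB (L*, a*, b*) space. The study's exact Delta E variant
    is not stated; this is the simplest form."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical standard and test colors in CIELAB coordinates.
standard = (52.0, 10.0, -4.0)
test = (52.3, 10.4, -3.9)
print(round(delta_e_cie76(standard, test), 3))  # 0.51 - a borderline match
```

Identical colors give Delta E = 0; larger values indicate a grosser spectral mismatch.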
Fifteen judges rated the differences between 67 test colors and the corresponding standard colors in 5 color groups: white, beige, blue, red, and green. Spectral data for each standard-to-test color-match difference were summarized in a Delta E index. The judges rated the goodness of each color match on a 4-point scale: 1. Obvious difference. 2. Noticeable difference. 3. Disguisable difference. 4. No noticeable difference. From these ratings were constructed measures of standard-to-test color-match difference and measures of judge color-match perception sensitivity.
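Measures of this kind are typically constructed with the rating-scale (Andrich) model, under which the probability of each rating category depends on the judge's sensitivity, the color pair's match difficulty, and a set of category thresholds, all in logits. A minimal sketch, with hypothetical parameter values:

```python
import math

def category_probabilities(theta, delta, taus):
    """Andrich rating-scale model: probability of each category
    k = 0..m for a judge of sensitivity theta rating a color pair
    of match difficulty delta, given m category thresholds taus.
    All parameters are in logits."""
    # Cumulative sums of (theta - delta - tau_j) are the log-numerators.
    log_numerators = [0.0]
    total = 0.0
    for tau in taus:
        total += theta - delta - tau
        log_numerators.append(total)
    exps = [math.exp(x) for x in log_numerators]
    denom = sum(exps)
    return [e / denom for e in exps]

# Hypothetical values: a sensitive judge (theta = 2.0 logits) rating an
# easy match (delta = 0.5) on a 4-category scale (3 thresholds).
probs = category_probabilities(2.0, 0.5, [-1.5, 0.0, 1.5])
print([round(p, 3) for p in probs])  # four probabilities summing to 1
```

The actual threshold and measure values from the study are not reproduced here; the sketch only illustrates how a 4-point rating relates to the logit measures discussed below.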
Difficulty of matching color-groups
There were consistent differences across color groups in the difficulty of matching the colors. White, beige, and red were easiest to match. Blue was intermediate. Green was hardest to match and hardest for judges to agree on. Though the finding for green is based only on two green shades and may be an artifact of the chosen standard and test colors, it challenges the common conviction that blues are hardest to match.
Color-group effects on judgments
Each judge was found to have a substantively different overall color-match sensitivity. This sensitivity must be calibrated and adjusted for in order to place all color-match differences on a common scale. But any considerable idiosyncratic change in a judge's color-match sensitivity across color groups would invalidate such an overall adjustment and limit the generality of any resulting color-match criteria. Consequently, each judge's overall sensitivity calibration was compared to that judge's sensitivity for each color group. Few judge color-group sensitivity measures were significantly different from the judge's overall measure, indicating that color-match perception within judges is usefully stable across color groups. In practice, judge performance must be continuously monitored to ensure that consistency is maintained.
Improving spectral criteria
Comparisons of the Delta E and judged color-match difference measures are in the Figures for four of the five color groups. The fifth group, green, is represented by only three data points, too few to make a picture. In each plot, the horizontal axis is the Delta E index of the similarity of the test color and the standard color; values below 0.5 are generally thought to indicate acceptable matches. The vertical axis is the judged measure of the same difference.
Judge perception is intended to resemble customer perception, so the vertical placement of points is decisive. Above 1.5 logits, there is no perceptible difference between the colors. Since customer complaints are more expensive than blending a better matching color, optimal Delta E values would eliminate judged differences below the 1.5 logit line.
The plots show that the conventional 0.5 Delta E criterion for an acceptable color match is inadequate. Though the perception measures and the Delta E index are correlated in all plots, the 0.5 Delta E criterion rejects effectively perfect perceptual matches in all four color groups, while perceptually imperfect matches are accepted in two groups. For red, 0.5 Delta E rejected all matches except one, and so had little opportunity to yield false matches. Delta E is most satisfactory for white and not grossly misleading for beige. Its performance accords with the common belief that a blue match is hardest to detect.
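The acceptance rule described above can be sketched as a threshold search: find the largest Delta E cutoff that accepts no color pair whose judged measure falls below the 1.5 logit line. The data values here are hypothetical, not taken from the study.

```python
def optimal_delta_e_cutoff(pairs, logit_line=1.5):
    """Largest Delta E cutoff that accepts no color pair judged
    perceptibly different (judged measure below logit_line).
    pairs: list of (delta_e, judged_logit) tuples; a pair is
    accepted when its Delta E falls below the cutoff."""
    # Delta E values of pairs with a perceptible judged difference.
    bad = [de for de, logit in pairs if logit < logit_line]
    if not bad:
        return max(de for de, _ in pairs)  # every pair is acceptable
    # The cutoff must exclude the smallest "bad" Delta E.
    return min(bad)

# Hypothetical (Delta E, judged logit) pairs for one color group.
data = [(0.2, 3.1), (0.4, 2.2), (0.6, 1.9), (0.8, 1.1), (1.1, -0.4)]
print(optimal_delta_e_cutoff(data))  # 0.8
```

A cutoff derived this way would differ by color group, which is the point of the comparison: a single 0.5 criterion cannot serve all five groups.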
It was concluded that 1) an improved spectroscopic technique is needed before expert judgment can be replaced, and 2) since expert judgment remains crucial, adjustment must be made for judge color-match sensitivity, and its variation monitored.
Thomas K. Rehfeldt
Measuring color-match perception. Rehfeldt TK. Rasch Measurement Transactions 1993 7:3 p.304
The URL of this page is www.rasch.org/rmt/rmt73b.htm