Two years of experience measuring the performance of NCAA college basketball and football teams suggest the following recipe:
1. Obtain a list of all teams of interest. Assign each one a unique team number.
2. Include two additional teams, a "best" team and a "worst" team. Assign each of these a team number.
3. Give every team two wins at a neutral site against the "worst" team, and two losses at a neutral site against the "best" team (see data format in 9 below).
4. Anchor the "best" team at 100 units, and the "worst" team at 0 units.
5. Choose a reasonable measurement scaling, e.g., 10 units per logit. This can be adjusted a few weeks into the season by performing a measurement analysis without the "best" and "worst" teams. Choose a logit-to-unit conversion that gives the listed teams a performance range of a little less than 100 units.
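The rescaling in step 5 is linear. A minimal sketch (the function name, the 50-unit origin, and the default of 10 units per logit are illustrative assumptions, not prescribed by the original):

```python
def logits_to_units(measure_logits, units_per_logit=10.0, origin_units=50.0):
    """Rescale a Rasch measure from logits to user-chosen units.

    With 10 units per logit and the origin placed at 50 units (assumed
    values), measures spanning -5 to +5 logits map onto 0 to 100 units,
    matching the 0-to-100 anchoring of the "worst" and "best" teams.
    """
    return origin_units + units_per_logit * measure_logits
```

Adjusting `units_per_logit` after a few weeks of play is how the unit scale is tuned so the real teams span a little less than 100 units.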
6. Choose a reasonable pre-season ranking, e.g., last season's final ranking or expert opinion. Assign the pre-season ranking a reasonable weighting that diminishes as the season progresses and disappears by mid-season. For the NCAA, teams are ranked 1 to 300, with an initial weighting such that 20 ranks = 1 win. Each ranking is entered into the data with a team number, a dummy "venue", and the team's rank-order last season.
7. When the season commences, obtain results daily (for NCAA basketball) or weekly (for NCAA football). These results must show, at least, "win, lose or draw" and "home, away or neutral" venue. For football, all games were "home or away" except most Bowl games. For basketball, it was often not easy to immediately identify which games were played at neutral venues. If in doubt, assume the game was played at the home of the winner until more information becomes available.
8. Ignore games between listed and unlisted teams. These are often exhibitions and invitationals with idiosyncratic results.
9. Each game is recorded conveniently, e.g., for Facets:
the home (or a neutral) team's number,
the other team's number,
the venue for the first team: home or neutral,
the result for the first team:
when draws are possible: win (2), draw (1), loss (0);
otherwise: win (1), loss (0).
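The fields listed above can be combined into one data line per game. A small formatter as a sketch — the comma-separated layout and the numeric venue coding (1 = home, 0 = neutral) are illustrative assumptions, not the only encoding Facets accepts:

```python
def game_record(first_team, other_team, venue, result):
    """Encode one game as a comma-separated data line.

    first_team / other_team : team numbers (the first team is the home
                              team, or either team at a neutral venue)
    venue  : 1 = home for the first team, 0 = neutral (assumed coding)
    result : for the first team; win = 1, loss = 0
             (or 2 / 1 / 0 when draws are possible)
    """
    return f"{first_team},{other_team},{venue},{result}"
```

For example, `game_record(12, 45, 1, 1)` records team 12 beating team 45 at home as `"12,45,1,1"`.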
10. When a team wins for the first time, drop one of its two wins against the "worst" team. (The other win is permanent.) When it loses for the first time, drop one of its losses against the "best" team. (The other loss is permanent.)
11. Set up the Rasch measurement computer program analysis.
e.g., for Facets, the measurement model would be:
Facets = 3 ; 3 Facets in the data
Entered=1,1,2 ; Facet 1, the teams, appears twice. Facet 2, the home team advantage, once.
Positive=1,2 ; add home advantage to home team
Model=?,-?,?,R ; team 1 plays against team 2 with measure adjusted by location.
12. Produce the measures.
The team measures are their "away/neutral" measures. An additional "home team advantage" is also computed.
13. Predict future results.
If home team measure + home team advantage >= away team measure, predict a home team win, otherwise a home team loss. This can be expanded to predict point-spread. Plot the measure differential against observed score differences to obtain the measure-difference to point-spread conversion.
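The decision rule and the point-spread conversion in step 13 can be sketched as follows. The function names are illustrative, and the original does not specify a fitting method for the measure-to-spread conversion; a least-squares slope through the origin is one simple choice:

```python
def predict_home_win(home_measure, away_measure, home_advantage):
    """Step 13's rule: predict a home win when the home measure plus
    the home-team advantage at least matches the away measure."""
    return home_measure + home_advantage >= away_measure

def spread_per_unit(measure_diffs, score_diffs):
    """Fit a least-squares slope through the origin relating measure
    differential to observed score differential (points per unit).
    One simple choice; the original leaves the method unspecified."""
    num = sum(m * s for m, s in zip(measure_diffs, score_diffs))
    den = sum(m * m for m in measure_diffs)
    return num / den
```

Multiplying a predicted measure differential by the fitted slope then gives a predicted point-spread.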
In practice, this approach correctly predicted 90% of all won-loss results, better than the 80% obtained by always predicting a home-team win, but not quite as good as the professional prognosticators' success rate of 92%.
John Michael Linacre
This was done with a sequence of computer programs.
Program 1: each day the new results were downloaded from a sports website, reformatted into Facets data format, and written to a new file, "more.txt". There were always typographical errors and other mistakes at the sports website, so "more.txt" was checked and edited by hand.
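Program 1's reformatting step might look like the following sketch. The raw line format ("Away 71 at Home 68") and the team-number lookup table are hypothetical, since the actual website format is not given in the original:

```python
# Hypothetical lookup from team names to assigned team numbers (step 1)
TEAM_NUMBERS = {"Duke": 12, "Kansas": 45}

def reformat_result(raw_line):
    """Turn a hypothetical raw line 'Away 71 at Home 68' into a
    comma-separated record: home team number, away team number,
    venue (1 = home), result for the home team (1 = win, 0 = loss)."""
    away, away_pts, _at, home, home_pts = raw_line.split()
    record = (TEAM_NUMBERS[home], TEAM_NUMBERS[away], 1,
              1 if int(home_pts) > int(away_pts) else 0)
    return ",".join(str(x) for x in record)
```

Each reformatted line would then be written to "more.txt" for hand-checking before being merged.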
Program 2: when "more.txt" was correct, it was appended to the cumulative flat file of results in Facets data format, "results.txt".
Program 3: scans "results.txt", counting how many times each team has won and lost. It creates another file in Facets data format, "dummy.txt", containing a dummy winning and a dummy losing data record for each team, plus a second dummy win record for each team that has not yet won and a second dummy loss record for each team that has not yet lost.
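Program 3's logic (combining steps 3 and 10) can be sketched as below. The comma-separated record layout (team, opponent, venue code, result) and the neutral-venue code 0 are assumed encodings; draws are ignored for simplicity:

```python
def dummy_records(games, team_ids, best_id, worst_id):
    """Build the dummy.txt records: every team keeps one permanent
    neutral-site win over the 'worst' team and one permanent neutral-site
    loss to the 'best' team; teams with no real win (or loss) yet keep a
    second dummy win (or loss).

    games: iterable of (team1, team2, venue, result_for_team1) tuples,
           with result 1 = win, 0 = loss (draws not handled here).
    """
    winners, losers = set(), set()
    for t1, t2, _venue, result in games:
        winners.add(t1 if result == 1 else t2)
        losers.add(t2 if result == 1 else t1)
    records = []
    for team in team_ids:
        wins = 1 + (team not in winners)   # second dummy win if winless
        losses = 1 + (team not in losers)  # second dummy loss if unbeaten
        records += [f"{team},{worst_id},0,1"] * wins
        records += [f"{team},{best_id},0,0"] * losses
    return records
```

Rebuilding "dummy.txt" from scratch at every run keeps the dummy records consistent with the cumulative results file, rather than editing them incrementally.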
Program 4: Facets executes using a standard Facets specification file containing all the teams and the two dummy teams. It includes this line: data = results.txt + dummy.txt
Program 5: reformat the Facets output tables into webpages for display.
Paired Comparisons for Measuring Team Performance. Linacre J.M. Rasch Measurement Transactions, 2001, 15:1 p.812
The URL of this page is www.rasch.org/rmt/rmt151w.htm