Two years of experience measuring the performance of NCAA college basketball and football teams suggests the following recipe:
1. Obtain a list of all teams of interest. Assign each one a unique team number.
2. Include two additional teams, a "best" team and a "worst" team. Assign each of these a team number.
3. Give every team two wins at a neutral site against the "worst" team, and two losses at a neutral site against the "best" team (see the data format in 9 below).
4. Anchor the "best" team at 100 units, and the "worst" team at 0 units.
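For concreteness, here is what step 3's anchoring games might look like as data records, using the record layout of step 9 below. The team numbers (23 for a listed team; 998 and 999 for the "best" and "worst" teams) and the neutral-venue element number 2 are illustrative assumptions:
23,999,2,1 ; team 23 beats "worst" at a neutral site (permanent)
23,999,2,1 ; second win over "worst" (dropped after the first real win, see 10 below)
998,23,2,1 ; "best" beats team 23 at a neutral site (permanent)
998,23,2,1 ; second loss to "best" (dropped after the first real loss, see 10 below)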
5. Choose a reasonable measurement scaling, e.g., 10 units per logit. This can be adjusted a few weeks into the season by performing a measurement analysis without the "best" and "worst" teams. Choose a logit-to-unit conversion that gives the listed teams a performance range of a little less than 100 units.
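As an illustration of the mid-season rescaling, a minimal Python sketch; the 90-unit target (for "a little less than 100 units") and the function name are assumptions:
# Choose a units-per-logit conversion so that the listed teams
# (excluding "best" and "worst") span a little less than 100 units.
def units_per_logit(team_measures_in_logits, target_range_units=90.0):
    spread = max(team_measures_in_logits) - min(team_measures_in_logits)
    return target_range_units / spread

# e.g., teams spanning 9 logits -> 10 units per logit
print(units_per_logit([-4.5, -1.0, 2.0, 4.5]))  # 10.0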
6. Choose a reasonable pre-season ranking, e.g., last season's final ranking or expert opinion. Assign a reasonable weighting to the pre-season ranking, which will diminish as the season progresses, to disappear at mid-season. For the NCAA, the ranking of teams is 1 to 300, with an initial weighting such that 20 ranks = 1 win. This is entered into the data with a team number, a dummy "venue", and the rank-order of the team last season.
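One way to implement the diminishing weight is sketched below in Python. The linear fade and the names are assumptions; the text specifies only the initial weighting (20 ranks = 1 win) and that the weight disappears at mid-season:
# Weight applied to each pre-season ranking record, fading
# linearly from full weight at the start of the season to zero
# at mid-season (the linear fade is an assumption).
def preseason_weight(games_played, games_at_midseason):
    return max(0.0, 1.0 - games_played / games_at_midseason)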
7. When the season commences, obtain results daily (for NCAA basketball) or weekly (for NCAA football). These results must show, at least, "win, lose or draw" and "home, away or neutral" venue. For football, all games were "home or away" except most Bowl games. For basketball, it was often not easy to immediately identify which games were played at neutral venues. If in doubt, assume the game was played at the home of the winner until more information becomes available.
8. Ignore games between listed and unlisted teams. These are often exhibitions and invitationals with idiosyncratic results.
9. Each game is recorded conveniently, e.g., for Facets (see the sketch after this list):
the home (or a neutral) team number,
the other team,
the venue for the first team: home or neutral,
the result for the first team:
when draws are possible: win (2), draw (1), loss (0),
otherwise: win(1), loss(0).
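A minimal Python sketch of formatting one game this way; the venue element numbers (1 = home, 2 = neutral) are assumed:
# One game -> "first team, other team, venue, result" in Facets data format.
def game_line(first_team, other_team, neutral, first_team_result,
              draws_possible=False):
    venue = 2 if neutral else 1  # assumed element numbering
    if draws_possible:
        result = {"win": 2, "draw": 1, "loss": 0}[first_team_result]
    else:
        result = {"win": 1, "loss": 0}[first_team_result]
    return f"{first_team},{other_team},{venue},{result}"

print(game_line(23, 57, neutral=False, first_team_result="win"))  # 23,57,1,1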
10. When a team wins for the first time, drop one of its two wins against the "worst" team. (The other win is permanent.) When it loses for the first time, drop one of its losses against the "best" team. (The other loss is permanent.)
11. Set up the analysis in a Rasch measurement computer program.
e.g., for Facets, the measurement model would be:
Facets = 3 ; 3 Facets in the data
Entered=1,1,2 ; Facet 1, the teams, appears twice. Facet 2, the home team advantage, once.
Positive=1,2 ; add home advantage to home team
Model=?,-?,?,R ; team 1 plays against team 2 with measure adjusted by location.
12. Produce the measures.
The team measures are their "away/neutral" measures. An additional "home team advantage" is also computed.
13. Predict future results.
If home team measure + home team advantage >= away team measure, predict a home team win; otherwise a home team loss. This can be expanded to predict point-spread: plot the measure differential against observed score differences to obtain the measure-difference to point-spread conversion.
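A short Python sketch of both the won-loss prediction rule and the point-spread conversion; fitting a straight line through the plotted points is an assumption (the text says only to plot the differentials against observed score differences):
import numpy as np

# Predict a home win when the home measure plus the home-team
# advantage is at least the away measure (step 13).
def predict_home_win(home_measure, away_measure, home_advantage):
    return home_measure + home_advantage >= away_measure

# Convert measure difference to predicted point spread with a
# least-squares straight line (the linear form is an assumption).
def spread_conversion(measure_diffs, observed_score_diffs):
    slope, intercept = np.polyfit(measure_diffs, observed_score_diffs, 1)
    return slope, intercept  # predicted spread = slope * diff + intercept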
In practice, this approach predicted 90% of all won-loss results correctly: better than the home-team-win rate of 80%, but not quite as good as the professional prognosticators' success rate of 92%.
John Michael Linacre
This was done with a sequence of computer programs.
Program 1: each day the new results were downloaded from a sports website, reformatted into Facets data format, and written to a new file, "more.txt". There were always typographical errors and other mistakes at the sports website, so "more.txt" was checked and edited by hand.
Program 2: when "more.txt" is correct, appends it to the cumulative flat file of results in Facets data format, "results.txt".
Program 3: scans "results.txt", counting how many times each team has won and lost. It creates another file in Facets data format, "dummy.txt", containing a permanent dummy win and a permanent dummy loss data record for each team, plus another dummy win data record for each team that has not yet won, and another dummy loss data record for each team that has not yet lost.
Program 4: Facets executes using a standard Facets specification file containing all the teams and the two dummy teams. It includes this line: data = results.txt + dummy.txt
Program 5: reformats the Facets output tables into webpages for display.
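As an illustration of Program 3, a minimal Python sketch. The dummy team numbers (998 = "best", 999 = "worst"), the neutral-venue element 2, and the win/loss coding (1/0, no draws) are assumptions; in practice the team list would come from the specification file rather than only from "results.txt":
# Program 3 sketch: count wins and losses in results.txt, then write
# dummy.txt with the anchoring records described in steps 3 and 10.
BEST, WORST, NEUTRAL = 998, 999, 2  # assumed element numbers

wins, losses = {}, {}
with open("results.txt") as f:
    for line in f:
        first, other, venue, result = (int(x) for x in line.split(","))
        winner, loser = (first, other) if result == 1 else (other, first)
        wins[winner] = wins.get(winner, 0) + 1
        losses[loser] = losses.get(loser, 0) + 1

with open("dummy.txt", "w") as f:
    for team in sorted(set(wins) | set(losses)):
        f.write(f"{team},{WORST},{NEUTRAL},1\n")  # permanent win
        f.write(f"{BEST},{team},{NEUTRAL},1\n")   # permanent loss
        if wins.get(team, 0) == 0:    # no real win yet: droppable record
            f.write(f"{team},{WORST},{NEUTRAL},1\n")
        if losses.get(team, 0) == 0:  # no real loss yet: droppable record
            f.write(f"{BEST},{team},{NEUTRAL},1\n")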
Paired Comparisons for Measuring Team Performance. Linacre J.M. Rasch Measurement Transactions, 2001, 15:1 p.812