Paired Comparisons for Measuring Team Performance

Two years of experience measuring the performance of NCAA college basketball and football teams suggest the following recipe:

1. Obtain a list of all teams of interest. Assign each one a unique team number.

2. Include two additional teams, a "best" team and a "worst" team. Assign each of these a team number.

3. Give every team two wins at a neutral site against the "worst" team, and two losses at a neutral site against the "best" team (see data format in 7 below).

4. Anchor the "best" team at 100 units, and the "worst" team at 0 units.

5. Choose a reasonable measurement scaling, e.g., 10 units per logit. This can be adjusted a few weeks into the season by performing a measurement analysis without the "best" and "worst" teams. Choose a logit-to-unit conversion that gives the listed teams a performance range of a little less than 100 units.
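The anchoring in steps 4-5 can be sketched as a simple rescaling. This is an illustrative sketch, not Facets itself: the function name is hypothetical, and it assumes the "worst" team's logit measure is the zero point, so that at 10 units per logit a 10-logit "worst"-to-"best" span maps onto the 0-100 unit range.

```python
# Hypothetical sketch of the logit-to-unit conversion in steps 4-5.
# Assumes the "worst" team anchors at 0 units and 10 units per logit,
# so a 10-logit spread between "worst" and "best" spans 0-100 units.
UNITS_PER_LOGIT = 10.0
WORST_ANCHOR_UNITS = 0.0

def logits_to_units(logits, worst_logits=0.0):
    """Rescale a logit measure so the 'worst' team sits at 0 units."""
    return WORST_ANCHOR_UNITS + UNITS_PER_LOGIT * (logits - worst_logits)

print(logits_to_units(10.0))  # "best" team anchor -> 100.0
```

Adjusting the scaling mid-season, as step 5 suggests, amounts to changing `UNITS_PER_LOGIT` so the listed teams span a little less than 100 units.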

6. Choose a reasonable pre-season ranking, e.g., last season's final ranking or expert opinion. Assign a reasonable weighting to the pre-season ranking, which will diminish as the season progresses, to disappear at mid-season. For the NCAA, the ranking of teams is 1 to 300, with an initial weighting such that 20 ranks = 1 win. This is entered into the data with a team number, a dummy "venue", and the rank-order of the team last season.

7. When the season commences, obtain results daily (for NCAA basketball) or weekly (for NCAA football). These results must show, at least, "win, lose or draw" and "home, away or neutral" venue. For football, all games were "home or away" except most Bowl games. For basketball, it was often not easy to immediately identify which games were played at neutral venues. If in doubt, assume the game was played at the home of the winner until more information becomes available.

8. Ignore games between listed and unlisted teams. These are often exhibitions and invitationals with idiosyncratic results.

9. Each game is recorded conveniently, e.g., for Facets:
the home (or a neutral) team number,
the other team,
the venue for the first team: home, neutral,
the result for the first team:
when draws are possible: win (2), draw (1), loss (0),
otherwise: win(1), loss(0).
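The record layout above can be sketched as a small formatting helper. This is a hypothetical illustration, not the Facets file format specification: the function name and the venue codes (1 = home, 0 = neutral) are assumptions chosen for the example.

```python
# Hypothetical sketch of formatting one game as a data record,
# following the layout in step 9: first (home or neutral) team,
# other team, venue for the first team, result for the first team.
# Venue codes 1 = home, 0 = neutral are an assumption for this sketch.
def game_record(first_team, other_team, venue, result, draws_possible=False):
    """Return a comma-separated data line for one game."""
    venue_code = {"home": 1, "neutral": 0}[venue]
    if draws_possible:
        result_code = {"win": 2, "draw": 1, "loss": 0}[result]
    else:
        result_code = {"win": 1, "loss": 0}[result]
    return f"{first_team},{other_team},{venue_code},{result_code}"

print(game_record(17, 42, "home", "win"))  # -> "17,42,1,1"
```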

10. When a team wins for the first time, drop one of the two wins against the "worst" team. (The other win is permanent.) When it loses for the first time drop one of the losses against the "best" team. (The other loss is permanent.)

11. Set up the analysis in a Rasch measurement computer program.
e.g., for Facets, the measurement model would be:
Facets = 3 ; 3 Facets in the data
Entered=1,1,2 ; Facet 1, the teams, appears twice. Facet 2, the home team advantage, once.
Positive=1,2 ; add home advantage to home team
Model=?,-?,?,R ; team 1 plays against team 2 with measure adjusted by location.

12. Produce the measures.
The team measures are their "away/neutral" measures. An additional "home team advantage" is also computed.

13. Predict future results.
If home team measure + home team advantage >= away team measure, predict a home team win, otherwise a home team loss. This can be expanded to predict point-spread. Plot the measure differential against observed score differences to obtain the measure-difference to point-spread conversion.
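The prediction rule, and the fit of measure differentials to score differences, can be sketched as follows. This is a sketch under stated assumptions: the function names are illustrative, the games and margins shown are invented for the example, and the fit is an ordinary least-squares slope through the origin rather than whatever plotting procedure was actually used.

```python
# Sketch of the step-13 prediction rule, plus a least-squares slope
# (through the origin) fitting observed score differences against
# measure differentials, as a measure-to-point-spread conversion.
def predict_home_win(home_measure, away_measure, home_advantage):
    """Predict a home win when the adjusted home measure is at least the away measure."""
    return home_measure + home_advantage >= away_measure

def spread_slope(measure_diffs, score_diffs):
    """Points of spread per unit of measure difference (least squares, no intercept)."""
    num = sum(m * s for m, s in zip(measure_diffs, score_diffs))
    den = sum(m * m for m in measure_diffs)
    return num / den

# Invented example data: measure differentials and final score margins.
diffs = [5.0, -3.0, 10.0, 2.0]
margins = [7, -4, 14, 3]
slope = spread_slope(diffs, margins)  # points per measure unit
print(predict_home_win(55.0, 48.0, 4.0))  # True: 55 + 4 >= 48
```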

In practice, this approach predicted 90% of all won-loss results correctly, which is a better prediction than the home-team-win rate of 80%, but not quite as good as the professional prognosticators' success rate of 92%.

John Michael Linacre

This was done with a sequence of computer programs.

Program 1: each day the new results were downloaded from a sports website, reformatted into Facets data format, and written into a new file, "more.txt". There were always typographical errors and other mistakes at the sports website, so "more.txt" was checked and edited by hand.

Program 2: when "more.txt" is correct, add it to the cumulative flat file of results in Facets data format: "results.txt"

Program 3: scans "results.txt", counting how many times each team has won and lost. It creates another file in Facets data format, "dummy.txt", containing a dummy winning and losing data record for each team, plus another dummy win data record for each team that has not yet won, and another dummy losing data record for each team that has not yet lost.
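The logic of Program 3 can be sketched as below. This is a hypothetical reconstruction, not the original program: the element numbers for the two anchor teams, the record layout (first team, other team, venue, result, as in step 9, with no draws), and the neutral-venue code 0 are all assumptions for the example.

```python
# Hypothetical sketch of Program 3: count wins and losses per team from
# the cumulative results, then emit the dummy anchor records of steps
# 3 and 10. Assumes records "first,other,venue,result" with result
# 1 = win for the first team, 0 = loss (no draws), venue 0 = neutral,
# and BEST/WORST as the element numbers of the two anchor teams.
BEST, WORST = 998, 999

def dummy_records(result_lines, teams):
    """Return dummy data lines: one permanent win/loss per team, plus an
    extra win (loss) for teams with no real win (loss) yet."""
    wins = {t: 0 for t in teams}
    losses = {t: 0 for t in teams}
    for line in result_lines:
        first, other, _venue, result = (int(x) for x in line.split(","))
        winner, loser = (first, other) if result == 1 else (other, first)
        if winner in wins:
            wins[winner] += 1
        if loser in losses:
            losses[loser] += 1
    out = []
    for t in teams:
        out.append(f"{t},{WORST},0,1")      # permanent win over "worst"
        out.append(f"{BEST},{t},0,1")       # permanent loss to "best"
        if wins[t] == 0:
            out.append(f"{t},{WORST},0,1")  # extra win until first real win
        if losses[t] == 0:
            out.append(f"{BEST},{t},0,1")   # extra loss until first real loss
    return out
```

Because the dummy file is regenerated on every run, the extra win and loss records of step 10 simply stop being emitted once a team has a real win or loss.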

Program 4: Facets executes using a standard Facets specification file containing all the teams and the two dummy teams. It includes this line: data = results.txt + dummy.txt

Program 5: reformat the Facets output tables into webpages for display.

Paired Comparisons for Measuring Team Performance. Linacre J.M. … Rasch Measurement Transactions, 2001, 15:1 p.812


The URL of this page is www.rasch.org/rmt/rmt151w.htm
