Nov 15, 2013
 

The Battle of the Bots results have been updated through Week 10. Apparently we all had a lousy Week 10.

I reproduce the cumulative rankings below.

Entrant | Total Cost
WinThatPool – Human | 30.699
pudds – Bot (alg6) | 31.768
Rufus_Reddit – Bot (Simple Modified Pythagorean) | 32.011
pudds – Bot (alg7) | 32.365
Philosotoaster – Human | 32.501
Rufus_Reddit – Bot (Fancy Modified Pythagorean) | 32.545
steve496 – Bot | 32.564
Rufus_Reddit – Bot | 32.924
Wallamaru – Human | 32.946
Philosotoaster – Bot (Bot1) | 32.982
Philosotoaster – Bot (Bot2) | 33.138
k_Bomb – Human | 34.947
jocloud31 – Human | 35.094
BuckNewman – Human | 35.602
redditcdnfanguy – Bot | 36.399

See the results firsthand on reddit.

See the results through Week 9 here.

As a reminder, this contest measures every prediction’s cost using a logarithmic scoring rule (log loss). The cost of a correct prediction is -log(WinProbability), and the cost of an incorrect prediction is -log(1-WinProbability). Battle penalizes you fairly and proportionately for both overconfidence and timidity. It gives a clearer depiction of the accuracy of anybody’s WinProbabilities than a Confidence Pool, which forces you to score WinProbabilities ordinally. Your ranking in a Confidence Pool depends on all your WinProbabilities for that week’s slate of games, whereas in Battle your cost of being right or wrong on any individual game does not depend on the other games’ probabilities.
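
Concretely, here is a minimal Python sketch of that scoring rule. The function name is mine, and I have assumed the natural logarithm, which the post doesn’t specify; the choice of base scales every cost equally, so it doesn’t change the rankings.

```python
import math

def prediction_cost(win_probability, correct):
    """Cost of a single pick under Battle's log-loss scoring.

    win_probability is the probability you assigned to the team
    you picked (between 0.5 and 1 for a sensible pick). Confident
    correct picks cost little; confident wrong picks cost a lot.
    """
    if correct:
        return -math.log(win_probability)
    return -math.log(1 - win_probability)

# A cumulative total is just the sum of per-game costs -- no game's
# cost depends on any other game's probability:
# total = sum(prediction_cost(p, won) for p, won in picks)
```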

Consider the Week 11 predictions. In a Confidence Pool, an upset in the Minnesota@Seattle game hurts 15 times as much as an upset in the San Diego@Miami game. In Battle of the Bots, it hurts only about 2.4 times as much, reflecting the relative (predicted) chance of being wrong.
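
To see where a ratio like 2.4 can come from, here is a quick check with hypothetical probabilities (the post doesn’t reproduce the actual Week 11 numbers): a heavy favorite at 0.92 and a near toss-up favorite at 0.65 give upset costs whose ratio is about 2.4.

```python
import math

# Hypothetical favorite win probabilities -- illustrative only,
# not the contest's actual Week 11 numbers:
p_sea = 0.92  # Seattle a heavy favorite over Minnesota
p_mia = 0.65  # a modest favorite in San Diego@Miami

# Cost of each upset under the log-loss rule:
upset_sea = -math.log(1 - p_sea)  # ~2.53
upset_mia = -math.log(1 - p_mia)  # ~1.05

print(round(upset_sea / upset_mia, 1))  # ~2.4
```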

 
