On a Streak: Part 2 – Exploiting Bias
Link to: Part 1
In Part 1 we looked at a widespread cognitive bias that causes humans to overestimate teams on a winning streak, and underestimate teams on a losing streak. In reality, streaks in AFL and other sports are best explained as statistical coincidences arising from random sequences of independent events. Psychologists and sports scientists have known about this for decades (ever since Gilovich et al., 1985), but mainstream media still spreads misconceptions about streaks.

Hang on, people bet on sport…
Betting companies (bookmakers) adjust the game betting odds based on demand for bets on each team, to manage their liabilities such that they pay out the same total amount regardless of which team wins. Thus, it is ultimately the gamblers who determine the odds on the betting market, and the betting companies make a profit regardless of which team wins. Although the mechanics of betting markets are different from traditional stock markets, there are some similarities: it is the investors (gamblers), not the stock exchange operators (betting companies), that ultimately determine the price (odds) of stocks (teams).
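To make the mechanics concrete, here is a minimal sketch of a balanced book. The odds and stakes are made up for illustration, not real market data:

```python
# Illustrative only: odds and stakes are made up, not real market data.
# Decimal odds of 1.90 each way on an evenly matched game.
odds_a, odds_b = 1.90, 1.90

# Implied probabilities are reciprocals of decimal odds; their sum
# exceeds 1 by the bookmaker's margin (the "overround").
p_a, p_b = 1 / odds_a, 1 / odds_b
overround = p_a + p_b - 1  # about 5.3% here

# If total stakes land in proportion to the implied probabilities,
# the payout is identical whichever team wins.
total_staked = 10_000
stake_a = total_staked * p_a / (p_a + p_b)
stake_b = total_staked - stake_a

payout_if_a = stake_a * odds_a  # paid out if team A wins
payout_if_b = stake_b * odds_b  # paid out if team B wins

profit = total_staked - payout_if_a  # the bookmaker keeps the margin
```

With equal odds the stakes split evenly; when demand is lopsided, the bookmaker shifts the odds until the book balances again.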
Regardless of your views on gambling, I think the natural question at this point is: do people’s biases about streaks affect the betting odds? And more specifically, can this bias be exploited for profit? If the theory is correct, then the strategy should be simple: bet on the team with the losing streak.
I’m not the first person to ask this question. Just four years after Gilovich et al.’s original 1985 paper presenting this bias (originally for basketball, but the effect generalises to other sports), behavioural economist Colin Camerer (1989) studied whether the bias affected the basketball betting market. Camerer did find a bias in the betting market odds: the odds tend to underestimate the probability of a team winning after a losing streak. However, this bias was too small to be profitably exploited once bookmaker profit margins were accounted for.
Brailsford et al. (1995) examined spread betting in the Australian Rugby League (ARL). In spread betting, rather than betting on which team wins, gamblers bet on whether the points difference will be more or less than the predicted “spread”. The spread is set so that each team has a 50% chance of doing better than the spread. This is convenient for analysis, because it makes the criterion for identifying a bias clear: find a strategy that predicts which side of the spread the points difference lands on with better than 50% accuracy.
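This 50% baseline makes evaluation simple: count how often a strategy picks the correct side of the spread and compare against a coin flip, e.g. with an exact one-sided binomial test. A small sketch (the win counts below are hypothetical):

```python
from math import comb

def binomial_p_value(wins, n, p=0.5):
    """One-sided probability of getting at least `wins` correct picks
    out of `n` by pure chance (exact binomial tail)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(wins, n + 1))

# Hypothetical record: a strategy picked the right side of the spread
# in 120 of 200 games. How surprising is that under a fair coin?
p = binomial_p_value(120, 200)
print(f"P(>=120 of 200 by chance) = {p:.4f}")
```

A small p-value suggests the strategy genuinely beats the 50% baseline rather than just getting lucky.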
Their approach used a probit model to predict whether the market would over-estimate the home team (a probit model serves a similar purpose to logistic regression: it predicts a binary win/lose outcome from a linear combination of continuous inputs). Amongst other features in their model, Brailsford et al. used the number of times the home team had outperformed the market spread in its last 4 games. I’m going to call this feature “past luck”, because it measures whether the team did better than the expected spread rather than how well the team actually played. They found a statistically significant negative coefficient for the past luck feature in their prediction of market forecast error. That is, when the home team has been lucky in the past, it is likely to underperform market expectations (the market overestimates it), and conversely, when the home team has been unlucky in the past, it is likely to exceed market expectations (the market underestimates it).
Brailsford et al. also analysed a form of betting in the Australian Football League (AFL) where the aim is to predict the winning margin, categorised into score bins (e.g. home win by 13–24 points). Their model predicted the score bin with better-than-random accuracy, indicating a market bias. Even once bookmaker profit margins were accounted for, they were still able to generate positive returns from simulated bets on their predictions. However, a random betting strategy would also have generated returns this large (or larger) 13% of the time. Because their model combined multiple features to predict the score bin, it is unclear which of them was responsible for the underlying market bias.
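The comparison against random betting can be reproduced in spirit with a Monte Carlo simulation: simulate many sequences of random score-bin bets and ask how often they return at least as much as the model’s bets did. All numbers below (bin count, payout, the model’s return) are hypothetical stand-ins, not figures from the paper:

```python
import random

random.seed(42)

def random_bets_return(n_bets=100, n_bins=8, payout=7.0, stake=1.0):
    """Total profit from betting a fixed stake on a uniformly random
    score bin each game. Bin probabilities and payout are hypothetical."""
    profit = 0.0
    for _ in range(n_bets):
        if random.randrange(n_bins) == 0:  # random pick hits, prob 1/n_bins
            profit += (payout - 1.0) * stake
        else:
            profit -= stake
    return profit

model_return = 5.0  # stand-in for the return the model's bets achieved
sims = [random_bets_return() for _ in range(10_000)]
frac_at_least = sum(r >= model_return for r in sims) / len(sims)
print(f"Random betting matched or beat the model {frac_at_least:.1%} of the time")
```

Because the payout (7.0) is below the fair value of 8.0 for eight equally likely bins, random betting loses money on average; the interesting question is how often it gets lucky anyway, which is exactly the 13% figure Brailsford et al. report for their data.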
Brailsford et al.’s paper was published back in 1995. It’s possible that, even though the bias existed back then, the market could have adjusted to take this into account by now. Do market biases still exist? In my next post, I’ll take a look at recent AFL data to see if such a bias can still be identified.
Link to: Part 1
- Gilovich, T., Vallone, R., & Tversky, A. (1985). The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology, 17(3), 295–314. https://doi.org/10.1016/0010-0285(85)90010-6
- Camerer, C. F. (1989). Does the Basketball Market Believe in the ‘Hot Hand’? The American Economic Review, 79(5), 1257–1261. http://www.jstor.org/stable/1831452
- Brailsford, T. J., Gray, P. K., Easton, S. A., & Gray, S. F. (1995). The Efficiency of Australian Football Betting Markets. Australian Journal of Management, 20(2), 167–195. https://doi.org/10.1177/031289629502000204
Header image courtesy of Rafael Matsunaga.
Thanks to Nicola Pastorello and Maria Mitrevska for proofreading and providing suggestions.