Can we predict when to sack a football manager?

2 August 2019

And why Steve Bruce isn't so bad after all...

With a week to go until the first games of the new football season get underway, the media are already using betting odds to speculate on who will be the first manager to be sacked, with Frank Lampard at Chelsea and Ole Gunnar Solskjaer at Manchester United being the bookies’ favourites. The fans don’t rate Steve Bruce’s chances at Newcastle either, with some threatening to boycott the club before he had even taken charge.

The FT argued last year that economists try to use their models to predict football events so they can prove they are actually useful - a bit like the Ancient Greek philosopher Thales of Miletus, described by Aristotle, who created the world’s first derivative contracts in olive oil presses in order to prove that philosophers were not worthless.

With this in mind, we also developed a model to see if we could predict when a football manager was no longer being effective, and in common parlance, should be sacked. But does sacking a manager even work?

Over the last 10 years we have not found a single example of a sacking leading to a club winning the league within the same season, although some teams have won cup competitions after replacing a manager – most prominently, Roberto Di Matteo winning the Champions League with Chelsea. Changing manager at the top of the table can improve the team’s final position, but not significantly.

The real effect of sacking is found in mid-table, around the survival and relegation zone, where even multiple sackings in a single season can be shown to have helped a club survive for another season in the Premier League. To summarise: from 2009 to 2018 there were around 97 managerial changes, and only 17 of the clubs that made a change were relegated, out of a possible 30 relegation places.

So what about our model? A straightforward way of looking for good performance is to measure points achieved against total wage spend per club. But this doesn’t fully take into account the differing resources that clubs have access to. So there is good justification for building a model that looks at manager performance game by game, uses the actual resources of each club individually, and accounts for other factors that might affect performance but lie beyond the manager’s control, such as injuries or suspensions among key players. This allows us to adjust expected against achieved results.
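The naive benchmark can be sketched in a few lines. The club names and figures below are purely illustrative, not real data, and this is only the crude points-per-pound measure the paragraph describes, not the full model:

```python
# Naive benchmark: league points achieved per £m of wage spend.
# Clubs and figures are hypothetical, for illustration only.
clubs = {
    # club: (points, wage bill in £m)
    "Club A": (81, 220.0),
    "Club B": (45, 60.0),
    "Club C": (38, 95.0),
}

# Rank clubs by points per £m of wages, best value first.
for club, (points, wages) in sorted(
    clubs.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True
):
    print(f"{club}: {points / wages:.2f} points per £m")
```

The obvious flaw, as noted above, is that this ratio rewards small wage bills without asking what a manager could realistically have achieved with them.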

We can then measure each manager’s actual performance against a simulation of what the model would expect given their available resources, to identify the statistically most likely outcomes. This lets us rank managers by the difference between their actual level of performance, measured by the average number of points per game, and the model-predicted level. To do this we collect data game by game on injuries, suspensions, total wages, net transfer spend, and extra games played by each team. But how far does each of these elements contribute to explaining the points per game gained by each manager?

  • Higher wage spend does equal better performance
  • Lack of players (through suspension or injury) equals worse performance
  • Extra games played in cups makes no difference
  • Net transfer spend makes no difference
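One way to estimate contributions like these is an ordinary least squares regression of points per game on the four factors. The sketch below uses synthetic data generated to mirror the findings above (wages help, absences hurt, the other two do nothing); the linear specification, variable names, and scales are all assumptions for illustration, not the authors’ actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic game-by-game observations

# Illustrative covariates (scales are made up, not real data).
log_wages = rng.normal(4.0, 0.6, n)    # log of total wage bill
players_out = rng.poisson(2.0, n)      # injuries + suspensions that game
extra_games = rng.poisson(1.0, n)      # midweek cup fixtures played
net_spend = rng.normal(0.0, 1.0, n)    # standardised net transfer spend

# Generate points per game consistent with the bullet points:
# wages raise it, absences lower it, the other two have zero effect.
ppg = 0.45 * log_wages - 0.08 * players_out + rng.normal(0, 0.3, n)

# OLS via least squares: column of ones gives the intercept.
X = np.column_stack([np.ones(n), log_wages, players_out, extra_games, net_spend])
beta, *_ = np.linalg.lstsq(X, ppg, rcond=None)
for name, b in zip(["const", "log_wages", "players_out", "extra_games", "net_spend"], beta):
    print(f"{name:12s} {b:+.3f}")
```

Run on this synthetic sample, the wage coefficient comes back clearly positive, the absence coefficient clearly negative, and the coefficients on extra games and net spend indistinguishable from zero – the same pattern the bullets report.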

We had previously populated our model with data from 2004/05 to 2007/08. It ran well, and the results it generated gave us the confidence to predict within the first nine games whether a manager was working out. We have now updated it to the end of the 2016/17 season. Over the extended period the majority of managers are performing at or above expectations on our measures, and none of the managers added in the new sample are underperforming. In detail:

  • We now have a sample of 94 managers over the period taking into account 7,144 games
  • 34 (36%) are ranked as performing above expectations – including Steve Bruce
  • 53 (56%) are ranked as performing at expectation – this includes having no manager at all
  • Seven (7%) are ranked as performing below expectation
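The above/at/below classification can be illustrated with a small Monte Carlo sketch: simulate the distribution of points per game the model would expect for a manager, then ask whether their actual average falls inside a central interval of that distribution. The win/draw probabilities, the 95% interval, and all the numbers here are illustrative assumptions, not the paper’s actual parameters:

```python
import random

random.seed(42)

def simulate_expected_ppg(win_p, draw_p, games, runs=10_000):
    """Monte Carlo distribution of average points per game for a manager
    whose model-implied per-game win/draw probabilities are given."""
    totals = []
    for _ in range(runs):
        pts = 0
        for _ in range(games):
            r = random.random()
            pts += 3 if r < win_p else (1 if r < win_p + draw_p else 0)
        totals.append(pts / games)
    return sorted(totals)

def classify(actual_ppg, win_p, draw_p, games):
    """Place a manager's actual points per game against the central 95%
    of the simulated expectation."""
    dist = simulate_expected_ppg(win_p, draw_p, games)
    lo = dist[int(0.025 * len(dist))]
    hi = dist[int(0.975 * len(dist))]
    if actual_ppg > hi:
        return "above expectation"
    if actual_ppg < lo:
        return "below expectation"
    return "at expectation"

# A manager the model expects to win 40% and draw 30% of 38 games
# (expected ~1.5 points per game) who actually averaged 2.1.
print(classify(actual_ppg=2.1, win_p=0.40, draw_p=0.30, games=38))
```

A manager averaging around the expected 1.5 points per game would land “at expectation”, which is also how a season with no permanent manager at all tends to score.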

Among the managers performing above expectation are what we would classify as ‘superstar’ managers. The model places them at the very top of the distribution: it is highly unlikely that any other manager could have performed better with the resources available to that club. This group even includes Sam Allardyce and, to comfort Newcastle fans, Steve Bruce.

We think our model demonstrates that it is possible to measure football manager performance. It could be used by club owners as a sense check when considering a change – for instance, it would tell them whether a current poor run of form is actually what they should expect given the resources the manager controls. We have found superstar managers who mostly over-perform in the short term, and also managers with very long careers that the model would suggest are undeserved. When trying to avoid relegation, it pays to sack, and to sack multiple times. Finally, we find that success can be bought at the top end of the league by changing managers (with effects seen in subsequent seasons), but this costs a lot of money in compensation payments.

If we are worried about the sack race given the additional pressure that it places upon managers and the uncertainty generated for the club and its fans, then perhaps we could also bring in a transfer window for managerial change!

(Additional co-author: Dr Tom Markham)