Michael Gove’s thoughtful, even inspiring, Ditchley lecture confirmed his reputation as an unusually intellectual exception to the uninspiring norm of our politicians. But I jibbed rather when he suggested that civil servants of the future needed to be better versed in Monte Carlo methods and Bayesian statistics. As readers of my recent book with Mervyn King, Radical Uncertainty, will know, I fear that policymaking relies too much rather than too little on Monte Carlo methods and Bayesian statistics.

As Mr Gove recognised, not everyone leaves school – or a PhD programme – familiar with these concepts, so here is a crib sheet for those who nevertheless want to be part of the new civil service. Monte Carlo methods were originated by one of the great polymaths of the twentieth century, John von Neumann, in the course of his work on nuclear fission and fusion immediately after the Second World War. In a nuclear reaction, many different particles each move randomly, but with known probability distributions. Modelling these interactions was mathematically highly demanding, and the researchers had the idea that the process might be simulated on the ENIAC, the world’s first general-purpose electronic computer, which had just been completed at the University of Pennsylvania.

So the massive ENIAC machine was shipped to the US military’s ballistics laboratory at Aberdeen, Maryland. And the rest, as they say, is history. The US successfully tested a hydrogen bomb in 1952. Significantly, progress was made on the underlying mathematics of joint probability distributions, and one application was the ‘Gaussian copula’. This became notorious in 2009 as ‘the formula that killed Wall Street’ because it underpinned many of the risk models which so lamentably failed in the global financial crisis. Not as destructive as a thermonuclear device, perhaps, but still to be handled with care.
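The copula idea itself is simple enough to sketch in a few lines of code. The toy example below is purely illustrative (the correlation, sample size and tail cut-off are invented, and it bears no relation to any actual pricing model): it couples two uniform variables through correlated normals, and shows that with high correlation, jointly bad outcomes occur far more often than independence would suggest – the kind of joint behaviour those risk models were trying to capture.

```python
import math
import random

def gaussian_copula_pair(rho, n=50_000, seed=7):
    """Sketch of the Gaussian copula idea: take two correlated standard
    normals and push each through the normal CDF, yielding a pair of
    uniform marginals whose dependence comes from the chosen rho."""
    rng = random.Random(seed)
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # normal CDF
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        pairs.append((phi(z1), phi(z2)))
    return pairs

# With rho = 0.9, "both outcomes in the worst 5%" happens far more often
# than the 0.05 * 0.05 = 0.25% that independence would imply.
pairs = gaussian_copula_pair(0.9)
joint_tail = sum(u < 0.05 and v < 0.05 for u, v in pairs) / len(pairs)
print(joint_tail)
```

The danger the article describes lies not in this mechanism but in trusting the chosen rho – and the normal tails – to describe the real world.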

But today you do not have to be as clever as von Neumann, have access to an ENIAC computer, or know a copula from a cupola to undertake a Monte Carlo simulation – a laptop and a spreadsheet will do. You don’t know what inflation and interest rates will be in 2075, and nor do I, and nor does anyone else. But you can guess. And you can make a thousand different guesses, and plug the numbers into a simple formula, and your machine will tell your pension fund trustees what contributions are needed to ensure that your pension is safe. Believe it or not, that is what they do.
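To show how little machinery such an exercise requires, here is a minimal sketch in Python rather than a spreadsheet. Every number in it – the return distribution, the horizon, the target pot – is an invented guess, which is rather the point:

```python
import random

def simulate_fund(contribution, years=30, n_trials=1000, seed=42):
    """Monte Carlo sketch: grow an annual contribution under randomly
    guessed real returns, and report the fraction of trials that reach
    a (purely illustrative) target pension pot."""
    rng = random.Random(seed)
    target = 500_000  # illustrative target; not a recommendation
    successes = 0
    for _ in range(n_trials):
        pot = 0.0
        for _ in range(years):
            # each year's real return is a guess: normal, mean 3%, sd 10%
            r = rng.gauss(0.03, 0.10)
            pot = (pot + contribution) * (1 + r)
        if pot >= target:
            successes += 1
    return successes / n_trials

print(simulate_fund(10_000))  # share of guesses in which the pot is "safe"
```

The machine duly reports a probability; whether the guesses fed into it deserve that label is another matter entirely.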

Like Monte Carlo methods, Bayesian statistics are likewise sometimes useful and often prone to abuse. The origins of the technique are equally exotic but could hardly be more different. The Reverend Thomas Bayes was a Presbyterian clergyman in Kent, and after he died his friend and fellow minister Richard Price discovered among Bayes’ papers a mathematical result now known as Bayes’ theorem, which enables the calculation of conditional probabilities. If we agree to settle a wager by the best of three fair games of chance, then at the outset the chances that either of us comes out on top are equal. But if I win the first game, the probability that I will scoop the jackpot rises to ¾ and yours falls to ¼.
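That claim is easy to verify by enumeration. The sketch below reads the wager as the best of three games (with ‘A’ standing for my wins), lists the eight equally likely sequences, and then conditions on the first result, in the spirit of Bayes’ theorem:

```python
from itertools import product
from fractions import Fraction

# Enumerate the 8 equally likely outcomes of three fair games
# ('A' = I win that game, 'B' = you win it).
outcomes = list(product('AB', repeat=3))

def match_winner(seq):
    """Whoever wins the majority of the three games takes the jackpot."""
    return 'A' if seq.count('A') >= 2 else 'B'

# Unconditional chance that I take the match.
p_a = Fraction(sum(match_winner(s) == 'A' for s in outcomes), len(outcomes))

# Condition on my winning the first game: restrict attention to the
# outcomes consistent with that evidence, as Bayes' theorem directs.
given = [s for s in outcomes if s[0] == 'A']
p_a_given = Fraction(sum(match_winner(s) == 'A' for s in given), len(given))

print(p_a)        # 1/2
print(p_a_given)  # 3/4
```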

The Reverend Bayes is not thought to have frequented the gaming tables. If the interests of Price are any guide, he was probably concerned to refute the teachings of the irreligious Scottish philosopher David Hume, who was exercised by the problem of induction – given that the sun has always risen each morning, how do we know if it will rise tomorrow? Price was especially anxious to use his friend’s work to refute Hume’s scepticism about the occurrence of miracles. Perhaps it is the need for miracles which led the present government to take an interest in the work of Reverend Bayes.

Without prejudice to the incidence of miracles, Bayes’ theorem is a useful tool in some problems – readers might refer back to the two-envelope problem or the earlier posts on the Monty Hall problem (also discussed, with other similar problems, in Radical Uncertainty). Monty Hall is perhaps the most famously counter-intuitive application of the theorem. And the style of Bayesian reasoning – adjusting your views in response to new information as it becomes available – is something we might properly expect of our politicians and civil servants, even if it is not what we always – or perhaps often – see.
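For readers who prefer simulation to argument, a few lines of Python settle the Monty Hall question empirically. This is an illustrative sketch of the standard version of the puzzle, not drawn from the posts mentioned above:

```python
import random

def monty_hall(switch, n_trials=100_000, seed=1):
    """Simulate the Monty Hall game and return the contestant's win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        doors = [0, 1, 2]
        car = rng.choice(doors)
        pick = rng.choice(doors)
        # The host opens a door that hides a goat and is not the pick.
        opened = rng.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / n_trials

print(monty_hall(switch=False))  # close to 1/3
print(monty_hall(switch=True))   # close to 2/3
```

Sticking wins about a third of the time, switching about two-thirds – the counter-intuitive answer Bayes’ theorem delivers analytically.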

But the methods of Bayesian statistics are properly applicable only when you know the properties of the underlying model which is generating the data. And often you don’t. Return again to the global financial crisis. Risk models generated estimates of probabilities which would have rendered what actually happened virtually impossible. But it did happen. Not because the statistical technique is flawed but because the risk modellers did not understand the processes which yielded the data to which they applied their methods. That is why the American Statistical Association has recently appealed for an end to the use of the phrase ‘statistically significant’. Good luck with that.

Mr Gove is right to look for more understanding of Monte Carlo methods and Bayesian statistics in government – not because these methods are insufficiently applied, but because it is only if you understand a technique that you also understand its limitations. In the run-up to both the 2007–08 financial crisis and the present Covid-19 pandemic, serious mistakes were made by decision-makers who put more faith in these techniques than was justified.