At the recent Jackson Hole conference, Andrew Haldane of the Bank of England again reminded the world’s financial policy makers of a central truth about the 2008 crisis. The principal measure of bank resilience prescribed for and by regulators around the world – the capital ratios calculated according to principles laid down by the Basel Committee on Banking Supervision – had no value whatever in predicting the probability that a bank would fail. But a simple measure of the bank’s leverage ratio, which anyone with a calculator could compute, did.
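The distinction between the two measures is simple arithmetic. The sketch below uses invented balance-sheet numbers and stylised risk weights (the names and figures are illustrative assumptions, not real Basel calibrations): two banks can report the same risk-weighted capital ratio while one is far more leveraged than the other.

```python
# Hypothetical illustration with invented numbers: the Basel-style
# risk-weighted ratio versus the simple leverage ratio.

def risk_weighted_ratio(capital, assets_by_weight):
    # capital divided by risk-weighted assets (amount x risk weight)
    rwa = sum(amount * weight for amount, weight in assets_by_weight)
    return capital / rwa

def leverage_ratio(capital, total_assets):
    # the measure "anyone with a calculator could compute":
    # capital divided by total, unweighted assets
    return capital / total_assets

capital = 8.0

# Bank A: 100 of corporate loans, stylised risk weight 1.0
bank_a = [(100.0, 1.0)]
# Bank B: 500 of nominally "safe" assets, stylised risk weight 0.2
bank_b = [(500.0, 0.2)]

print(risk_weighted_ratio(capital, bank_a))  # 0.08
print(risk_weighted_ratio(capital, bank_b))  # 0.08 -- identical on paper
print(leverage_ratio(capital, 100.0))        # 0.08
print(leverage_ratio(capital, 500.0))        # 0.016 -- five times the leverage
```

On the regulatory measure the two banks look equally sound; the unweighted ratio reveals that the second holds five times the assets against the same capital.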
The world’s financial policy makers nodded in admiration of Mr Haldane’s analysis. Then they continued as before, congratulating the Basel Committee on its excellent work. To expect otherwise would be to miss the point of these conclaves. Their purpose is not to discover, far less to tackle, the root causes of instability in the global financial system. Their purpose is to give politicians and the public a sense that “something is being done”, while enabling banks and regulators to continue operating as close as possible to the way in which these activities were conducted before 2007. Few processes meet this requirement for irrelevant busyness better than the meetings of the Basel Committee.
But Mr Haldane’s analysis represents a fundamental challenge to this orthodoxy. The likely explanation of his discovery that more complex rules are worse is to be found in Goodhart’s law. This proposition was first set out in the 1970s by the economist Charles Goodhart, in the context of the implementation of monetary policy.
Prof Goodhart suggested that any measure adopted as a target loses the information content that appeared to make it relevant. People change their behaviour to meet the target. These responses change the relationship between the target – the measure of money supply, or the value at risk – and the objective that policy makers seek to influence: the availability of credit, or the risk exposure of a bank. The target becomes a bad measure of success in reaching the objective as soon as it is adopted as a target. That is why the risk-weighted measure of Basel, which was a regulatory target, proved to be less reliable than the leverage ratio, which was not.
Gwyn Bevan and Christopher Hood illustrated how Goodhart’s law operated when Britain’s National Health Service adopted targets as performance measures. One target, for example, required that 90 per cent of emergency calls for an ambulance receive a response within eight minutes. Those responsible congratulated themselves when ambulance services showed they were meeting it.
But analysis revealed that an astonishingly large number of calls were answered in between seven and a half and eight minutes. Perhaps the figures were cooked – there is often latitude in recording the time at which a call is logged and the time at which an effective response is provided. Perhaps dispatchers and crews prioritised calls where the anticipated response time was around the eight-minute deadline over those that required the ambulance to make a much longer or shorter trip. We do not know. We do know that the target was largely met, and have no easy means of discovering whether the service was better or worse as a result.
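The statistical signature Bevan and Hood found – a spike of responses bunched just under the threshold – can be sketched with entirely synthetic data. The distribution and the gaming rule below are invented for illustration; they make no claim about how real ambulance services behaved.

```python
# Synthetic illustration (invented numbers, not NHS data): if responses that
# would narrowly miss an eight-minute target are squeezed under it, recorded
# times bunch just below the threshold.
import random

random.seed(1)  # deterministic for the sake of the example

def recorded_response_time():
    t = random.uniform(2.0, 14.0)  # assumed underlying time, in minutes
    # Gaming rule (pure assumption): most responses that would land just
    # over the deadline get prioritised or re-timed to fall just under it.
    if 8.0 < t < 9.5 and random.random() < 0.8:
        t = random.uniform(7.5, 8.0)
    return t

times = [recorded_response_time() for _ in range(10_000)]
just_under = sum(1 for t in times if 7.5 <= t < 8.0)
just_over = sum(1 for t in times if 8.0 <= t < 8.5)
print(just_under, just_over)  # a spike just under eight minutes, a trough just over
```

Without gaming, the two half-minute windows either side of the deadline would hold roughly equal counts; with it, the window just under eight minutes swells and the one just over empties – the target is “met” while telling us little about the service.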
And so with capital requirements. The additional complexity of risk weighting stimulated regulatory arbitrage – the creation of instruments that transfer assets from one risk category to another while preserving their essential economic characteristics. The categorisation encouraged banks to reverse-engineer products to meet the demands of rating agency models. The resulting complexity diminished the system’s resilience.
Complex measures of bank security failed, not because of specific flaws in the detail of their design, but because such failure is intrinsic to that style of regulation. Serious attempts to promote financial stability must focus on reform of structure and incentives, not on a vain attempt to prohibit undesirable behaviour generated by bad structures and misplaced incentives.