The design flaws that lead to financial explosions


Nuclear power and financial systems both have the capacity to blow up the world. Perhaps there are lessons from one for the other. Charles Perrow, the organisational sociologist, was prompted by the 1979 nuclear incident at Three Mile Island to investigate breakdowns in complex systems. The accident is still the most serious nuclear plant failure in a western economy.

The incident began with a minor defect in the secondary cooling system. Everyone knows nuclear power stations are potentially dangerous so there are many back-up mechanisms. These did not operate as intended because of several other unrelated failures. The end result was a hydrogen explosion that led to a minor release of radioactive material but whose consequences were limited by the containment building. Philadelphia was saved.

An inquiry concluded that everyone was at fault. The greatest opprobrium was reserved for the operators in the control room when the failure occurred. They were slow to understand the contradictory readings they received. But the inquiry had months to acquire an understanding that the engineers had to reach in minutes. The panel made recommendations that, if implemented, would ensure a failure exactly like the one that had occurred could not happen again.

That is what such inquiries usually conclude. Of course they do. None of the components that failed should have failed, and there will always be things the people on the spot might have done that would have minimised the consequences.

As Professor Perrow explained, this is all beside the point. There will not be another accident like Three Mile Island but there will be other, different, accidents at other stations – as indeed there have been. He used the term “normal accidents” to describe the similar incidents that inevitably occur in all the complex systems he observed: marine transport, petrochemical plants, space exploration.

The fundamental problems lie in system design, not the components or the people who try to make these systems work. Two features render systems particularly prone to failure: interactive complexity, which means that everything depends on everything else; and tight coupling, which means that there is little slack to permit self-repair or recovery.

From time to time, I use UPS to send a parcel from Britain to my house in France. Through its online tracking system, I can follow the movements of the package. It is collected on Tuesday, and shipped to Paris overnight. On Wednesday it moves to Lyon and, during the early hours of the morning, is trucked to Nice. On Thursday, we receive a phone call at 8am from a UPS representative – who arrives with the package at lunchtime.

The UPS delivery system, although complex, is linear rather than interactive in its complexity, and loosely coupled. When a parcel failed to arrive, it was easy and quick to establish that the consignment had left Paris but not arrived in Nice and then to discover that a heavy fall of snow in central France had blocked the Autoroute du Soleil. When the drifts and stranded vehicles were cleared, the package reached Lyon two days later. The loosely coupled system could easily accommodate a delayed delivery.

An inquiry into this failure would, of course, blame everyone: France Météo for inadequate snow warnings, the traffic authorities for dilatory clearance and, especially, the truck driver for failing to take avoiding action. Few if any of the improvements to these procedures that would be proposed would be cost effective, and none would in reality eliminate the possibility of subsequent failures.

The lesson for financial services is that the attempt to design a system for zero failure is impractical. The crucial issues are those of system design. Shorter, simpler, linear chains of intermediation are needed, and loose coupling that gives every part of the system loss absorption capacity and resolution capability.

The direction of travel in the past two decades has been the opposite – the multiplication of interactive complexity through the explosion of trading between financial institutions, and ever tighter coupling as timescales are shortened and capital is used “more efficiently”. Finance needs to learn from engineers with experience of complex systems in the face of “normal accidents”.
