Decision-making, John Kay’s way


Successful decision-making is more limited in aspiration, more modest in its beliefs about its knowledge of the world, more responsive to the reactions of others, more sensitive to the complexity of the systems with which it engages. Complex goals are generally best achieved obliquely.

In a letter to the English chemist Joseph Priestley, the 18th-century polymath Benjamin Franklin outlined a procedure for making decisions: “Divide half a sheet of paper by a line into two columns; writing over the one Pro, and over the other Con. Then, during three or four days’ consideration, I put down under the different heads short hints of the different motives, that at different times occur to me for or against the measure.

“When I have got them all together in one view, I endeavour to estimate the respective weights… I have found great advantage for this kind of equation, in what may be called moral or prudential algebra.”
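Spelled out as arithmetic, the “moral or prudential algebra” amounts to little more than a weighted tally of the two columns. A toy sketch of that tally follows; the items and weights are invented purely for illustration, not taken from Franklin or Darwin.

```python
# A toy rendering of Franklin's "moral or prudential algebra":
# list the motives on each side, attach rough weights, and see
# where the balance lies. Items and weights here are invented
# solely to illustrate the procedure.

pros = {"children": 3, "companionship": 2, "charms of music": 1}
cons = {"loss of freedom": 3, "forced visits to relatives": 1}

balance = sum(pros.values()) - sum(cons.values())
print("marry" if balance > 0 else "do not marry", f"(balance = {balance})")
```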

Almost a century later, Charles Darwin would attempt to follow Franklin’s Rule. He set out the pros and cons of marriage in two opposing columns. A wife would provide “children, companionship, the charms of music and female chitchat”. She would be “an object to be beloved and played with”, although he did not seem to attach great weight to this benefit, conceding only that a wife was in this respect “better than a dog anyhow”. But Darwin also noted the disadvantages of the married state: the prospect of “being forced to visit relatives, and to bend in every trifle”, and the loss of “freedom to go where one liked” and of “the conversation of clever men at clubs”.

We snigger at the moral algebra of Franklin and Darwin – and so did they: below his assessment, Darwin scrawled “it is intolerable to think of spending one’s whole life, like a neuter bee, working, working – only picture to yourself a nice soft wife on a sofa.” He ended his notes with “marry – marry – marry QED”. The following year, he wed Emma Wedgwood and the couple had 10 children.

Franklin himself knew that moral algebra was generally a rationalisation for a decision taken otherwise, setting out not just Franklin’s Rule, but Franklin’s Gambit – the process of finding a weighty and carefully analysed rationale for a decision that has already been made. Everyone who has ever worked in a large organisation has seen frequent examples.

I profited handsomely from it. I spent part of my career building models and constructing economic arguments for corporate clients. When I began, I thought I was helping them with their decision-making. But I came to realise that this was rarely true. The models, the arguments, were used to justify conclusions that had been arrived at earlier – sometimes internally, sometimes externally. I was a pawn in Franklin’s Gambit.

In my academic life, I taught the standard concepts of modern economic theory, based on efficient markets populated by maximising agents. Yet, as I saw more of successful businesses, I understood that they didn’t maximise anything. Such businesses were complex political organisms: they contained and were influenced by diverse individuals and groups with diverse goals, and the effective manager was someone who could mediate between these conflicting forces. If these companies talked about shareholder value – and increasingly they did – it was an instance of Franklin’s Gambit, a legitimising rhetoric rather than a real guide to action.

And the more shareholder value became a guide to action, the worse the outcome. On the board of the Halifax Building Society, I voted in 1995 for its conversion to a “plc”. We would allow the company to pursue the goal of maximising its value untrammelled by outmoded concepts of mutuality: in barely a decade, almost every last penny of that value was destroyed. In 1996, as my thoughts on this began to form, I went to the CBI annual conference and described how ICI, for decades Britain’s leading industrial company, had recently transformed its mission statement from “the responsible application of chemistry” to “creating value for shareholders”. The company’s share price peaked a few months later, to begin a remorseless decline that would lead to its disappearance as an independent company.

In the same talk, I described the similar shift at Marks and Spencer. Simon Marks had built an iconic business; in the 1990s, his successors raised operating margins by eroding the franchise with customers and squeezing suppliers. In 1998, margins and profits reached an all-time high, but, within months, the company’s sales and reputation fell off a cliff.

These unanticipated results reflected the profit-seeking paradox, well described in James Collins and Jerry Porras’s fine book Built to Last: the most profitable companies were not the most profit-oriented. Few companies have ever been as focused on profit as Lehman Brothers – or Bear Stearns, where the sign “Let’s make nothing but money” hung over the trading floor. Such businesses indeed made nothing but money: not products that were valued by customers, nor client loyalty, nor friends, and in the end lost more money than they had ever made. But the phenomenon of obliquity – that complex goals are rarely best achieved when approached directly – was not confined to the profit paradox. The happiest people were not those who pursued happiness; the wealthiest men were not the most materialistic; the great painting was not the most faithful representation.

Obliquity is in direct conflict with Franklin’s Rule – yet some problems can be solved in just the manner the Rule dictates. Take Sudoku. The characteristics of Sudoku that make this possible are that there is a unique solution, and we know when we have found it; that even if we do not know what will happen, we know the range of possibilities and can attach probabilities to them; that interactions with other people and activities are limited and predictable; and hence that the number of outcomes is finite and can be analysed.

If the world were like Sudoku, all decision-making could be tackled in a similarly analytic way. But problems in business and political life rarely meet these conditions, and human brains are attuned to real problems, not artificial ones. Sudoku is interesting only because people don’t solve problems as a computer would. The human strategy for Sudoku, as for life, does not power through masses of calculations, but iterates, adapts and retreats when attempted solutions prove less promising than they appear. This latter approach is often the more powerful.
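To make the iterate-and-retreat pattern concrete, here is a minimal backtracking sketch for Sudoku – an illustrative addition, with invented helper names, not a claim about how people (or Kay) actually solve the puzzle. A candidate is tried, pursued while it looks promising, and abandoned as soon as it leads to a dead end.

```python
# A minimal backtracking Sudoku solver: try a move, press on while it
# looks promising, and retreat (undo it) when the attempt dead-ends.

def candidates(grid, r, c):
    """Digits that can legally go in cell (r, c) of a 9x9 grid (0 = empty)."""
    row = set(grid[r])
    col = {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    box = {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [d for d in range(1, 10) if d not in row | col | box]

def solve(grid):
    """Fill the grid in place; return True if a complete solution is found."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in candidates(grid, r, c):
                    grid[r][c] = d      # try a move
                    if solve(grid):     # press on while it looks promising
                        return True
                    grid[r][c] = 0      # retreat when it doesn't work out
                return False            # dead end: force backtracking
    return True                         # no empty cells left: solved
```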

Friedrich von Hayek, in his prescient account of the failure of economic planning, understood that order without design was central to social institutions: “Nobody has yet succeeded in deliberately arranging all the activities that go on in a complex society,” he wrote. “If anyone did ever succeed in fully organising such a society, it would no longer make use of many minds, but would be altogether dependent on one mind; it would certainly not be very complex but extremely primitive – and so would soon be the mind whose knowledge and will determined everything.”

. . .

It was, of course, Darwin – in his scientific life – who understood why adaptation is so often more powerful than direction. A reflex leads us to pull our hands away from a hot stove far more quickly than we could do if we had to calculate the effects. We don’t pull our hand away because it will damage tissue and bone – we pull it away because it hurts. We probably pull away our hand in anticipation even before it hurts. If we know the stove is very hot, we will not put our hand on it in the first place.

If we think about it, we recognise that we will suffer serious injury unless we pull our hand away, but we don’t actually have that thought, and that isn’t why we remove our hand. And yet it is the potential for tissue damage that leads us to pull away. Pain is an evolved capacity that enables us to avoid damage by forcing us to withdraw from threatening situations.

Some control over the pain reflex helps us. The source of pain may be beneficial – an injection, or a life-saving operation. We learn to recognise these anomalies and to develop rules to deal with them: “Accept pain administered by a trusted physician,” for example. A mixture of adaptation and calculation is better than either alone. Some people suffer a genetic abnormality in which they do not experience pain, and they generally die young through the accumulation of tissue injuries.

Pain is an example of an uncalculated reaction that is more effective than a considered response. It is an evolved response – the result of natural selection – but in humans the effectiveness of the response is reinforced by the learning that comes from painful experience. Adaptation describes the process through which experience evolves into skill.

Psychologist Gary Klein has studied the expertise of people with exceptional practical skills. One of his experiments involved showing videos of paramedics in action – some novice, some expert – to various observers. He discovered that both experienced paramedics and lay people were more successful at distinguishing the novices from the professionals than were teachers of paramedic skills. The teachers monitored adherence to the rules they taught and saw such adherence more often in the novices. Lay people, by contrast, didn’t know or care whether the practitioners were following the rules or not – they just valued results. And they saw results most often in people who had been well trained, had reinforced that training through experience and who stood out for their expertise.

But Franklin’s Rule – the idea that good decisions are the product of orderly processes – is more alive than ever in public affairs. The success of the physical sciences has encouraged us to believe there might be a science of decision-making. With its aid, all kinds of problems could be managed objectively. Such a procedure would lead every conscientious person to the same answer. As a result, both political and personal disputes could be resolved by the collection of evidence and the pursuit of rational discourse.

There is not, and never will be, such a science. Our objectives are typically imprecise, multifaceted and change as we progress towards them – and properly so. Our decisions depend on the responses of others, and on what we anticipate these responses will be. The world is complex and imperfectly known, and this will remain true however much we analyse it.

We do not solve problems in the way the concept of decision science implies because we can’t. The achievement of the great statesman is not to reach the best decision fastest, but to mediate effectively among competing views and values. The achievement of the successful business leader is not to articulate visions of the far future, but to match continuously the capabilities of the firm to the changing market environment. The test of financial acumen is to navigate successfully through irresolvable uncertainties.

. . .

Our approaches are iterative and adaptive. We make our choices from a limited range of options. Our knowledge of the relevant information, and of what information is relevant, is imperfect. Different people make different judgments in the same situation, not just because they have different objectives, but because they observe different options, select different information, and assess that information differently; even with hindsight it will often not be possible to say who was right and who was wrong.

But the influence of Franklin’s Rule is deeply ingrained, and so we often use it to describe, falsely, how we arrived at our conclusions. In public decision-making there is an appearance of describing objectives, evaluating options and reviewing evidence. But it is often a sham. The objectives are dictated by the conclusions, with options chosen to make the favoured course look attractive, and data selected to favour the required result. What is described as evidence-based policy is, in reality, policy-based evidence.

Franklin’s Gambit is how we come to have dodgy dossiers and doctored intelligence reports. Franklin’s Gambit gives us “impact assessments” that are prepared after, not before, the favoured policy has been chosen. We play Franklin’s Gambit with models in which most of the numbers are made up, and can be reworked to generate any desired outcome. We devote hours to staff evaluations, quality assessments and risk reporting, but these hours are not really devoted to evaluation, assessment or reporting: they are spent ticking boxes.

Criticism of Franklin’s Rule is not an attack on reason in decision-making, but an attack on a spurious notion of rationality. Reason is often contrasted with intuition, but “intuition” is a loose term. The same word is used to cover both the tacit knowledge that skilful practitioners develop through many years of experience and the ravings of madmen who hear voices in the air. But “intuitive” knowledge and expertise are subject to just the same tests of empirical validity as other claims to knowledge.

In Blink, Malcolm Gladwell uses the example of the Getty Kouros statue, whose provenance was “validated” by scientific testing, but which experts immediately perceived as fake. The important point is not that the judgments were quick, but that they were expert: they were made by people who had a history of being right about such things. Their judgments reflected the eclectic knowledge and diverse methods that people with genuine skills employ in practical problems. Expert judgment is how we deal with a complex world – and assessing the quality of expert judgment is how we choose people to help us deal with it.

. . .

It is hard to overstate the damage recently done by leaders who thought they knew more about the world than they did – the managers and financiers who destroyed great businesses in the pursuit of shareholder value; the architects and planners who believed that cities could be drawn on a blank sheet of paper; and the politicians who believed they could improve public services by the imposition of targets. They failed to acknowledge the complexity of the systems for which they were responsible and the multiple needs of the individuals who operated them.

The gravest cases of bad public decision-making of the past decade – the Iraq war and the credit expansion of 2003-07 – were predicated on an assumed knowledge of the world the decision-makers did not possess. The occupants of the Bush White House and the men who fulfilled senior roles in big banks not only believed they knew more about the world than they did, but supposed they had more influence on the environment in which they operated than they did.

Politicians imagined they could reconstruct the Middle East on the basis of an American model of lightly regulated capitalism and liberal democracy, although they had not the slightest knowledge or understanding of the societies they sought to remodel. The banking executives supposed they were in control of large institutions, when in reality the floors beneath them were occupied by a rabble of self-interested individuals determined to evade any controls on their activities. Financiers believed that models they did not understand enabled them to manage risks they did not understand, attached to securities they did not understand. That is how the UK and US entered this decade with foreign policy in tatters, a financial system close to meltdown and fiscal policy in disarray.

Successful decision-making is more limited in aspiration, more modest in its beliefs about its knowledge of the world, more responsive to the reactions of others, more sensitive to the complexity of the systems with which it engages. Complex goals are generally best achieved obliquely.
