Ergodicity
Radical Uncertainty begins with two signature examples of the misuse of probabilistic reasoning: the decision by Obama to order the raid by the SEALs on Osama bin Laden’s compound, and the attempt by Goldman Sachs, in common with other banks, to assess the risks in their portfolios. Both problems illustrate the common position of decision-makers in business, politics and finance who face irresolvable uncertainties but must still act. Moreover, both examples have the characteristics of what we describe as ‘large world’ problems. The distinction between ‘small world’ and ‘large world’ problems is due to ‘Jimmie’ Savage, coauthor with Milton Friedman of the subjective expected utility (SEU) approach to uncertainty, a methodology now dominant in economics, especially financial economics.
That distinction between small and large worlds may be described as one between ‘puzzles’ and ‘mysteries’. Puzzles are completely specified problems with unique solutions – and even if these solutions are hard to compute, all competent observers will agree on the solution once it has been determined. ‘Mysteries’ are incompletely described: not all options are necessarily completely specified, and it may not be obvious even after the event which was the appropriate course of action. Computers outperform humans in small worlds because they don’t make computational errors. Computers find solutions in large worlds only by finding analogous small-world problems, and the analogies may or may not be useful.
The distinction between mysteries and puzzles is due to Greg Treverton, former chair of the US National Intelligence Council, and is paralleled in most practical subjects – tame and wicked problems in urban planning and medicine, aleatory and epistemic uncertainty in engineering. The description of ‘black swans’ and the distinction between known and unknown unknowns describes similar issues but has been so misused in popular discourse as to be no longer helpful. The parallels in legal reasoning are particularly complex and instructive and are discussed at some length in our book.
SEU is derived from the idea that the axioms of rational choice generally used by economists to analyse consumer behaviour can be extended to choice under uncertainty. SEU claims to be both normative and descriptive – its proponents claim both that its axioms are definitive of rationality and that individuals are generally rational. The principal, and certainly most powerful, argument for this position is that individuals who do not behave as the axioms require can be ‘Dutch booked’ – confronted with a series of bets whose overall result is a certain loss. This is evidently a ‘small-world’ proposition – only in small worlds can this outcome be definitively identified and the argument found compelling.
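The Dutch book argument can be made concrete with a small numerical sketch (the function and figures below are our own illustration, not from the book): an agent whose probabilities for an event and its complement sum to more than one will pay more for the pair of bets than the pair can ever return.

```python
def dutch_book_loss(p_event, p_complement, stake=1.0):
    """Net result for an agent who buys a bet on an event and a bet
    on its complement, each priced at the agent's stated probability
    times the stake, and each paying `stake` if it wins.  Exactly one
    of the two bets pays off, so the result is the same however the
    event resolves."""
    cost = (p_event + p_complement) * stake
    payoff = stake  # exactly one bet wins, whatever happens
    return payoff - cost

# Incoherent beliefs: P(rain) = P(no rain) = 0.6, summing to 1.2.
# Whatever happens, the agent collects 1 but has paid 1.2 -
# a certain loss of 0.2.
loss_incoherent = dutch_book_loss(0.6, 0.6)

# Coherent beliefs sum to 1 and break even at these fair prices.
loss_coherent = dutch_book_loss(0.6, 0.4)
```

The point of the sketch is that the loss is independent of the outcome; only in a fully specified small world can the bookmaker construct such a book with certainty.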
Completeness is a key axiom of rational choice. In SEU, this axiom asserts that individuals have preferences over every imaginable gamble. These preferences are subjective, resulting from a combination of subjective probabilities – Bayesian priors – and utilities – subjective evaluations of the gains and losses associated with the gambles. By posing problems with identifiable objective probabilities, researchers can gain insight into the origins of these preferences – measuring the ‘risk appetites’ of respondents. Financial advisers are formally required to do this, and similar procedures are widely employed less formally.
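The mechanics can be sketched in a few lines (the utility function and numbers below are our own illustration): SEU scores each gamble by weighting the utility of each outcome by its subjective probability, and a concave utility function is how risk aversion enters.

```python
import math

def subjective_expected_utility(gamble, utility):
    """Score a gamble - a list of (subjective probability, payoff)
    pairs - as the probability-weighted utility of its payoffs."""
    return sum(p * utility(x) for p, x in gamble)

# A concave utility (here the square root) encodes risk aversion.
u = math.sqrt

certain = [(1.0, 100.0)]              # 100 for sure
gamble  = [(0.5, 0.0), (0.5, 200.0)]  # fair coin: 0 or 200

# Both options have the same expected payoff (100), but the sure
# thing has the higher SEU, so this agent declines the fair gamble.
seu_certain = subjective_expected_utility(certain, u)
seu_gamble = subjective_expected_utility(gamble, u)
```

Observing which such choices a respondent accepts or declines is, in effect, what risk-appetite questionnaires attempt to do.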
The now extensive literature of behavioural economics demonstrates many systematic failures by subjects to apply SEU correctly in small-world problems – i.e., SEU is inadequate as a descriptive theory. Those who conduct these studies nevertheless maintain the normative value of SEU and therefore describe deviations from it as ‘biases’. Their proposed solution is to educate the respondents better in the precepts of SEU.
SEU downplays the dynamic elements in choice under uncertainty. Essentially, agents are assumed to make once-and-for-all decisions reflecting future choices as well as immediate ones. The unrealistic nature of this assumption is mitigated, but not much, by the device of ‘contingent commodities’ – future decisions may be contingent on unknown but pre-defined outcomes of future events. However, the computational demands of such optimisation are plainly considerable – and agents might adopt a range of possible simplifications to reach decisions. The LML Danish experiments appear to be intended to identify plausible simplifications and to find one that provides a better approximation to the solutions agents actually reach in these ‘small-world’ problems.
Even in ‘small worlds’, many further problems arise. For example, agents are unlikely to be indifferent to intermediate states on a dynamic path. And in the context of portfolio optimisation, there may be no finite time horizon or none that is identified.
But the more significant problem is that we do not live in small worlds. Agents do not have, and cannot have, even a small fraction of the information needed to make the relevant calculations implied by these models. The axiom of completeness, far from being required by rationality, implies irrationality in a world where information is imperfect and unequally distributed. Rational individuals do not hold – or, if they do hold them, sensibly do not act on – prior probabilities when they have little knowledge and recognise that others may have different, and possibly superior, knowledge. Most rational people do not visit betting shops or casinos, and most of those who do visit them lose money. Most rational people are rightly wary of financial markets – the venues which provide the closest approximation to the ‘small worlds’ of SEU. We term this the Guys and Dolls problem, after the words of Damon Runyon which Marlon Brando memorably delivered in the film of that name: “Son,” the old guy says, “no matter how far you travel, or how smart you get, always remember this: Some day, somewhere,” he says, “a guy is going to come to you and show you a nice brand-new deck of cards on which the seal is never broken, and this guy is going to offer to bet you that the jack of spades will jump out of this deck and squirt cider in your ear. But, son,” the old guy says, “do not bet him, for as sure as you do you are going to get an ear full of cider.”
In ‘large worlds’, problems are typically ill-defined and unique, as were those faced by Obama and Goldman Sachs. As we write, airline executives are anxious to know when operations will return towards normality, and public health experts are uncertain how new variants of the covid virus will develop. The political scientist Philip Tetlock, whose ‘good judgement’ project has produced interesting observations on forecasting political and business events under uncertainty, attempts to define these issues more precisely. He frames the questions: When will the US Transportation Security Administration next screen 2.3 million or more travelers per day for three consecutive days? When will a SARS-CoV-2 variant other than Delta next represent more than 70.0% of total COVID-19 cases in the US? But in achieving such precision and asking for unambiguously falsifiable predictions, he has left behind the vaguer questions to which decision makers really want answers. Should Obama send in the SEALs? How much capital should Goldman Sachs deploy to protect the bank against extreme risk?
In large worlds, it is not necessarily possible to identify good and bad choices, even with hindsight. The attack on bin Laden was successful; the 1979 rescue bid for the Iranian hostages was not. But that does not demonstrate that Obama was a good decision maker – perhaps he was lucky, and Carter simply encountered a long-tail event. Goldman Sachs survived the financial crisis, not because it was adequately capitalised, but because the US government bailed the bank out, a contingency which it could reasonably but not certainly have anticipated.
SEU and variants of it fail to be relevant to choices in ‘large worlds’. The better course is not to derive alternative axiom sets to those of SEU, which could only ever be applicable to a limited set of ‘small-world’ problems, but to study the practices of good and bad decision makers, and to understand how humans have developed capacities to be good decision makers. Many of the deviations from SEU identified in ‘small world’ experiments are not ‘biases’ but adaptive responses to complex, unknown worlds. It is premature to conclude, for example, that ‘regret’, the emotional response which attaches more weight to a loss than to a gain of similar magnitude even if both quantities are small relative to existing wealth, is irrational without an appreciation of the wider context. The child who puts a hand on a hot stove experiences regret and does not do so again. The experience of regret is a necessary part of learning.
Generally, uncertainty is the result of imperfect information – necessarily about the future, often about the present and even about the past. Sometimes, such uncertainty is resolvable. More information can be found which is sufficient to allow a decision to be made with confidence. In some instances, the uncertainty can be described probabilistically, either because there is a known underlying process which can be described mathematically – the gambling games for which probabilistic mathematics was invented – or because underlying stationarity enables probabilities to be deduced from extended data observations – the life insurance and motor accident tables compiled by actuaries.
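The second route – deducing probabilities from extended records of a stationary process – can be sketched in a few lines (the claim rate and sample sizes below are made up for illustration):

```python
import random

def estimated_frequency(p, n, seed=1):
    """Relative frequency of an event of true probability p in
    n independent draws from a stationary Bernoulli process."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p for _ in range(n))
    return hits / n

# Under stationarity, longer records pin the probability down -
# the logic behind actuarial life and motor-accident tables.
true_p = 0.03  # e.g. an annual claim rate (illustrative)
estimates = {n: estimated_frequency(true_p, n)
             for n in (100, 10_000, 1_000_000)}
```

The convergence depends entirely on the underlying process staying the same; where it does not, as discussed below, this actuarial logic breaks down.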

Other decisions are characterised by radical uncertainty. Typically, radical uncertainty arises from three issues, which often operate in combination. The problem may be inherently ambiguous or vague. ‘Vagueness’ sounds like a failure of human communication, but is a general philosophical problem – recognised since antiquity as the Sorites problem. The statement “it will rain” is ‘vague’, and for meteorologists means that at some point in the relevant area for which the forecast applies there will be some precipitation during the relevant time period. The UK Meteorological Office explains: ‘by “any precipitation” we mean at least 0.1mm, which is about the smallest amount that we can measure’. But to be told, even with certainty, that somewhere in the UK tomorrow there will be rain – possibly an amount so small that you would not even notice it – is not a basis for deciding whether to take an umbrella or postpone your daughter’s wedding.
Often it will be impossible to identify all possible outcomes, or to do so precisely. What exactly were the contingencies which the Goldman Sachs risk modellers were seeking to avoid? Or the full range of results that the American intervention in Afghanistan might have hoped to achieve?
And even if the underlying processes are known and capable of being quantitatively described, they do not remain stationary. One important cause is reflexivity – unlike most physical processes, economic processes are influenced by our beliefs. If the Lehman bankruptcy and its consequences in September 2008 had been anticipated, it would not have occurred as it did, because both Lehman and other actors would have behaved differently. The belief that historic data series could be used to predict future mortgage defaults led to the offer of mortgages to borrowers with characteristics different from those borrowers from which the historic series had been derived.
Radical uncertainty is the norm, and the range of problems in business, economics and finance to which ‘small world’ models can be applied is very limited. Models can be used fruitfully in these spheres to illustrate scenarios and judge their robustness to assumptions, but that is not the same as using them to make forecasts or predictions about human behaviour.