You are offered the choice of two envelopes and are told that one contains twice as much money as the other. You make your choice, open envelope one, and find that it contains $100. The referee asks if you would prefer envelope two. Since one envelope contains twice as much money as the other, but you do not know whether you have chosen the larger or the smaller, you know that the second envelope contains either $200 or $50, so you stand to gain $100 or lose $50 by switching from envelope one to envelope two. Do you switch?
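The reasoning that makes switching look attractive is a one-line expected-value calculation. A minimal sketch (the function name is mine, purely for illustration):

```python
# Naive expected-value reasoning behind the paradox: treat the unseen
# envelope as equally likely to hold double or half the amount you found.
def naive_switch_value(amount_found):
    return 0.5 * (2 * amount_found) + 0.5 * (amount_found / 2)

print(naive_switch_value(100))  # 125.0 -- apparently worth switching
```

The trouble, of course, is that this calculation would recommend switching whatever amount you found, and would recommend switching back if you were then offered the first envelope again.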
This is the ‘two envelope problem’ which Mervyn King and I describe in our book Radical Uncertainty. The paradox is that, while it seems to make sense to switch, the implication of switching is that whichever envelope you choose initially, you have made the wrong choice. That conclusion defies common sense, and a considerable literature attempts to define the ‘right’ answer.
The source of the difficulty is that the problem is not sufficiently well defined. Here are some versions of it which seem to give enough background to enable you to make a decision.
Version 1. The referee was given two sums of money, one twice the size of the other, and tossed a coin to decide which one to put in each envelope.
Version 2. The referee made a random drawing of a number on an interval (A, B). He then put that amount of money in one envelope and twice as much in the other.
Version 3. After you’ve opened the envelope, the referee offers to toss a coin: depending on whether it comes up heads or tails, he will either double or halve the amount of money you receive.
If you apply the criterion of maximising the expected value of a gamble, the answer to Version 1 is that it doesn’t matter what you do, since the expected value of each envelope is the same. The solution to Version 2 is that if the amount of money you find in the first envelope is less than ½(A + B) you should switch, and if it is more you should stick with your initial choice. And if you find yourself in Version 3 you should accept the referee’s offer to toss a coin.
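These answers can be checked by simulation. The sketch below is illustrative only: the amounts, the interval (A, B) = (10, 100), the sample sizes, and the assumption that the Version 2 draw is uniform (which the stated solution implicitly requires) are all my own choices, not part of the problem.

```python
import random

def simulate_v1(switch, n=100_000, seed=0):
    """Version 1: a coin toss (here, a shuffle) decides which amount
    goes in which envelope; you open the first one."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        envelopes = [100, 200]
        rng.shuffle(envelopes)
        opened, other = envelopes
        total += other if switch else opened
    return total / n

def simulate_v2(policy, a=10.0, b=100.0, n=100_000, seed=1):
    """Version 2: the referee draws x uniformly on (a, b) and the
    envelopes hold x and 2x; policy(opened) returns True to switch."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.uniform(a, b)
        envelopes = [x, 2 * x]
        rng.shuffle(envelopes)
        opened, other = envelopes
        total += other if policy(opened) else opened
    return total / n

# Version 1: switching changes nothing -- both averages come out ~150.
print(simulate_v1(switch=False), simulate_v1(switch=True))
# Version 2: the threshold rule beats both fixed policies (~88.8 vs ~82.5).
print(simulate_v2(lambda amt: False))                   # always stick
print(simulate_v2(lambda amt: True))                    # always switch
print(simulate_v2(lambda amt: amt < 0.5 * (10 + 100)))  # switch below ½(A+B)
```

In Version 3 the coin really is fair given the $100 in your hand, so the naive calculation 0.5 × $200 + 0.5 × $50 = $125 is for once legitimate, and you should accept the toss.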
So which is the real problem, and the correct solution? You don’t have enough information to know. So the answer to the question ‘what should you do if faced with the two envelope problem’ is ‘I don’t know’. Welcome to the world of radical uncertainty.
A mathematician will, of course, quite properly insist that you define the problem exactly so that the unique solution can be established. But the practical economist does not have this option. That is the nature of radical uncertainty.
Many theoretical economists will make all necessary additional assumptions in order to arrive at a ‘rigorous’ answer. Many young MBAs in consulting firms will make up whatever numerical information is needed to ‘predict’ the outcome of your choice. It is not clear what is learnt from either of these exercises.
So what should the decision-maker do? Begin by asking ‘what is going on here?’ The trouble with the present case is that nothing is going on here. The problem is devoid of any context, which distinguishes it from any situation likely to be faced in the real world. Often, these kinds of problem are set in the laboratories of behavioural economists, who have normative models in their minds and who hope to show that individuals fail to conform to their models of ‘rationality’. The shrewd participant has the options of giving the answer the professor wants to hear – and helping him or her publish another paper describing the ‘biases’ displayed by supposedly ‘rational’ individuals; or of giving what the participant thinks is the right answer; or of giving the answer which the participant thinks the professor thinks is the right answer. It is a dilemma which clever students have faced since the time of the ancient Greeks.
If someone was confronted with the two envelope problem in real life, they would look for clues from the context. Who has organised the event and what might be their motives? Does the referee seem friendly or hostile? Does he seem to know what is in each envelope? Is one envelope fatter than the other? They would draw on their wider knowledge of the world and the visual clues that have such an important influence on our actions. They would very sensibly ask ‘what is going on here?’
Building a model which is a mathematically tractable approximation to an ill-defined problem in the real world is often a useful strategy. But it is a useful strategy because of the insight it may give you into the real problem, not as a way of resolving the real problem.
Version 1 resembles the philosophical problem often labelled ‘Buridan’s ass’. A thirsty and hungry donkey is placed equidistant between a trough of water and a bale of hay, and dies of thirst and hunger because there is no rational reason to choose one option rather than the other. Which decision you make may matter less than making some decision. ‘Elliott’, a patient of the neuroscientist Antonio Damasio, had suffered brain damage which destroyed his capacity for emotional response. As a result he would spend very large amounts of time debating trivial issues, such as the time of his next appointment.
Ulysses Grant, the most successful of Civil War generals and future US President, responded to the question ‘Are you sure you are right?’: ‘No, I am not, but in war anything is better than indecision. We must decide. If I am wrong, we shall soon find it out and can do the other thing. But not to decide wastes both time and money, and may ruin everything.’ Analogous issues are reported in modern electronics, where continuous variables must be translated into binary ones or vice versa and only microseconds are available for the choice. Almost everyone has experienced the committee which deliberates endlessly because there always might be a better option than the proposal under discussion.
So Version 2, in which the choice is whether to take up a firm offer or to reject it in favour of an alternative that might or might not be better, is a common issue, one with which decision-makers are very often faced. Which college to go to, which house to buy, which potential partner to marry? A strategy which considers the range of possible outcomes and accepts the first offer that meets a predetermined threshold is often a good way of handling these issues. In the real world, satisficing – looking for a solution that is good enough rather than the best possible – is generally superior to optimisation in conditions of radical uncertainty. The model developed here for Version 2 is in the family of the threshold strategies developed by game theorists and the stopping rule, or secretary problem, discussed in decision theory.
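The stopping-rule family mentioned above can be illustrated by the classic secretary problem: reject the first n/e candidates out of hand, then accept the first one better than everything seen so far. The sketch below uses a toy setup of my own (candidate qualities are simply their ranks), not anything from the original discussion.

```python
import math
import random

def secretary_trial(n, rng):
    """One round: candidates of quality 0..n-1 (n-1 is best) arrive in
    random order; reject the first n/e, then take the first candidate
    better than everything seen so far."""
    order = list(range(n))
    rng.shuffle(order)
    cutoff = int(n / math.e)
    best_seen = max(order[:cutoff], default=-1)
    for rank in order[cutoff:]:
        if rank > best_seen:
            return rank == n - 1   # stopped here: did we get the best?
    return False                   # the best was in the rejected sample

def success_rate(n=50, trials=100_000, seed=3):
    rng = random.Random(seed)
    return sum(secretary_trial(n, rng) for _ in range(trials)) / trials

print(success_rate())  # close to the theoretical 1/e, about 0.37
```

The point of the model is the insight, not the number: committing in advance to a threshold, and accepting the first option that clears it, does remarkably well against strategies that keep waiting for something better.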
Version 3 raises the question of ‘when is it right to maximise the expected value of a gamble?’ If you are faced with repeated problems of a similar kind – if you go to the casino to play roulette every night – then in the long run you will be better off if you perform expected value calculations. However, people who perform expected value calculations are almost certainly not people who go to the casino very often. Casinos and betting shops rely on the patronage of those who don’t.
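The long-run logic is easy to see in a sketch. A European roulette even-money bet wins with probability 18/37, so each unit staked has an expected value of 18/37 − 19/37 = −1/37, about −2.7p in the pound; a single night can go either way, but repetition grinds towards the expectation. The bet and sample sizes below are illustrative choices of mine.

```python
import random

def average_return(n_bets, seed=42):
    """Average profit per unit staked on an even-money roulette bet,
    which wins with probability 18/37 on a European wheel."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_bets):
        total += 1 if rng.random() < 18 / 37 else -1
    return total / n_bets

print(average_return(10))         # a short session: the sign is luck
print(average_return(1_000_000))  # the long run converges on about -0.027
```

This is why the expected-value criterion suits the casino, which faces thousands of near-identical gambles a night, far better than it suits the individual punter, who does not.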
Many problems are essentially unique. When should the coronavirus lockdown be lifted? In a one-off decision like that maximising expected value may or may not be the right thing to do; it is certainly wise also to ask the question ‘how would I feel if it went wrong?’ Minimising regret is a natural and often sensible approach. Maximising expected value seems appropriate if you are faced with a series of problems which are ‘similar’ to each other, but what does ‘similar’ mean? The regret felt as a result of a bad investment may be offset by the joy of making a good one, but the regret at having chosen the wrong college may not be offset by the joy of finding the right partner – the two emotions are simply not additive.
The value of economic models is to be found not in their ability to make specific predictions or recommendations, because any such model is likely to be at best a very rough approximation to the world of radical uncertainty, but in yielding insights which can be applied in a range of practical problems. And only a few economic models actually do. Models of the two envelope problem do yield such insights, but these are not the ones that are apparent at first sight. The two envelope problem is intriguing and illuminating, but not because it gives an answer to the question ‘what should you do if confronted by two envelopes?’