# The Bayesian meets Monty Hall

The Monty Hall Problem is a probability-theory classic. Here’s how it goes.

You are on Monty’s game show, trying to find a car hidden behind one of three doors. Monty knows which one; you don’t. He asks you to choose a door, and you do. But he doesn’t open your door. Instead, he goes to the other two doors and, per the rules of the game, opens one that he knows does not contain the car. After making this revelation, he asks if you want to stay with your first choice or switch to the other remaining door.

Should you stay or switch?

The answer – spoiler alert! – is that you should switch: it doubles your chances of finding the car.
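Before diving into the theory, we can sanity-check this claim empirically. Here is a quick Monte Carlo sketch (the function name `play` and the trial count are my own choices, not part of the original problem statement):

```python
import random

def play(switch, trials=100_000):
    """Simulate many rounds of the game and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # door hiding the car
        choice = random.randrange(3)   # your first pick
        # Monty opens a door that is neither your pick nor the car
        opened = random.choice(
            [d for d in range(3) if d != choice and d != car]
        )
        if switch:
            # switch to the one remaining closed door
            choice = next(
                d for d in range(3) if d != choice and d != opened
            )
        wins += (choice == car)
    return wins / trials

print(play(switch=False))  # ≈ 1/3
print(play(switch=True))   # ≈ 2/3
```

Running this, staying wins about a third of the time and switching about two thirds, matching the claim. Now let's see *why*.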

This conclusion, however, is hard for many people to believe. According to Wikipedia, “when the Monty Hall problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine claiming the published solution – switch! – was wrong.” Because the problem seems to defy intuition, it has been the subject of much study.

But most explanations I’ve seen are unsatisfying because, at one point or another, they require an intuitive leap that many people can’t make. The reader either has the necessary spark of realization at the right moment, or is left behind.

I’m going to try to offer a more satisfying explanation by working through the problem using nothing but Bayesian probability theory and showing every step. Using this method, there are no leaps to be made: everything follows naturally from the theory itself and from the knowledge we have of the problem.

Ready? Let’s get started!

### The Bayesian approach

To begin, let’s define some propositions to formalize our discussion of the problem:

• C=i: the car is behind door i (for i = 1, 2, or 3)
• Y=i: you first choose door i
• M=i: Monty reveals that the car is not behind door i
• K: Our prior knowledge about life, the universe, and everything, including the rules for the television game show and our understanding of Monty’s incentives

(For a refresher on propositions and probabilities and our notation, refer to our earlier discussion, More on the evidence of a single coin toss.)

Now let’s formalize our knowledge about these propositions.

### Representing our initial knowledge

To represent our knowledge, we’ll use probability equations. First, given our prior knowledge K, we have no reason to believe the car is more likely to be hidden behind any of the three doors; therefore, we must consider each C=i proposition equally likely:

P(C=1|K) = P(C=2|K) = P(C=3|K).

Further, the car must be hidden behind one of the three doors:

P((C=1 ∨ C=2 ∨ C=3)|K) = 1,

and by the sum rule for mutually exclusive propositions:

P(C=1|K) + P(C=2|K) + P(C=3|K) = 1.

Therefore, solving for the individual probabilities, we must assign 1/3 to each:

P(C=1|K) = P(C=2|K) = P(C=3|K) = 1/3.
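The reasoning above can be written as a tiny computation: symmetry gives every door the same unknown probability p, and normalization forces 3p = 1 (the variable names here are illustrative):

```python
doors = [1, 2, 3]
# Symmetry: one unknown p shared by every door; normalization: 3p = 1
p = 1 / len(doors)
prior = {d: p for d in doors}
assert abs(sum(prior.values()) - 1) < 1e-12  # probabilities sum to 1
print(prior)  # each door gets 1/3
```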

### Adjusting our knowledge in light of you choosing a door

Now, let’s say you choose a door. We’ll call it door 1, which we are free to do because, so far, we haven’t made any assignments between door numbers and physical doors. So we make the first assignment now: the door you chose is door 1.

Therefore, we now know Y=1. Let’s adjust our probabilities in light of this new evidence.

The quick way is to realize that, since you don’t know anything about where the car is hidden, knowing which door you chose cannot give us any new information about which door hides the car. Therefore, this new evidence shouldn’t change the corresponding probabilities:

P(C=1|Y=1∧K) = P(C=2|Y=1∧K) = P(C=3|Y=1∧K) = 1/3.

But, if we want to do the math, we see that the Bayesian probability adjustments give the same result. Recall the formula for updating a probability in light of new evidence:

(new plausibility) = (old plausibility) × (evidence adjustment)

For door 1, then,

P(C=1|Y=1∧K)
= (old plausibility) × (evidence adjustment)
= P(C=1|K) × [ P(Y=1|C=1∧K) / P(Y=1|K) ]
= P(C=1|K) × [ P(Y=1|K) / P(Y=1|K) ] { since your choice can’t depend on where the car is }
= P(C=1|K) × 1
= P(C=1|K)
= 1/3.

The calculations for the other two doors work out identically.
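The same update, done numerically: because your pick carries no information about the car, the likelihood P(Y=1|C=i∧K) is the same number q for every i, and q cancels out of the posterior (the value chosen for q below is arbitrary and illustrative):

```python
prior = {1: 1/3, 2: 1/3, 3: 1/3}
# Likelihood P(Y=1 | C=i, K) is the same q for every door i,
# because your choice doesn't depend on the car's location.
q = 1.0
evidence = sum(q * prior[i] for i in prior)  # P(Y=1|K) = q by total probability
posterior = {i: prior[i] * q / evidence for i in prior}
print(posterior)  # unchanged: 1/3 for each door
```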

### Adjusting our knowledge in light of Monty’s revelation

Now Monty opens a door to show you that it wasn’t hiding the car. We’ll call it door 2. Thus, we now know M=2. Let’s update our beliefs in light of this new evidence.

First, in light of Monty’s revelation, we know that the car is not behind door 2:

P(C=2|M=2∧Y=1∧K) = 0.

But, for the remaining doors, 1 and 3, let’s turn to the adjustment formula for guidance. Before doing any calculations, however, let’s write the formulas for both doors:

P(C=1|M=2∧Y=1∧K) = P(C=1|Y=1∧K) × [ P(M=2|C=1∧Y=1∧K) / P(M=2|Y=1∧K) ]

P(C=3|M=2∧Y=1∧K) = P(C=3|Y=1∧K) × [ P(M=2|C=3∧Y=1∧K) / P(M=2|Y=1∧K) ]

Note that the prior probabilities are equal: P(C=1|Y=1∧K) = P(C=3|Y=1∧K) = 1/3. Also, the evidence-adjustment factors have the same denominator, P(M=2|Y=1∧K). Thus, we can cancel these terms if we divide the second equation by the first, to arrive at the ratio of our adjusted probabilities. This ratio will tell us how strongly we should prefer door 3 to door 1.

After canceling those terms, the calculations are straightforward, in light of our knowledge of the game. We take particular advantage of the knowledge that, after you make your choice, Monty must reveal to you that one of the remaining doors does not hide the car. That is, he must open one of the other two doors, but if one of them is hiding the car, he cannot open that one.

P(C=3|M=2∧Y=1∧K) / P(C=1|M=2∧Y=1∧K)
= P(M=2|C=3∧Y=1∧K) / P(M=2|C=1∧Y=1∧K) { by the adjustment formula and canceling }
= 1 / P(M=2|C=1∧Y=1∧K) { given K, we know Monty must open 2 if C=3 ∧ Y=1 }
= 1 / (1/2) { given K, we know of no reason for Monty to prefer 2 to 3 if C=1 ∧ Y=1 }
= 2.
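The whole update can be checked in a few lines. The two likelihoods come straight from the rules of the game: if the car is behind door 3, Monty is forced to open door 2; if it's behind door 1 (your pick), he opens door 2 or door 3 with no preference (the dictionary layout here is just one way to organize the arithmetic):

```python
prior = {1: 1/3, 3: 1/3}        # door 2 is ruled out by Monty's reveal
likelihood = {1: 1/2, 3: 1.0}   # P(M=2 | C=i, Y=1, K)

# Bayes: posterior ∝ prior × likelihood, then normalize
unnorm = {d: prior[d] * likelihood[d] for d in prior}
total = sum(unnorm.values())
posterior = {d: unnorm[d] / total for d in prior}

print(posterior[3] / posterior[1])  # 2.0
print(posterior)                    # door 1: 1/3, door 3: 2/3
```

Normalizing also gives the familiar absolute probabilities: 1/3 for staying, 2/3 for switching.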

Therefore, our final belief that the car is behind door 3 should be twice as strong as our belief that it is behind door 1. Switch!