There is a classic mathematical nuisance known as the Monty Hall problem which can be hard to wrap the mind around. It is named after Monty Hall, the original host of the game show “Let’s Make a Deal,” where a contestant was allowed to choose one of three doors, knowing that a valuable prize waited behind one, and worthless prizes behind the others.

On the show, once the contestant made their choice, Monty Hall opened one of the other doors, revealing one of the worthless prizes. He would then open the contestant’s chosen door to reveal whether they picked correctly. The Monty Hall problem asks, what if the contestant were allowed to change her door choice after seeing the worthless prize? Would it be to her advantage to switch doors? In other words, if the contestant guesses that the prize, say a new car, lies behind door #1, and Monty opens door #2 to reveal a goat, is the car more likely to be behind door #1, or door #3?

At this point, the imperfect wad of meat called the “brain” fires up its neurons, and usually informs its owner that revealing the contents of one of the other doors has simply changed the contestant’s odds from one-in-three to fifty-fifty. But that isn’t the case. It has been mathematically proven that if the contestant were allowed to switch her door to #3 after seeing the goat behind #2, she’d be twice as likely to win.

How can this be? It isn’t intuitive, but it’s true. Great mathematicians have puzzled over this, as well as great scientists at Los Alamos and professors at MIT.

The best way to look at it is to realize that the door-opening is essentially misdirection in this problem. When the contestant selects a door, she is dividing the doors into two sets: A) the door she DID select, and B) the doors she did NOT select. She already knows that at least one of the doors in set B is not a winner, and Monty, who knows where the prize is, can always open such a door. The fact that he does so adds no information about her own door, so it doesn’t improve the one-in-three chance she had at the initial picking. The remaining two-in-three chance belongs to set B, and after the reveal it rests entirely on the one unopened door there, which is why switching doubles her odds of winning.
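The two-in-three claim is easy to check empirically. Below is a minimal Monte Carlo sketch in Python (an illustration of my own, not the article’s simulator), assuming the standard rules: Monty knows where the prize is, always opens a goat door the contestant did not pick, and always offers the switch. The helper names (`play_round`, `estimate_win_rate`) are hypothetical.

```python
import random

def play_round(switch, n_doors=3):
    """Play one round; return True if the contestant ends up with the prize."""
    prize = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    # Monty, who knows where the prize is, opens every other goat door,
    # leaving exactly one unopened door besides the contestant's pick.
    if choice == prize:
        goat_doors = [d for d in range(n_doors) if d != choice]
        leftover = random.choice(goat_doors)  # Monty leaves one random goat door closed
    else:
        leftover = prize                      # the prize door is the only one he can't open
    return (leftover if switch else choice) == prize

def estimate_win_rate(switch, n_doors=3, trials=100_000):
    wins = sum(play_round(switch, n_doors) for _ in range(trials))
    return wins / trials

print("stay:  ", estimate_win_rate(switch=False))  # ~0.33
print("switch:", estimate_win_rate(switch=True))   # ~0.67
```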


In explaining the effect, it helps to increase the scale of the question. Imagine that there are 100 doors to choose from instead of three, but still just one prize. When hypothetical contestant Contessa chooses her door, she effectively divides the doors into Set A that contains her one door (1% chance of including the prize), and Set B that contains 99 doors (99% chance). Our imaginary Monty then proceeds to reveal goats behind 98 of the 99 doors in Set B, skipping over one seemingly random door. The odds that Contessa picked the winning door on her first try remain at one-in-a-hundred, so when asked if she wants to keep her original door or switch to that one other unopened door, the better answer is more obvious. Monty is essentially asking, “Do you want to keep your door and its chance of winning, or take all 99 of the other doors and their chance of winning?”
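The same hypothetical sketch from above can be re-run at this scale by passing `n_doors=100`; analytically, switching wins exactly when the first pick was wrong, which happens with probability (n - 1)/n.

```python
# Reusing estimate_win_rate from the sketch above, now with 100 doors:
print("stay:  ", estimate_win_rate(switch=False, n_doors=100))  # ~0.01
print("switch:", estimate_win_rate(switch=True, n_doors=100))   # ~0.99

# Analytically, switching wins whenever the first pick was wrong:
n = 100
print((n - 1) / n)  # 0.99
```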

The problem is that the human brain is hard-wired to seek out patterns, discarding much of the non-patterned data. This system usually works very well in keeping unimportant information from overwhelming the mind, but occasionally too much information ends up on the cutting-room floor. There is another problem called the Gambler’s Fallacy which also illustrates the mind’s lackluster performance in gauging probability: Imagine that you flip an ordinary coin 99 times, and amazingly, it comes up heads every time. What are the odds that it will come up heads again on the 100th flip? Most people would say that it’s very unlikely, but since each flip is independent of the ones before it, the odds are exactly 50% (overlooking the negligible chance that the coin will land on its edge).
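That 50% figure is just independence at work. As a small sanity-check sketch (assuming a fair coin with independent flips, and using Python’s exact fractions), the conditional probability of a 100th head given 99 heads already seen works out to one half:

```python
from fractions import Fraction

p = Fraction(1, 2)      # probability of heads on any single fair flip
p_99_heads = p ** 99    # probability of opening with 99 straight heads
p_100_heads = p ** 100  # probability of 100 straight heads

# P(100th head | first 99 were heads) = P(100 heads) / P(99 heads)
print(p_100_heads / p_99_heads)  # 1/2 -- the streak has no bearing on the next flip
```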

We think otherwise because our pattern-oriented brains see the 100-flip scenario as extremely rare, which it is: there is only a one in 1,267,650,600,228,229,401,496,703,205,376 (that is, one in 2 to the 100th power) chance of it happening. But if you sit down and write out any specific sequence of 100 heads and tails, that sequence has exactly the same odds of appearing as 100 straight heads. The typical human brain just doesn’t assign a random-looking sequence the same significance as a clear pattern such as 100 heads in a row, so the importance of the pattern is artificially inflated.
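The equal-odds claim can be checked directly; here is a short sketch (again assuming a fair coin) comparing the probability of the all-heads sequence against an arbitrary random-looking sequence of the same length.

```python
from fractions import Fraction
import random

n_flips = 100
print(2 ** n_flips)  # 1267650600228229401496703205376

# Probability of the specific all-heads sequence:
p_all_heads = Fraction(1, 2) ** n_flips

# Probability of one arbitrary, random-looking sequence of the same length:
sequence = [random.choice("HT") for _ in range(n_flips)]
p_sequence = Fraction(1, 2) ** len(sequence)

print(p_all_heads == p_sequence)  # True: every specific 100-flip sequence is equally unlikely
```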

The human brain is a great pattern recognition engine, but sometimes that causes it to overlook the subtleties of numbers.
