Saturday, November 1, 2014

"Ebolanoia" and the Fallacy of Composition

In "The Psychology of Irrational Fear," Olga Khazan ponders why Americans are afraid of Ebola when the flu presents a more immediate danger. She offers several reasons for "Ebolanoia" (e.g., that we're afraid of the unfamiliar), and like most pundits, she takes it for granted that fear of Ebola is irrational on the grounds that the chance of an American contracting Ebola is very near zero. Let's formalize the idea behind this thinking:
It is irrational for S to fear P when the probability that P will harm S is nearly zero.
Although this principle is true when S is obsessively concerned about being harmed by P, it is not simpliciter irrational for S to fear P even when P presents no immediate danger to S. For example, I'm not concerned about being killed by a tornado. However, I am concerned that someone somewhere will be killed by a tornado. Indeed, an average of about 60 people in the United States are killed by tornadoes each year. That's why we have an elaborate warning system in place. Likewise, I'm not afraid of being killed by terrorists, but I am afraid of terrorists attacking someone somewhere in the United States.

What's gone wrong with the above principle is that it commits the fallacy of composition. This fallacy occurs when the properties of individual parts are assumed to be properties of the whole. For example, if every member of a club is 20 years old, it doesn't follow that the club itself is 20 years old. Likewise, the chance that any single American will be killed by a tornado is very low, but it doesn't follow that the chance of the United States experiencing a deadly tornado is very low. Indeed, it is almost certain that the United States will experience a deadly tornado in the future.
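The aggregate point can be made precise with a little probability. If each of n individuals independently faces a tiny chance p of being harmed, the chance that at least one of them is harmed is 1 - (1 - p)^n, which approaches certainty as n grows. Here is a minimal sketch; the per-person risk below is purely hypothetical, not a real Ebola or tornado statistic:

```python
# Chance that at least one person in a population of n is harmed,
# assuming each person independently faces a tiny per-person probability p.
def prob_at_least_one(p, n):
    return 1 - (1 - p) ** n

p = 1e-7           # hypothetical per-person annual risk (illustrative only)
n = 320_000_000    # rough U.S. population

# Near-zero for any individual, near-certain for the country as a whole.
print(prob_at_least_one(p, n))
```

So a probability that rounds to zero for any one person is effectively 1 in the aggregate, which is why "the chance that P harms S is nearly zero" does not compose into "the chance that P harms someone is nearly zero."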

To be sure, Ebola does not present the same threat to the United States that tornadoes do--and probably never will--but given our rather laissez-faire approach to quarantine and the CDC's ineffective response to Ebola in Dallas, it is rational to be concerned about Ebola spreading. Initially, the conventional wisdom held that Ebola would not develop in the United States. Then two nurses contracted Ebola from treating an Ebola patient, prompting CDC director Thomas Frieden to admit that "Stopping Ebola is difficult." It's no wonder, then, that many Americans are concerned--not hysterical--about the presence of Ebola in the United States.

Tuesday, August 26, 2014

Richard Dawkins and the Morality of Aborting a Fetus with Down Syndrome

Richard Dawkins recently tweeted the following to a woman uncertain about what to do if she carried a fetus with Down Syndrome:
Abort it and try again. It would be immoral to bring it into the world if you have the choice.
Not surprisingly, both the bluntness and content of the tweet fomented an angry reaction. Dawkins responded in turn by softening the bluntness of the tweet:
Obviously the choice would be yours. For what it’s worth, my own choice would be to abort the Down fetus and, assuming you want a baby at all, try again. Given a free choice of having an early abortion or deliberately bringing a Down child into the world, I think the moral and sensible choice would be to abort.
Dawkins goes on to couch his opinion in the language of moral subjectivism:
I personally would go further and say that, if your morality is based, as mine is, on a desire to increase the sum of happiness and reduce suffering, the decision to deliberately give birth to a Down baby, when you have the choice to abort it early in the pregnancy, might actually be immoral from the point of view of the child’s own welfare. I agree that [this] personal opinion is contentious and needs to be argued further, possibly to be withdrawn.
This poses a difficulty: is he really a moral subjectivist (i.e. someone who asserts "what's right for me may not be right for you"), or is he making a standard utilitarian argument for the morality of aborting a fetus with Down Syndrome?

Given that he accepts a morality based on maximizing happiness and reducing suffering, I take him to be making a standard utilitarian argument:
  1. We ought to minimize suffering when we can.
  2. Aborting a fetus with Down Syndrome minimizes suffering.
  3. Therefore, we ought to abort a fetus with DS.
The argument is logically valid but unsound, because both premises are false. The first premise is open to standard counterexamples to utilitarianism. Minimizing suffering could entail all sorts of human rights violations, e.g., sentencing an innocent man to jail to prevent a riot, killing a few innocents in order to alleviate the suffering of many, and so on. The second premise is factually dubious. It is not at all clear that people with DS suffer more than any other demographic. People born into broken, abusive homes experience difficulties well into adulthood. People born with non-DS handicaps also face many challenges in life. It is arbitrary to single out people with DS.

I don't find the utilitarian argument convincing, but I also want to make a further claim: the fact that a fetus has DS fails as a moral justification for aborting it. Here's my argument:
  1. Abortion is either morally permissible or morally impermissible.
  2. If abortion is morally impermissible, then (1) a fetus having DS no more justifies killing the fetus than it does killing an infant with DS.
  3. If abortion is morally permissible, then (2) a fetus having DS is a pragmatic justification for an abortion but not a moral one.
  4. Therefore, either (1) or (2) is the case.
  5. If either (1) or (2) is the case, a fetus having DS fails as a moral justification for aborting it.
  6. Therefore, a fetus having DS fails as a moral justification for aborting it.
The reasoning behind the second premise is that if abortion is morally impermissible, then a fetus having DS no more justifies killing it than an infant having DS justifies killing the infant. Put generally, S's having DS is not a sufficient justification for killing S. On the other hand, if it is morally permissible to kill a fetus (for example, on the grounds that it, unlike an infant, is not a person), then a fetus having DS simply raises pragmatic concerns about whether the fetus should be brought to term or not.

In either case, a fetus having DS fails as a moral justification for abortion. In the first case, it fails to be a sufficient justification for abortion. In the second case, it fails to be a moral justification at all.
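The argument above is an instance of a valid propositional form (a constructive dilemma), and its validity can be checked mechanically by brute-forcing the truth table. The sketch below checks only the form, not the truth of the premises; the variable names are my abbreviations:

```python
from itertools import product

# P  = abortion is morally permissible
# R1 = consequent (1): DS no more justifies killing a fetus than an infant
# R2 = consequent (2): DS is a pragmatic, not a moral, justification
# C  = a fetus having DS fails as a moral justification for aborting it
def counterexample_exists():
    for P, R1, R2, C in product([True, False], repeat=4):
        premises = (
            P or not P,              # 1. permissible or impermissible
            P or R1,                 # 2. if impermissible, then (1)
            (not P) or R2,           # 3. if permissible, then (2)
            (not (R1 or R2)) or C,   # 5. if (1) or (2), then C
        )
        if all(premises) and not C:  # all premises true, conclusion false?
            return True
    return False

print(counterexample_exists())  # False: no counterexample, so the form is valid
```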

Wednesday, July 16, 2014

The Argumentum ad Futuris, or Appeal to the Future

Although the argumentum ad hominem is often discussed in logic textbooks, the argumentum ad futuris, or appeal to the future, is seldom mentioned. Indeed, I can find only one brief discussion of the ad futuris, and that discussion treats the argument only in its fallacious mode. Presumably, however, as with the ad hominem, not all instances of the ad futuris are fallacious. In this post, I want to discuss the basic form of the ad futuris, look at some specific examples, and then try to formalize the argument and distinguish between its fallacious and non-fallacious versions.

What is the ad futuris?

In its basic form, the ad futuris is an appeal to potential future evidence, culminating in the claim that this evidence will both vindicate the truth of some proposition p and show the falsity of p's negation. For example:

Pat: There is no evidence of extraterrestrial life, so I suspect that there isn't any.

Jamie: Oh but surely we'll find evidence of extraterrestrial life in the future. The universe is so large that there is bound to be life out there, and we just need time to find it--or for it to find us.

Jamie’s implicit claim is that the truth of Pat’s proposition is dubious given the likelihood of future evidence against it. If Jamie is right, then we should not assent to Pat’s claim that there is no extraterrestrial life. A slightly more complex version of the ad futuris occurs when a theory makes a claim that isn't currently supported by the evidence:

Pat: Your theory makes this particular claim, but there is no evidence for it.

Jamie: True, but there isn’t a lot of data about that particular issue, and my theory still has more explanatory power than competing theories. I suspect that future data will provide evidence for this particular claim.

Jamie's point here is that it's rational to believe that future evidence will vindicate the claim in question and that we should still accept the theory that generates the as-yet-unsupported claim.

Specific Examples of the ad Futuris

An example of Jamie's reasoning occurs in Origin of Species:

That our palaeontological collections are very imperfect, is admitted by every one. . . . Only a small portion of the surface of the earth has been geologically explored, and no part with sufficient care, as the important discoveries made every year in Europe prove.

In addressing the paucity of transitional forms in contemporary fossil collections, Darwin points to their incompleteness and notes how little of the earth had been explored. Presumably, collections would improve and provide more evidence for Darwin's views on transitional forms.

Another example of the ad futuris occurs in response to John Searle's Chinese room argument, which argues that computers do not and cannot think. One response to Searle is that computers in the future will be able to think. For example, Ray Kurzweil writes:

[A]s for computers of the future that have the same complexity, depth, subtlety, and capabilities as humans, I don’t think we can rule out the possibility that they are conscious.

Moreover:

[H]umans are unable to directly transfer their knowledge to other persons. Computers, however, can share their knowledge very quickly. . . . Thus future machines will be able to combine human intellectual and creative strengths with machine strengths. When one machine learns a skill or gains an insight, it will be able to share that knowledge instantly with billions of other machines.

Kurzweil argues here that advances in computing will show that Searle's claim that computers cannot think is false and that it's reasonable to believe that future computers will have properties far more advanced than current computers.

The Form of the ad futuris

Because an appeal to future evidence cannot guarantee the truth of the claim in dispute, the ad futuris is an inductive argument. If the premises are true, then the conclusion is more likely to be true than not. Here's one way to formalize the ad futuris:

  1. The claim that p is the case is not currently supported by evidence.
  2. It is highly likely that evidence supporting that p is the case will be found in the foreseeable future.
  3. Therefore, p is the case.

The second premise does most of the argument's work. The phrase "highly likely" or its equivalent is necessary if the argument is to be inductively strong, that is, it must be more likely than not that future evidence will support that p is the case. The crucial feature of a non-fallacious ad futuris is that it provides good reasons for thinking that future evidence in favor of p will emerge. Thus, a fallacious version of the ad futuris merely appeals to future evidence without explaining why it is likely that future evidence will show that p is the case.

The phrase "foreseeable future" or its equivalent is important because an appeal to the distant future undermines the claim that we currently have good reasons for thinking that future evidence in favor of p will emerge. An appeal to a distant future merely postpones doubts about the truth of a claim based on some hypothetical time that supposedly will vindicate the claim. Hence, an ad futuris based on an appeal to a distant future is fallacious.

An ad futuris is also fallacious if current evidence tends to cast doubt on the claim that p is the case, because opposing evidence decreases the likelihood that future evidence will support p. Again, a good ad futuris provides reasons for thinking that the absence of positive evidence is temporary. Appealing to future evidence in the face of contrary evidence is special pleading.

Finally, a more sophisticated version of the ad futuris can incorporate non-evidential reasons for accepting that p is the case. For example, if a theory generates claims X, Y, and Z such that X and Y are well-supported but Z is not, one can appeal to the explanatory merits of the theory as well as the strong likelihood of future evidence for Z. In other words, inference to the best explanation can be incorporated into an ad futuris:

  1. The claim that p is the case is not currently supported by evidence.
  2. It is highly likely that evidence supporting that p is the case will be found in the foreseeable future.
  3. The theory that generates p is otherwise well-supported and is superior to its rivals.
  4. Therefore, p is the case.

In this version of the ad futuris, the likelihood that p is the case is bolstered by both foreseeable evidence and the explanatory superiority of the theory that generates it.

Further Reading

"Chinese Room Argument." Internet Encyclopedia of Philosophy.

Norman Geisler. Come Let Us Reason: An Introduction to Logical Thinking.

Ray Kurzweil. Are We Spiritual Machines? Chapter 6: Locked in his Chinese Room: Response to John Searle.

Wednesday, June 25, 2014

Five Reasons Why It's Dumb to Hate Philosophy

Neil deGrasse Tyson, the famous astrophysicist and science popularizer, recently made some disparaging remarks about philosophy. His remarks prompted several responses, one of the best coming from Massimo Pigliucci, a philosopher friend of Tyson's. Of course, philosophy has always had its detractors among intellectuals and non-intellectuals alike. Indeed, misology, the hatred of logical analysis and argumentation, dates back to Socrates' contemporary critics. My aim here is to defend a blunt claim: it's dumb to hate philosophy. In current Internet fashion, I want to offer five reasons why it's dumb to hate philosophy.

  1. Philosophy is the foundation of rational thought. Sound melodramatic? It's not. If you were tasked with writing a book about the history of rational thought, you would begin with philosophy and discuss much of its history. Philosophy began when certain ancient thinkers started to investigate how the world works without relying on tradition, myth, or authority. Instead, these thinkers relied on logic and observation to draw conclusions about the world. They were the first scientists. After them, we come to the big three in ancient philosophy: Socrates, Plato, and Aristotle. Socrates and Plato would develop the dialectic method, in which disputants seek the truth about a question through rational analysis, and Aristotle would write the first formal logic textbook.

  2. Speaking of logic, philosophy gave the world logic. If you take a course in logic, a philosophy teacher will teach it. Why is this a big deal? Logic is the study of arguments and rational inference. No matter how intelligent you are and how much empirical evidence you have, if you can't formulate a sound argument, you can't provide support for your position. Likewise, if you can't spot bad arguments, you're at the mercy of anyone who seems to present a semi-coherent case for his or her position.

  3. Statements about the nature and scope of science are philosophical, not scientific. For example, ponder this question: what is the difference between a good scientific theory and a bad one? That crucial question cannot be answered by science; it can only be answered by philosophical inquiry, that is, by the logical analysis of criteria that a scientific theory should meet before we can say that it is a good theory--or even a scientific theory at all. Scientists who engage in such analysis are doing philosophy, not science.

  4. Political philosophy shapes entire countries. Most people don't read Locke, Hegel, Marx, and Mill, but their ideas, for better or worse, influence our political processes. Locke's political philosophy, for example, underlies the U.S. government. Marx's political philosophy, filtered through Lenin and Stalin, shaped the Soviet Union. One of the most important questions in political philosophy is just how far government should go in intervening in the everyday affairs of civil society. This question is the reason for gridlock in American politics. Some Americans believe that government should have a minimal role in those affairs, whereas other Americans believe that government should play a large role in those affairs. The upshot is that this is a philosophical dispute.

  5. Philosophy majors perform very well on the GRE. Why? Because the study of philosophy flexes verbal, analytic, and quantitative muscles. Philosophy majors need to interpret texts, re-construct arguments, and understand formal systems of logic. Moreover, several famous executives, along with many other successful people, majored in philosophy.

I suspect that the root of philosophy hatred is that people want philosophy to do something other than what it does. This is misguided, because, again, philosophy is the foundation of rational thought and logic. No, philosophy won't solve world hunger, drive you to the airport, or fix you a gourmet meal. But it will give you the tools to analyze arguments and clarify difficult problems.

Monday, January 20, 2014

Martin Luther King, Jr.'s "Letter from Birmingham Jail" and the Philosophy of Law

Although Martin Luther King, Jr. was not a philosopher, one of the best introductions to the philosophy of law is King's "Letter from Birmingham Jail." In 1963, King peacefully demonstrated for civil rights in Birmingham, Alabama, even though a judge had declared such demonstrations to be illegal. He was arrested and taken to Birmingham jail. While imprisoned, King read a newspaper article written by several clergymen criticizing his methods and calling on civil rights advocates to be more patient and not to violate the law. He wrote "Letter" in response. What's philosophically interesting about this exchange between King and his critics is that it illustrates two opposing philosophies of law: Natural Law Theory (NLT) and Legal Positivism (LP).

NLT and LP take opposing views on the nature of law:

Natural Law Theory:
  • The law is shaped by prior moral laws.
  • The law must not conflict with moral law.
  • A bad law is not a law.
  • We are not obligated to obey bad laws.

Legal Positivism:
  • The law is the decree of a sovereign authority.
  • The law does not take into account moral law.
  • A bad law is still a law.
  • We are obligated to obey all laws.

NLT has its roots in Augustine and Aquinas, both of whom understood human law to be subordinate to higher forms of law. LP has its roots in Plato's Crito and Thomas Hobbes' Leviathan. Legal positivists believe that a law is nothing more than the decree of a sovereign authority and is not subordinate to any higher forms of law.

A key argument made by legal positivists is that it's inconsistent to obey some laws and not others. Indeed, Socrates chose to drink hemlock rather than to escape Athens on the grounds that it would be inconsistent for him to disobey Athenian law after a lifetime of benefiting from Athenian law (see the Crito). King's critics accused King of being inconsistent for obeying some laws but not others. King responded by invoking Natural Law:

You express a great deal of anxiety over our willingness to break laws. This is certainly a legitimate concern. Since we so diligently urge people to obey the Supreme Court's decision of 1954 outlawing segregation in the public schools, at first glance it may seem rather paradoxical for us consciously to break laws. One may want to ask: "How can you advocate breaking some laws and obeying others?" The answer lies in the fact that there are two types of laws: just and unjust. I would be the first to advocate obeying just laws. One has not only a legal but a moral responsibility to obey just laws. Conversely, one has a moral responsibility to disobey unjust laws. I would agree with St. Augustine that "an unjust law is no law at all."

This is classic NLT; a law that contradicts moral law is not a law at all, and we are not obligated to obey such laws:

How does one determine whether a law is just or unjust? A just law is a man-made code that squares with the moral law or the law of God. An unjust law is a code that is out of harmony with the moral law. To put it in the terms of St. Thomas Aquinas: An unjust law is a human law that is not rooted in eternal law and natural law. Any law that uplifts human personality is just. Any law that degrades human personality is unjust.

"Letter" is a powerful expression of Natural Law Theory and a masterpiece of argumentation. It's also a good example of how philosophy can inform the way we think about public policy.

Further reading:

"Letter from Birmingham Jail" by Martin Luther King, Jr.

Crito by Plato.

Wednesday, January 8, 2014

Video Games as Art

Can a video game be a work of art? In 2010, Roger Ebert angered gamers by insisting that "video games can never be art." The response to Ebert was book-length. The debate renewed recently when the Museum of Modern Art in New York announced its intention to display several video games in its Architecture and Design collection. Jonathan Jones of the Guardian responded with "Sorry MoMA, video games are not art." Not surprisingly, the counter-response to Jones was heated. I don't necessarily endorse the arguments made either by Ebert or Jones, but I agree that no video game can be a work of art.

Let's clear some brush. It's notoriously difficult to specify the necessary and sufficient conditions that make something a work of art. Nonetheless, it's not quite so difficult to specify at least one necessary condition that something has to meet in order to be a work of art. If our candidate doesn't meet that condition, then it's not a work of art. That's my approach here. My argument runs:
  1. Something is a work of art only if it is an object of aesthetic contemplation.
  2. Video games are not objects of aesthetic contemplation.
  3. Therefore, video games are not works of art.
Both of these premises are likely to raise eyebrows, if not blood pressure, so let me defend them. With regard to the first premise, think about the items that we put in the category "works of art": paintings, sculptures, poems, novels, short stories, plays, films, and musical compositions. Now think about things that we don't put in that category: lectures, non-fiction books, instructional manuals, rodeos, and bingo games. What these two categories have in common is that they include things intended for an audience. However, the items in the category "works of art" are intended to evoke an aesthetic reaction in the audience; the items in the other category are meant to edify, instruct, or entertain, but they don't evoke an aesthetic reaction. Grandma might yell "bingo!" with excitement, but that's not an aesthetic response. A lecturer might move an audience to tears, but that, too, is not an aesthetic response. Whatever a work of art is, it is intended to evoke an aesthetic response and hence is intended to be an object of aesthetic contemplation.

So why isn't a video game an object of aesthetic contemplation? After all, MoMA has put several video games on display. True enough, but let's think about the logistics of putting a video game on display in an art museum (indeed, MoMA has to deal with these logistics). Suppose we're the curator of an art museum, and we've been tasked with putting video games on display. Our first video game is Pac-Man in one of its original cabinet incarnations.


We put a Pac-Man cabinet on display with the requisite captions, and now patrons can view Pac-Man as easily as they can view the Mona Lisa. Hence, both are objects of aesthetic contemplation, right? Well, not quite. Here's the problem: we've displayed the Pac-Man cabinet, but we haven't displayed Pac-Man the video game. A video game is a computer program that is executed by hardware and displays a virtual world containing a challenge for a player to overcome. A video game is fully realized only when it is played.

Paola Antonelli, the curator of MoMA's video game collection, recognizes this problem:
For games that take longer to play, but still require interaction for full appreciation, an interactive demonstration, in which the game can be played for a limited amount of time, will be the answer. In concert with programmers and designers, we will devise a way to play a game for a limited time and enable visitors to experience the game firsthand, without frustrations.
The problem with this solution is that at best it only presents a slice of the video game, and at worst it turns the video game into an interactive movie or demo. To sharpen this point, let's imagine that we're tasked with putting Bioshock Infinite on display. Bioshock Infinite is a visually stunning game with a complex story; if any video game is a work of art, it is a work of art. However, can we make it into a work of aesthetic contemplation?

To answer this, let's expand the collection in our museum. First, we have the Mona Lisa. Next, we have a large television screen that plays Citizen Kane from start to finish in a continuous loop. Next to the screen is a quad-core computer with a high-end video card and monitor. Each day, before we open the doors to visitors, we turn on the computer and start Bioshock Infinite. Our visitors can gaze upon the Mona Lisa. They can watch Citizen Kane from start to finish. What, however, can they do with Bioshock Infinite? They can look at the computer; they can look at the monitor; and they can gaze upon the opening screen that tells them to "press any key":


At no point has Bioshock Infinite become an object of contemplation, let alone an object of aesthetic contemplation.

Let's, then, take Antonelli's approach and "devise a way to play a game." I assume that this means that visitors will be able to play a game for some amount of time. Note that the solution cannot be to record someone playing a game and then to display the resulting footage, because that's not the same thing as putting an actual video game on display. Suppose, then, that a visitor steps up to the computer running Bioshock Infinite and begins to play. What, now, do visitors see? They see a human playing Bioshock Infinite. They can contemplate this scene all they want, but it is not an object of aesthetic contemplation. It's a scene that takes place in homes all over the world. What, then, about the player? The player is, to state the obvious, playing a game. The player could do the same thing at home.

Indeed, this sets up a very basic argument against video games as art:
  1. No game is a work of art.
  2. All video games are games.
  3. Therefore, no video game is a work of art.
Let me defend the first premise by enumerative induction. Checkers is not a work of art; tennis is not a work of art; charades is not a work of art; tiddlywinks is not a work of art; Hungry Hungry Hippos is not a work of art; rock-paper-scissors is not a work of art; and so on. No game is a work of art. Hence, no video game is a work of art.

Now let's take the sting out of this conclusion: to say that something is not a work of art is not to say that it is trivial, less worthy, or lacking in gravitas. I suspect that many gamers perceive the claim that video games are not art as a slight against video games. However, whether a thing belongs in a category hinges on the properties of that thing, not on the perceived motives of those who deny that it belongs there.

One final point: a video game is not a work of art, but it has artistic elements, namely, graphics and music. In fact, Jeremy Soule's soundtracks to the Elder Scrolls series are works of art that can be enjoyed aesthetically on their own. Likewise, the visual assets of a video game can be enjoyed aesthetically on their own. Modern video games are not possible without artists and musicians, so modern video games do have an intimate connection to art. Bioshock Infinite would not be possible without artists, musicians, and actors. However, when all is said and done, it is not a work of art, because it is a game.

Monday, January 6, 2014

The Monty Hall Problem

Wacky game show host Monty Hall is giving you a chance to win a Ferrari. He has hidden the Ferrari behind one door, a goat behind another door, and a goat behind yet another door. He then asks you to choose a door, and you do so. Instead of opening the door you picked, Monty opens a different door, one with a goat behind it. (Monty never opens a door with a Ferrari behind it.)

Now he gives you a choice: keep what's behind the door you picked or choose the other unopened door. What should you do? It turns out that you should change doors, because doing so increases your odds of winning the Ferrari. The puzzle is explaining why this is the case when it looks as if your odds are 50/50. After all, there are now only two doors and a Ferrari behind one--that’s one desired outcome and two possible outcomes. Why does it matter which one you choose?

The reaction to this puzzle is perhaps more interesting than the puzzle itself. Most people, academics and non-academics alike, insist on the intuitive explanation, i.e., that it makes no difference whether you switch or not. One of Marilyn vos Savant's most famous columns discussed the puzzle, and she received a heated response from her audience for saying that you should switch. However, it can be demonstrated empirically that the probability of winning after switching is higher than the probability of winning when sticking to your original choice. There are computer programs showing that you win more times by switching than you do by sticking. So we need an explanation.
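That empirical claim is easy to reproduce. Here is a minimal simulation of the three-door game (a sketch; the door labels and trial count are arbitrary choices of mine):

```python
import random

def play(switch):
    """One round of the three-door Monty Hall game; returns True on a win."""
    car = random.randrange(3)
    choice = random.randrange(3)
    # Monty opens a door that hides a goat and isn't the player's pick.
    opened = random.choice([d for d in range(3) if d != car and d != choice])
    if switch:
        # Exactly one door remains closed and unpicked; switch to it.
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == car

trials = 100_000
stick = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stick: {stick:.3f}, switch: {swap:.3f}")  # roughly 0.333 vs 0.667
```

Run it a few times: sticking hovers around 1/3 and switching around 2/3, which is the result the intuitive 50/50 answer has to explain away.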

Let's start by altering the story a bit--but this won't change anything substantial. We're going to name each goat. Let's call one George and the other Harry. So now we have a Ferrari, George, and Harry, each behind a door. The probability of choosing any one of them is 1 in 3. The probability of choosing the Ferrari or George or Harry is 1; in other words, there is a 100% chance that you are going to pick a door, so long as you play the game. That sounds obvious, but it's going to help drive the explanation home. Another thing to keep in mind is that your chance of winning (i.e., picking the Ferrari) is 1/3 but your chance of losing (i.e., picking a goat) is 2/3. Let’s recap:

  • Pr of choosing F = 1/3
  • Pr of choosing G = 1/3
  • Pr of choosing H = 1/3
  • Pr of choosing any door = 1
  • Pr of choosing either G or H is 2/3

Now you choose a door. Monty goes to work and opens one of the doors you didn't pick. George is behind it munching on some hay. So the possibility of choosing George has been eliminated. Nonetheless, you're given a second choice to make: stick or switch. Note that the probability of choosing a door is still 1. Hence, one way of stating the puzzle is this: how should the remaining probabilities be reassigned so that they still sum to 1? Let's illustrate this by removing George from our list:

  • Pr of choosing F = 1/3
  • Pr of choosing H = 1/3
  • Pr of choosing any door = 1

The problem is that 1/3 and 1/3 don’t make 1. So the probabilities must be recalculated. But how? The intuitive solution is this:

  • Pr of choosing F = 1/2
  • Pr of choosing H = 1/2
  • Pr of choosing any door = 1

Because 1/2 and 1/2 do make 1, it looks as if we've solved the problem. However, according to the empirical evidence, this is the solution:

  • Pr of choosing F = 1/3
  • Pr of choosing H = 2/3
  • Pr of choosing any door = 1

This gives us the same result: it keeps the probability of choosing a door at 1. It is also supported by the empirical evidence. So what has gone wrong with the former solution? In a nutshell, it neglects to take into account new information that is revealed when Monty opens a goat door.

Consider again: you initially have a 1/3 chance of winning the Ferrari and a 2/3 chance of not winning it; you have a low chance of winning and a high chance of losing. To put this into sharper contrast, suppose that there are a hundred doors. Now you have a 1/100 or 1% chance of winning and a 99/100 or 99% chance of losing. That means that the winning door is most likely in the set of doors you didn't choose. This fact doesn't change when Monty begins opening doors. What does change is the cardinal number of the set of closed doors--it gets smaller. (A set's cardinal number is the number of members in the set.) But remember, the probability of the winning door being in the set of doors you didn't choose stays the same. To make this clearer:

  • Let X = set of doors you didn’t initially choose
  • Let Y = set of open doors
  • Let Z = set of closed doors

X and Z initially have the same cardinal number: 99. Monty opens one of the doors. Z now has 98 doors and Y has 1 door. The cardinal number of Y continues to increase, and the cardinal number of Z continues to decrease. The probability, as we've seen, that the winning door is in X is 99%. But the cardinal number of X is simply the sum of the cardinal numbers of Y and Z; the cardinal number of X hasn't changed, and hence neither has the 99% probability that X has the winning door. What has changed is the number of doors in Z, a number which gets smaller and smaller as Monty opens more goat doors. Eventually Z is left with one member, and since that member is the only closed door in a set of doors having a 99% chance of containing the Ferrari door, it has a 99% chance of being the Ferrari door.

Therefore, you should switch when Monty gives you the chance. The same is true when there are only three doors, although the chance of winning is 2/3 rather than 99/100--still good odds in your favor. The heart of the paradox is that Monty always opens a goat door, and in so doing gives us information about the remaining closed doors belonging to the set of unpicked doors. Monty is telling us: "The likelihood of your door being the winning door is lower than the likelihood of the winning door being in the set of doors you didn't pick. But I’m going to help you out. I’m going to eliminate possible winners out of that high-probability set, and what's left over is still going to have the high probability of the entire set. So the smart money is on the door that I don't open, and the safe bet is to switch your allegiance to that door.”
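The many-door version of the argument can be simulated as well. In the sketch below, Monty opens every unpicked goat door except one, and the switching player takes the door he leaves closed; the door counts and trial count are arbitrary:

```python
import random

def switch_wins(n_doors):
    """Monty opens all unpicked goat doors but one; the player then switches."""
    car = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    if choice == car:
        # Monty leaves a random goat door closed; switching loses.
        final = random.choice([d for d in range(n_doors) if d != choice])
    else:
        # Monty must leave the car closed; switching wins.
        final = car
    return final == car

trials = 50_000
results = {}
for n in (3, 100):
    results[n] = sum(switch_wins(n) for _ in range(trials)) / trials
    print(f"{n} doors: switching wins {results[n]:.3f} of the time")
```

The win rate for the switcher comes out near (n - 1)/n: about 2/3 with three doors and about 99% with a hundred, matching the set-based explanation above.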

Further reading

"The Monty Hall Problem" by Dr. Math.

"Game Show Problem" by Marilyn vos Savant.