Thomas Bayes died over 200 years ago, but his legacy is still with us and provides some very useful insights into probability. What is his legacy? It is a probability formula that tells us how to update probabilities in light of new information.
Suppose, for instance, you learn that Fred is a physical fitness fanatic who works out in the weight room at 7:00pm every Monday, Wednesday, and Friday. Let’s say he’s been doing this for five years straight, without missing a workout. Suppose today is Monday. It’s morning. What’s the probability that Fred will be working out in the weight room tonight? Pretty high, you say, even close to 1.
That’s right. But suppose you now also learn that Fred was hit last night by a Mack truck and is in the ICU. What does that do to your probability of Fred working out tonight? Your probability will drop precipitously. This new information about Fred being hit by a truck, when factored in with the old information about Fred being a fitness fanatic, drastically changes the probability of Fred working out tonight.
That’s the essence of Bayes’ Theorem, factoring new information in with old information to determine the updated probability of some event, hypothesis, or state of affairs. Bayes’ Theorem does this updating with mathematical precision.
To see how this works in a very practical situation, imagine you’re worried that you have some rare disease. Let’s say the disease has an incidence of only one in a thousand in the general population. Suppose next you have some blood work done to check whether you have the disease, and it comes back positive. How likely is it that you have the disease?
The problem is that all medical tests are fallible, giving false positives (i.e., outcomes that indicate the presence of the disease when in fact it is not present) as well as false negatives (i.e., outcomes that indicate the absence of the disease when in fact it is present). Suppose that this test gives one percent false positives and one percent false negatives (this would be a remarkably good medical test; most are not as reliable).
So, coming into the test, you had a 1 in 1,000 probability of having the disease. Having taken the test, which has now come back positive, what’s your probability of having the disease? Many people, looking simply at the proportion of false positives, would say that they now have a 99 in 100 probability of having the disease. After all, the test came back positive and the test delivers false positives only one percent of the time.
But, in fact, your chance of having the disease is much lower than 99 in 100. That’s because the one-percent false-positive rate describes how the test behaves among people who don’t have the disease, and that disease-free group makes up nearly the whole population. Coming to the test, you had only a 1 in 1,000 probability of actually having the disease, so even a one-percent error rate applied to the other 999 in 1,000 produces far more false positives than there are true positives.
If you now apply Bayes’ Theorem, it turns out that your actual probability of having the disease, even given that your test came back positive, is only 9 in 100. So it’s still quite unlikely that you have the disease.
In this calculation, three probabilities end up being crucial: (1) the prior probability of having the disease, which is determined by demographic data and equals .001; (2) the probability of the test coming back positive given that you actually have the disease, known as the likelihood, which in this case is .99; (3) the posterior probability that you actually have the disease given that your test came back positive, which equals .09.
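Carrying out the calculation with these three numbers is straightforward. Here is a minimal Python sketch (the variable names are my own; the figures come from the example above):

```python
# Numbers from the medical-test example in the text.
prior = 0.001            # 1 in 1,000 incidence in the general population
sensitivity = 0.99       # P(positive | disease): 1% false negatives
false_positive = 0.01    # P(positive | no disease): 1% false positives

# Total probability of a positive test, summed over both groups:
# the rare true positives plus the many false positives.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' Theorem: P(disease | positive test).
posterior = sensitivity * prior / p_positive
print(round(posterior, 2))  # 0.09 -- about 9 in 100
```

Notice that the denominator is dominated by the false positives from the huge disease-free group, which is exactly why the posterior stays low.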
Basically, Bayes’ Theorem says that the posterior probability (i.e., the probability given the most up-to-date information, which is the probability you most want to know) equals the prior probability times the likelihood (along with some constant factor):
posterior probability = constant x likelihood x prior probability.
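The constant in this formula is simply whatever number makes the posteriors of the competing hypotheses sum to 1. A short sketch, using the numbers from the medical-test example (the dictionary layout and names are my own):

```python
# Two competing hypotheses about your condition, with the priors and
# likelihoods from the medical-test example (likelihood = probability
# of a positive test given each hypothesis).
priors = {"disease": 0.001, "no disease": 0.999}
likelihoods = {"disease": 0.99, "no disease": 0.01}

# Multiply likelihood by prior for each hypothesis (the unnormalized posteriors).
unnormalized = {h: likelihoods[h] * priors[h] for h in priors}

# The "constant factor" normalizes them so they sum to 1.
constant = 1 / sum(unnormalized.values())

posteriors = {h: constant * unnormalized[h] for h in priors}
print(round(posteriors["disease"], 2))  # 0.09
```

The same recipe works for any number of competing hypotheses: multiply each prior by its likelihood, then rescale so the results sum to 1.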
Merely eyeballing this formula, even without knowing precise values, tells you a lot. Suppose your prior probability is extremely low. For instance, you might have a very low prior probability that God exists. But then you come across some event (a miracle?) whose likelihood is very high given God’s existence. Depending on the precise details, your posterior probability that God exists could rise considerably.
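To make “could rise considerably” concrete, here is a toy calculation. The numbers below are purely illustrative assumptions, invented to show the mechanics rather than to settle anything:

```python
# Illustrative numbers only: a very low prior paired with an event that is
# far more likely under the hypothesis than under its negation.
prior = 0.001            # very low prior probability of the hypothesis
p_e_given_h = 0.9        # likelihood of the event if the hypothesis is true
p_e_given_not_h = 0.001  # likelihood of the event if the hypothesis is false

# Bayes' Theorem: posterior probability of the hypothesis given the event.
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(round(posterior, 2))  # 0.47 -- the low prior has risen dramatically
```

With these particular inputs, a hypothesis that started at 1 in 1,000 ends up at nearly even odds; change the likelihoods and the posterior changes accordingly, which is the whole point of the disputes described next.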
Arguments such as this abound in the theistic and atheistic literature, with theists arguing that Bayes’ Theorem confirms God’s existence and atheists arguing that it disconfirms God’s existence. Both are using Bayes’ Theorem, but plugging in different numbers for the prior probabilities and likelihoods, and thus getting different answers.
Who’s right? That’s tough to answer. Unlike medical tests, where the prior probabilities are well settled through demographic data and where the likelihoods can be precisely determined experimentally, in arguments over God’s existence, prior probabilities tend to be quite subjective and the likelihoods are not much better.
We therefore see that conclusions drawn from Bayes’ Theorem are only as reliable as the numbers put into it. Bayes’ Theorem works great for medical tests, much less well for arguments in the philosophy of religion.
So there you have the bare bones of Bayes’ Theorem. It provides a mathematical calculus for updating probabilities in light of new information or evidence. It works by relating prior probabilities and likelihoods to posterior probabilities, which are the updated probabilities that we’re most interested in.
The one thing we haven’t done here is give a precise mathematical formulation of Bayes’ Theorem. We’ll leave that to you, the reader. There are plenty of places on the web that do this. Keep this article on hand if you want to understand what that mathematical formulation really means.