Living in a Bayesian world

October 30, 2009 in Math

Increasingly, I've noticed in my discussions with statisticians and practitioners a reliance on Bayesian methods. Bayesian statistics treat a hypothesis as something uncertain: its probability is updated as new information becomes available. Bayesian analyses also rely heavily on conditional probabilities, that is, on likelihoods that depend on the occurrence of related events. One of the biggest Bayesian proponents is Professor Andrew Gelman, who maintains an excellent blog on the subject.

In some ways, Bayesian methods have become a bit fad-like and, as with many fads (I'm looking at you, VaR), there should be concern that they will be applied blindly, without thought. Like anything else, it's possible to do Bayesian statistics wrong - and even extremely wrong - but when wielded correctly, they make for an excellent investigative resource.

New Scientist has an article on the use - and misuse - of probability in criminal cases. Naturally, it focuses on Bayesian statistics. The key point the article makes is that while it's important to consider the odds of something happening, it is just as critical to account for the odds of the same thing turning up by chance alone. That may seem contradictory (isn't an event's likelihood, by definition, the probability that it happens by chance?), so let's use a classic example, lifted from the article:

You have just tested positive for a disease that affects 1 in every 10,000 people. The test is 99% accurate. On the surface, that sounds like a reliable diagnosis, and most people would say they are 99% confident that they do, in fact, have the disease. But consider the following: if 10,000 people took the same test, the 1 person who actually has the disease would almost certainly test positive, but about 100 of the 9,999 healthy people would also test positive just by chance. Therefore, among people who have tested positive, there is only about a 1% chance (1 true positive out of roughly 101 positives) of actually having the disease - not the 99% likelihood we naively assumed before!
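To make the arithmetic concrete, here is a minimal sketch in Python of the same calculation. It assumes a prevalence of 1 in 10,000 and a test that is 99% accurate in both directions (the example doesn't distinguish sensitivity from specificity, so both are taken to be 99% here):

```python
# Disease-test example: prevalence 1 in 10,000, test 99% accurate.
prevalence = 1 / 10_000        # P(disease)
sensitivity = 0.99             # P(positive | disease)
false_positive_rate = 0.01     # P(positive | no disease)

# Total probability of testing positive.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' rule: P(disease | positive test)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.2%}")
# Prints roughly 0.98% -- about 1%, not 99%.
```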

How does that work - wasn't there only a 1% chance of the test being wrong? Well, yes - but if you think about it, that 1% chance of error is much larger than the 0.01% chance of having the disease in the first place, and the test result must be placed in that context. For the more visually minded readers, the New Scientist article includes a helpful diagram of this breakdown.
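Written out with Bayes' theorem (using the same assumed numbers: 0.01% prevalence, 99% accuracy in both directions), the calculation is:

$$
P(\text{disease} \mid +) = \frac{P(+ \mid \text{disease})\,P(\text{disease})}{P(+ \mid \text{disease})\,P(\text{disease}) + P(+ \mid \text{no disease})\,P(\text{no disease})} = \frac{0.99 \times 0.0001}{0.99 \times 0.0001 + 0.01 \times 0.9999} \approx 0.0098,
$$

or just under 1%.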

The false positive problem is a classic textbook example of how Bayesian reasoning (that is, accounting for the ways in which chance can manifest itself) can affect a seemingly obvious result. It's a very important consideration which could be overlooked without care. And besides, it makes for interesting pop sci articles.
