Beliefs and Uncertainty: A Bayesian Primer

Those who follow discussions regarding Global Warming and Climate Change have heard from time to time about Bayes' Theorem. And Bayes is quite topical in many aspects of modern society:

Bayesian statistics “are rippling through everything from physics to cancer research, ecology to psychology,” The New York Times reports. Physicists have proposed Bayesian interpretations of quantum mechanics and Bayesian defenses of string and multiverse theories. Philosophers assert that science as a whole can be viewed as a Bayesian process, and that Bayes can distinguish science from pseudoscience more precisely than falsification, the method popularized by Karl Popper.

Named after its inventor, the 18th-century Presbyterian minister Thomas Bayes, Bayes’ theorem is a method for calculating the validity of beliefs (hypotheses, claims, propositions) based on the best available evidence (observations, data, information). Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.   (A fuller and more technical description is below for the more mathematically inclined.)

Now that doesn’t sound so special, but in fact as you will see below, our intuition about probabilities is often misleading. Consider the classic Monty Hall Problem.

The Monty Hall Game is a counter-intuitive statistics puzzle:

There are 3 doors, behind which are two goats and a car.
You pick a door (call it door A). You’re hoping for the car of course.
Monty Hall, the game show host, examines the other doors (B & C) and always opens one of them to reveal a goat. (Both doors might have goats; he'll randomly pick one to open.)
Here’s the game: Do you stick with door A (original guess) or switch to the other unopened door? Does it matter?

Surprisingly, the odds aren’t 50-50. If you switch doors you’ll win 2/3 of the time!

Don’t believe it? There’s a Monty Hall game (here) where you can prove to yourself by experience that your success rate doubles when you change your choice after Monty eliminates one of the doors. Run the game 100 times, either keeping your choice or changing it, and see the result.
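If you would rather not click through 100 rounds by hand, here is a minimal Python sketch (my own simulation, not the linked game) of the two strategies over many trials:

```python
import random

def play_monty_hall(switch, trials=100_000):
    """Play many rounds of the Monty Hall game; return the fraction won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # door hiding the car
        pick = random.randrange(3)     # contestant's first choice (door "A")
        # Monty opens a goat door that is neither the pick nor the car
        monty = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # move to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != monty)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play_monty_hall(switch=False))  # about 1/3
print("switch:", play_monty_hall(switch=True))   # about 2/3
```

Over 100,000 simulated games the "stay" strategy wins about a third of the time and the "switch" strategy about two thirds, matching the claim above.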

The game is really about re-evaluating your decisions as new information emerges. There’s another example regarding race horses here.

The Principle Underlying Bayes' Theorem

Like any tool, Bayes' method of inference is a two-edged sword, as explored in an article by John Horgan in Scientific American (here):
“Bayes’s Theorem: What’s the Big Deal?
Bayes’s theorem, touted as a powerful method for generating knowledge, can also be used to promote superstition and pseudoscience”

Here is my more general statement of that principle: The plausibility of your belief depends on the degree to which your belief–and only your belief–explains the evidence for it. The more alternative explanations there are for the evidence, the less plausible your belief is. That, to me, is the essence of Bayes’ theorem.

“Alternative explanations” can encompass many things. Your evidence might be erroneous, skewed by a malfunctioning instrument, faulty analysis, confirmation bias, even fraud. Your evidence might be sound but explicable by many beliefs, or hypotheses, other than yours.

In other words, there’s nothing magical about Bayes’ theorem. It boils down to the truism that your belief is only as valid as its evidence. If you have good evidence, Bayes’ theorem can yield good results. If your evidence is flimsy, Bayes’ theorem won’t be of much use. Garbage in, garbage out.

Embedded in Bayes’ theorem is a moral message: If you aren’t scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe. Scientists often fail to heed this dictum, which helps explain why so many scientific claims turn out to be erroneous. Bayesians claim that their methods can help scientists overcome confirmation bias and produce more reliable results, but I have my doubts.

Horgan’s statement comes very close to the legal test articulated by Bradford Hill and widely used by courts to determine causation and liability in relation to products, medical treatments or working conditions.

By way of context, Bradford Hill says this:

None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?

Such is the legal terminology for the “null” hypothesis: As long as there is another equally or more likely explanation for the set of facts, the claimed causation is unproven.  For more see the post: Claim: Fossil Fuels Cause Global Warming

Limitations of Bayesian Statistics

From the above it should be clear that Bayesian inferences can be drawn when there are definite outcomes of interest and historical evidence of conditions that are predictive of one outcome or another. For example, my home weather sensor from Oregon Scientific predicts rain whenever air pressure drops significantly; based on that one condition, the forecast is accurate about 75% of the time. The Weather Network adds several other variables, which raises the predictive probability, though not always for the outcomes in my backyard.
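As a minimal sketch of that kind of update, take some made-up numbers (the post gives only the 75% figure): suppose it rains on 30% of days, the pressure drops on 90% of rainy days, and it also drops on 13% of dry days. Bayes' theorem then gives back roughly that 75%:

```python
# Illustrative numbers only; the post reports just the 75% hit rate.
p_rain = 0.30              # prior: fraction of days with rain
p_drop_given_rain = 0.90   # significant pressure drop on rainy days
p_drop_given_dry = 0.13    # significant pressure drop on dry days

# Total probability of seeing a pressure drop
p_drop = p_drop_given_rain * p_rain + p_drop_given_dry * (1 - p_rain)

# Bayes: P(rain | drop) = P(drop | rain) * P(rain) / P(drop)
p_rain_given_drop = p_drop_given_rain * p_rain / p_drop
print(round(p_rain_given_drop, 2))   # ~0.75: the forecast is right about 75% of the time
```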

When it comes to the response of GMT (Global Mean Temperatures) to increasing CO2 concentrations, or many other climate concerns, we currently lack the historical probabilities because we have yet to untangle the long-term secular trends from the noise of ongoing, normal and natural variability.

Andrew Gelman writes on Bayesian statistical methods and says this:

In short, I think Bayesian methods are a great way to do inference within a model, but not in general a good way to assess the probability that a model or hypothesis is true (indeed, I think ‘the probability that a model or a hypothesis is true’ is generally a meaningless statement except as noted in certain narrow albeit important examples).

A Fuller (more technical) Description of Bayes' Theorem

The probability that a belief is true given new evidence
equals
the probability that the belief is true regardless of that evidence
times
the probability that the evidence is true given that the belief is true
divided by
the probability that the evidence is true regardless of whether the belief is true.
Got that?

The basic mathematical formula takes this form: P(B|E) = P(B) * P(E|B) / P(E), with P standing for probability, B for belief and E for evidence. P(B) is the probability that B is true, and P(E) is the probability that E is true. P(B|E) means the probability of B if E is true, and P(E|B) is the probability of E if B is true.
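In code, the same update is a one-liner; the function name and the numbers in the example are mine, purely to show the mechanics:

```python
def bayes_update(p_belief, p_evidence_given_belief, p_evidence):
    """P(B|E) = P(B) * P(E|B) / P(E)"""
    return p_belief * p_evidence_given_belief / p_evidence

# Illustrative numbers: prior P(B) = 0.5, and the evidence is twice as
# likely when the belief is true (0.8) as when it is false (0.4).
p_b, p_e_given_b, p_e_given_not_b = 0.5, 0.8, 0.4
p_e = p_e_given_b * p_b + p_e_given_not_b * (1 - p_b)   # total probability of E
print(bayes_update(p_b, p_e_given_b, p_e))               # 0.666..., the belief is strengthened
```

If the evidence is more likely under the belief than overall, P(E|B) > P(E), the update raises the probability of the belief; if it is less likely, the update lowers it.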

Applying the theorem to real-world tests highlights some important facts to remember about Beliefs and Uncertainties:

Tests are not the event. We have a cancer test, separate from the event of actually having cancer. We have a test for spam, separate from the event of actually having a spam message.

Tests are flawed. Tests detect things that don’t exist (false positive), and miss things that do exist (false negative).

Tests give us test probabilities, not the real probabilities. People often consider the test results directly, without considering the errors in the tests.

False positives skew results. Suppose you are searching for something really rare (1 in a million). Even with a good test, it’s likely that a positive result is really a false positive on somebody among the other 999,999.
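Putting rough numbers on that last point (the 1-in-a-million rarity comes from the bullet above; the 99% test accuracy is an assumption for illustration):

```python
# Assumptions for illustration: a 1-in-a-million condition and a test that
# catches 99% of real cases while flagging 1% of healthy cases.
prevalence = 1 / 1_000_000
sensitivity = 0.99            # P(positive | condition)
false_positive_rate = 0.01    # P(positive | no condition)

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(p_condition_given_positive)   # ~0.0001: a positive is almost certainly a false alarm
```

Even though the test is 99% accurate, only about 1 positive in 10,000 is a true positive, because the huge pool of unaffected people supplies far more false alarms than the tiny pool of real cases.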

9 comments

  1. hunter · May 23

    For years I have agreed with the assertion that the popularity of the climate obsession is a symptom of a larger problem. This focus on the devolution from Popper to what is in effect neo-Bayesian science is a good perspective that helps clarify the larger issue.

    • Ron Clutz · May 23

      Yes hunter, the way William Briggs puts it: “A feeling is not a probability.” Or his other famous saying: “Love of theory is the root of all evil.”
      The irony is that Bayes method actually requires us to change our minds when confronted with new evidence. The problem is our human survival instinct is to form patterns to identify dangers and opportunities in our environment, and we are psychologically and emotionally committed to retaining things that have worked for us in the past. It is all too human to discount contrary evidence because it is too much work to shift our mental models.

      • hunter · May 23

        Ron,
        Well stated- you outline one of the important reasons that extreme caution in writing off Popper/classic scientific methods is justified. Our propensity is to fall into faith-based/confirmation bias.
        The most extreme example of this has been the claim by some in the climate-obsessed community to the effect that “nature has a ‘progressive’ bias.” That sort of assertion fails on so many levels. Yet our President and many of those in power elsewhere frequently parrot a version of that sort of tripe.

  2. manicbeancounter · May 24

    Reblogged this on ManicBeancounter and commented:
    Ron Clutz’s introduction, based on a Scientific American article by John Horgan on January 4, 2016, starts to grapple with the issues involved.
    With respect to the question of whether global warming is human caused, there is basically a combination of three elements – (i) Human caused (ii) Naturally caused (iii) Random chaotic variation. There may be a number of sub-elements and an infinite number of combinations including some elements counteracting others, such as El Nino events counteracting underlying warming. Evaluation of new evidence is in the context of explanations being arrived at within a community of climatologists with strong shared beliefs that at least 100% of recent warming is due to human GHG emissions. It is that same community who also decide the measurement techniques for assessing the temperature data; the relevant time frames; and the categorization of the new data. With complex decisions the only clear decision criterion is conformity to the existing consensus conclusions. As a result, the original Bayesian estimates become virtually impervious to new perspectives or evidence that contradicts those original estimates.

    • Steven Mosher · July 7

      ” Evaluation of new evidence is in the context of explanations being arrived at within a community of climatologists with strong shared beliefs that at least 100% of recent warming is due to human GHG emissions. It is that same community who also decide the measurement techniques for assessing the temperature data; the relevant time frames; and the categorization of the new data.”

      not really.
      a good number of skeptics tried to look at global temperatures. they got the same answer
      Time frames? we looked at all the data back to the beginning of instrumentation

      here is the problem.

      Suppose you have the Popper religion (it’s a religion, yes)
      If you believe in Popper then you should go out and do your own falsification.
      Guess what… skeptics won’t.

      Suppose you have the Bayes religion.. same thing

      Bottom line. Skeptics who have actually done the work quickly drop it.

      other skeptics? they won’t try or can’t try.

      bad practice if they are Popperians, and bad if they are Bayesians.

      In fact skepticism isn’t science.

      • Ron Clutz · July 7

        Wow! Throw out Popper. Throw out Bayes. Throw out skepticism.
        What’s left? Post-normal Science: Reality is what we all agree it is. (Well, “all” means those in our tribe, the others don’t count.)

  3. hunter · May 25

    MB,
    Great summary of how self-selection and motivated thinking can combine to corrupt policy and science.

  4. Pingback: Weekly Climate and Energy News Roundup #232 | Watts Up With That?
  5. Pingback: How strong is the evidence for human-caused global warming? | ManicBeancounter
