Your updated belief = how well the data fits your hypothesis × your prior belief, normalized by the total probability of seeing that data. In symbols: P(H|D) = P(D|H) × P(H) / P(D).
The Intuition: What Bayes' Theorem Does
PRIOR (what you believed before) + NEW DATA (evidence you observed) → POSTERIOR (what you believe now)
Bayes' Theorem is a belief updating machine. It tells you exactly how to rationally update your beliefs when you get new evidence.
The Core Insight
🎯 How Much Should Evidence Move Your Belief?
It depends on two things:
1. How likely is this evidence if your hypothesis is TRUE? This is P(D|H), the likelihood.
2. How likely is this evidence if your hypothesis is FALSE? This is P(D|¬H), the false positive rate.
The ratio of these determines how much the evidence should update your belief:
Likelihood Ratio = P(D|H) / P(D|¬H)
"How much more likely is this evidence if H is true vs if H is false?"
Three Scenarios
Likelihood Ratio > 1: Evidence is more likely if H is true → Belief in H goes UP
Likelihood Ratio = 1: Evidence equally likely either way → Belief unchanged
Likelihood Ratio < 1: Evidence is more likely if H is false → Belief in H goes DOWN
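To make this concrete, here's a minimal Python sketch (not from the original page; the 20% prior and the three ratios are illustrative) that updates a belief in odds form: multiply the prior odds by the likelihood ratio, then convert back to a probability.

```python
def update_belief(prior: float, likelihood_ratio: float) -> float:
    """Odds form of Bayes' Theorem: posterior odds = prior odds x likelihood ratio."""
    prior_odds = prior / (1 - prior)                  # probability -> odds
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)      # odds -> probability

# One run per scenario above: LR > 1, LR = 1, LR < 1
for lr in (3.0, 1.0, 0.5):
    print(f"LR = {lr}: belief goes from 20% to {update_belief(0.20, lr):.0%}")
```

With LR = 3 the belief rises from 20% to about 43%; with LR = 1 it stays at 20%; with LR = 0.5 it falls to about 11%.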
Why the Prior Matters
🔮 Same Evidence, Different Priors → Different Conclusions
Scenario: A medical test that's 95% accurate (95% sensitivity, 95% specificity) comes back positive.
Context                     | Prior P(Disease) | Posterior P(Disease|Positive)
Random person, rare disease | 0.1%             | ~2%
Person with symptoms        | 10%              | ~68%
Family history + symptoms   | 50%              | ~95%
Same test, same result — but vastly different conclusions based on prior probability!
This is why context matters. A positive test means something very different for a symptomatic patient vs a random screening.
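As a sketch (assuming, as the table's numbers imply, a test with 95% sensitivity and a 5% false positive rate), you can reproduce the posteriors above in a few lines of Python:

```python
def posterior(prior: float, sensitivity: float, false_positive: float) -> float:
    """P(Disease | Positive) via Bayes' Theorem."""
    true_pos = sensitivity * prior               # positives from the diseased
    false_alarms = false_positive * (1 - prior)  # positives from the healthy
    return true_pos / (true_pos + false_alarms)

# Same 95%-accurate test, three different priors
for label, prior in [("Random person, rare disease", 0.001),
                     ("Person with symptoms", 0.10),
                     ("Family history + symptoms", 0.50)]:
    print(f"{label}: {prior:.1%} prior -> {posterior(prior, 0.95, 0.05):.0%} posterior")
```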
The Bayesian Mindset:
1. Start with your best estimate (prior)
2. Observe evidence
3. Ask: How much more/less likely is this evidence if my hypothesis is true?
4. Update your belief proportionally
5. Your new belief (posterior) becomes the prior for the next piece of evidence
This is rational belief updating — exactly what science SHOULD do.
Example: Medical Test (Step by Step)
📋 The Scenario
You take a test for a disease. Here are the facts:
Disease prevalence: 1% of people have this disease
Test sensitivity: 90% — if you HAVE the disease, test is positive 90% of the time
Test specificity: 95% — if you DON'T have the disease, test is negative 95% of the time (5% false positive)
You test POSITIVE. What's the probability you actually have the disease?
Step-by-Step Calculation
1. Identify the terms:
P(H) = P(Disease) = 0.01, the prior (1% have the disease)
P(D|H) = 0.90, the likelihood of a positive test given disease (sensitivity)
P(D|¬H) = 0.05, the false positive rate (1 − specificity)
2. Compute the total probability of a positive test:
P(D) = P(D|H)·P(H) + P(D|¬H)·P(¬H) = 0.90 × 0.01 + 0.05 × 0.99 = 0.009 + 0.0495 = 0.0585
3. Apply Bayes' Theorem:
P(H|D) = P(D|H)·P(H) / P(D) = 0.009 / 0.0585 ≈ 15.4%
Despite the positive result, there's only about a 15% chance you actually have the disease: the disease is so rare that false positives (4.95% of everyone tested) far outnumber true positives (0.9%).
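The same arithmetic as a short Python sketch, mirroring the steps above:

```python
# Facts from the scenario
prevalence = 0.01       # P(H): 1% of people have the disease
sensitivity = 0.90      # P(D|H): positive rate if you have it
false_positive = 0.05   # P(D|¬H): 1 - specificity

# Total probability of a positive test, then Bayes' Theorem
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease = sensitivity * prevalence / p_positive
print(f"P(Disease | Positive) = {p_disease:.1%}")  # ≈ 15.4%
```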
Example: Dark Clouds and Rain
Suppose you're wondering whether it will rain, and you see dark clouds. Dark clouds are 3× more likely when it's going to rain vs when it isn't, so the likelihood ratio is 3.
This tells you clouds are decent evidence for rain (ratio > 1), but not overwhelming. If your prior were, say, 30% (odds of 3:7), the clouds would take you to odds of 9:7, about 56%. If clouds were 10× more likely with rain, your belief would update far more dramatically.
Example: Email Spam Filter
This is how Bayesian spam filters actually work!
📧 The Scenario
An email contains the word "FREE". Is it spam?
Base rate: 40% of all emails are spam
P("FREE" | Spam): 70% (spam emails often contain "FREE")
P("FREE" | Not Spam): 10% (legitimate emails sometimes say "FREE")
Applying Bayes' Theorem:
P(Spam | "FREE") = (0.70 × 0.40) / (0.70 × 0.40 + 0.10 × 0.60) = 0.28 / 0.34 ≈ 82%
Seeing "FREE" alone lifts the spam probability from 40% to roughly 82%.
Example: Is the Coin Biased?
Suppose you're flipping a coin to test whether it's fair or biased toward heads, updating your belief after every flip (a sketch follows this list):
• Each piece of evidence moves your belief in the direction it supports
• Heads moves you toward "biased", tails moves you toward "fair"
• You never reach 0% or 100%; you just get more or less confident
• With enough evidence, you'd eventually figure out the truth
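Here's a sketch of that sequential updating, under illustrative assumptions that aren't in the original: "biased" means P(heads) = 0.75, and we start undecided at 50%. Each flip's posterior becomes the next flip's prior.

```python
P_HEADS_IF_BIASED = 0.75   # assumed: the biased coin favors heads 75/25
P_HEADS_IF_FAIR = 0.50

belief = 0.50              # assumed starting prior: P(biased) = 50%
for flip in "HHTHHHTH":    # an illustrative sequence of observed flips
    lik_biased = P_HEADS_IF_BIASED if flip == "H" else 1 - P_HEADS_IF_BIASED
    lik_fair = P_HEADS_IF_FAIR if flip == "H" else 1 - P_HEADS_IF_FAIR
    # Bayes' update; yesterday's posterior is today's prior
    belief = lik_biased * belief / (lik_biased * belief + lik_fair * (1 - belief))
    print(f"{flip}: P(biased) = {belief:.0%}")
```

Notice how the belief rises with each H, dips with each T, and never reaches 0% or 100%.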
This Is How Science SHOULD Work
Bayesian Science:
1. Start with prior beliefs based on existing knowledge
2. Run experiment, collect data
3. Update beliefs based on how much the data supports/refutes hypotheses
4. New posterior = starting point for next experiment
5. Over time, converge toward truth
Instead, broken science asks: "Is p < 0.05? Yes? Publish. Done."
No priors, no updating, no convergence — just binary "significant" or "not."
Bayes' Theorem Calculator
Try it yourself: pick a prior P(H), a likelihood P(D|H), and a false positive rate P(D|¬H), and calculate the posterior P(H|D).
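A general-purpose version of the calculation, as a sketch in Python (the function and parameter names are my own):

```python
def bayes(prior: float, p_d_given_h: float, p_d_given_not_h: float) -> float:
    """P(H|D) = P(D|H)·P(H) / [P(D|H)·P(H) + P(D|¬H)·P(¬H)]."""
    evidence = p_d_given_h * prior + p_d_given_not_h * (1 - prior)
    return p_d_given_h * prior / evidence

# Example: the medical test above (1% prevalence, 90% sensitivity, 5% false positives)
print(f"{bayes(0.01, 0.90, 0.05):.1%}")  # ≈ 15.4%
```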