Curriculum Module 1 of 8

A Tale of Two Sciences

Greg Glassman's Framework: The Philosophical Split That Broke Science

The Framework: Six Concepts That Split Science in Two

Greg Glassman identifies six paired concepts where a fundamental philosophical fork in the road created two completely different approaches to science:

Each choice leads down a different path. The first set of choices (🔴, the deductivist path) created broken science. The second set (🟢, the inductivist path) leads to science that works.

🔴 The Deductivist Path (Broken)

  • Pursuit of certainty
  • Deductive logic (certain conclusions)
  • Falsification as demarcation
  • Focus on P(D|H) — p-values
  • Frequentist statistics
  • Ontological probability

Champions: Karl Popper, Ronald Fisher

Result: Science that won't replicate

🟢 The Inductivist Path (Works)

  • Management of uncertainty
  • Inductive logic (probable conclusions)
  • Predictive strength as demarcation
  • Focus on P(H|D) — what we want to know
  • Bayesian reasoning
  • Epistemic probability

Champions: E.T. Jaynes, Bayesians

Result: Science that predicts and replicates

The Fork in the Road 🔀

Deductive path: Certainty → Falsification → P(D|H) → Broken Science
Inductive path: Uncertainty → Prediction → P(H|D) → Science that Works

1. Psychology: Certainty vs Uncertainty

The first split is psychological — it's about how humans relate to not knowing things.

"Few would doubt the potential of uncertainty for psychological stress or that this stress would naturally vary from person to person for any given situation. What is likely far less appreciated is the differences in approach and outcome in the pursuit of certainty versus the management of uncertainty, and the enormity of the philosophical and historical consequences." — Greg Glassman

🔴 Pursuit of Certainty

Some people find uncertainty intolerable. They want to KNOW — with certainty — what is true and false.

  • Seeks definitive answers
  • Binary: true or false, proven or disproven
  • Uncomfortable with "probably" or "likely"
  • Wants to eliminate doubt

This mindset gravitates toward deductive logic — where conclusions follow with certainty from premises.

🟢 Management of Uncertainty

Others accept that uncertainty is fundamental to knowledge. The goal isn't to eliminate it, but to quantify and manage it.

  • Accepts probabilistic answers
  • Comfortable with degrees of confidence
  • "More likely" and "less likely" are meaningful
  • Updates beliefs with new evidence

This mindset gravitates toward inductive logic — where conclusions follow with probability from premises.

The psychological bias toward certainty drives people toward logical and statistical frameworks that promise certainty — even when those frameworks are inappropriate for the actual problem. This is how the wrong philosophy of science gained dominance.

2. Logic: Deduction vs Induction

The second split is about the type of logic we use to reason about the world.

"For those with the strongest yearning for certainty, deductive logic provides hope and comfort, as its conclusions come from its premises with certainty. For those keen to the fruits and hope of managing uncertainty, inductive logic holds promise and appeal, as its conclusions come from its premises with probability rather than certainty." — Greg Glassman

🔴 Deductive Logic

Conclusions follow CERTAINLY from premises

Classic Example:
Premise 1: All men are mortal
Premise 2: Socrates is a man
∴ Conclusion: Socrates is mortal (100% certain)

If the premises are true, the conclusion must be true. No probability involved.

Problem: Most real-world knowledge doesn't work this way. We rarely have premises we know with certainty.

🟢 Inductive Logic

Conclusions follow PROBABLY from premises

Classic Example:
Premise: The sun has risen every day for recorded history
∴ Conclusion: The sun will probably rise tomorrow (very high probability, but not 100%)

Evidence increases or decreases the probability of a conclusion. Conclusions are never 100% certain.

Strength: This is how real knowledge actually works. We update probabilities based on evidence.
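The sunrise example can be made quantitative with Laplace's classical rule of succession, which gives the inductive probability of one more success after a long run of successes: high, but never certain. A minimal sketch (the 10,000-sunrise figure is illustrative, not from the original text):

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule: P(next success) after observing `successes`
    out of `trials`, starting from a uniform prior."""
    return Fraction(successes + 1, trials + 2)

# After 10,000 consecutive sunrises, the inductive conclusion is
# "very probable," yet never exactly 1: probability, not certainty.
p = rule_of_succession(10_000, 10_000)
print(float(p))  # 10001/10002, just under 1.0
```

Note that with zero observations the rule returns 1/2: total ignorance, which is exactly the inductivist's starting point before evidence arrives.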

"The deductive-inductive fork in the road is the departure point for two competing philosophies of science that ultimately divide the practice of science, statistics, probability theory, and our views of the universe." — Greg Glassman

This is where it all starts. Choose deduction → you end up with falsificationism and p-values. Choose induction → you end up with Bayesian reasoning and predictive power.

3. Philosophy: Falsification vs Predictive Strength

The third split is about what makes something "science" — how do we demarcate science from non-science?

🔴 Falsificationism (Popper)

The deductivist's commitment obligates the use of modus tollens to deny scientific assertions.

Modus Tollens:
If H, then D
Not D
∴ Not H

"If the theory is true, we'd see X. We don't see X. Therefore the theory is false."

For deductivists, falsifiability became the hallmark of a scientific assertion — a line of demarcation of science from nonsense.

Scientific models by this approach cannot be logically confirmed, only refuted.

Problem: You can never "confirm" anything. You can only fail to disprove it. This leads to weird conclusions like "we have no positive evidence for anything."

🟢 Predictive Strength

Induction recognizes the predictive strength of models as the demarcation of science from non-science.

The Test:
Does your model make accurate predictions?

If yes → it's good science
If no → it's not good science

Simple. Direct. Practical.

A model is scientific to the degree that it successfully predicts future observations.

Models CAN be confirmed — each successful prediction increases our confidence in the model.

Strength: This is how engineering and physics actually work. Predictions either work or they don't. Reality is the judge.
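The claim that "each successful prediction increases our confidence in the model" can be made concrete with Bayes' rule. The likelihoods below (a true model predicts correctly 90% of the time, a false one gets lucky 50% of the time) are illustrative assumptions, not figures from the original:

```python
def update(prior: float, p_pred_given_h: float, p_pred_given_not_h: float) -> float:
    """One Bayesian update of P(H) after a successful prediction."""
    num = p_pred_given_h * prior
    return num / (num + p_pred_given_not_h * (1 - prior))

# Illustrative numbers: correct prediction has probability 0.9 if the
# model is true, 0.5 by luck if it is false; start agnostic at 0.5.
belief = 0.5
for i in range(1, 6):
    belief = update(belief, 0.9, 0.5)
    print(f"after {i} successful predictions: P(H|D) = {belief:.3f}")
```

Five successful predictions in a row push the posterior from 0.5 to roughly 0.95: confirmation is gradual, not binary.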

"Two very different sciences have emerged at this departure." — Greg Glassman

The irony: Even recognizing that there's a "replication crisis" presupposes that science SHOULD replicate — which is an implicit admission that predictive power (not just falsifiability) is what validates science. If falsification were enough, why would we care if studies replicate?

4. Probability: P(D|H) vs P(H|D)

This is the most critical distinction — and the one most people don't understand. There are two completely different probabilities, and confusing them is the heart of broken science.

P(D|H) — "The probability of the DATA, given the HYPOTHESIS"
If my hypothesis is true, how likely is this data?

P(H|D) — "The probability of the HYPOTHESIS, given the DATA"
Given this data, how likely is my hypothesis to be true?

THE CRITICAL CONFUSION:

P(D|H) is what p-values give you.
"If there's no effect, there's only a 5% chance of seeing data this extreme."

P(H|D) is what you actually want to know.
"Given this data, what's the probability my hypothesis is true?"

THESE ARE NOT THE SAME THING!

Broken science uses P(D|H) and pretends it's telling you P(H|D).
"Deductivists (Karl Popper and Ronald Fisher for example) denied a proper role in objective science for the P(H|D). For Fisher the notion would introduce subjectivity and Popper rejected it because it was inductive. For the inductivists the P(H|D) is the sole measure of scientific validation." — Greg Glassman

A Concrete Example

The Courtroom Analogy:

P(D|H) = P(Evidence | Innocent)
"If the defendant is innocent, what's the probability we'd see this evidence?"

P(H|D) = P(Innocent | Evidence)
"Given this evidence, what's the probability the defendant is innocent?"

These are completely different questions!

A prosecutor might say: "If innocent, there's only a 1% chance of having the victim's blood on your shirt."
But that doesn't mean: "There's a 99% chance you're guilty."

Suppose 1% of innocent people would have the victim's blood on their shirt for unrelated reasons, while 99% of guilty people would. Whether the defendant is probably guilty still depends on the prior: if only 1 person in 10,000 in the suspect pool is guilty, then almost everyone with blood on their shirt is innocent. The math is totally different.
"The inductivists use the P(D|H) to derive the P(H|D) by way of Bayes' Theorem from which they've drawn the term 'Bayesians.' Bayesians typically refer to the P(H|D) as 'the thing you really want to know.'" — Greg Glassman
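The derivation the quote describes, turning P(D|H) into P(H|D) via Bayes' Theorem, can be sketched with the courtroom numbers. The prior (1 guilty person per 10,000 in the suspect pool) is an assumed, illustrative figure:

```python
def posterior_innocent(p_evidence_given_innocent: float,
                       p_evidence_given_guilty: float,
                       prior_innocent: float) -> float:
    """P(Innocent | Evidence) computed via Bayes' Theorem."""
    num = p_evidence_given_innocent * prior_innocent
    denom = num + p_evidence_given_guilty * (1 - prior_innocent)
    return num / denom

# Assumed, illustrative numbers: P(blood | innocent) = 0.01,
# P(blood | guilty) = 0.99, and a prior of 1 guilty per 10,000.
p = posterior_innocent(0.01, 0.99, prior_innocent=0.9999)
print(f"P(Innocent | Evidence) = {p:.3f}")
# ~0.99: despite the damning-sounding "only 1% chance if innocent,"
# the defendant is still almost certainly innocent.
```

This is the prosecutor's fallacy in four lines: a small P(D|H) coexisting with a large P(H|D), because the prior dominates.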

5. Statistics: Frequentist vs Bayesian

The split in logic and probability leads to two completely different statistical frameworks.

🔴 Frequentist Statistics

Where nothing less than absolute certainty is standard, and deductive logic the only logic, P(D|H) is the only probability recognized.

  • Probability = long-run frequency of events
  • Only "physical" probabilities exist (dice, coins, radioactive decay)
  • The hypothesis is either true or false — not probabilistic
  • Uses p-values: P(data this extreme | null hypothesis true)
  • Tool: modus tollens — if p < 0.05, "reject null"

The frequentist believes:
"Physical" probabilities are real — the probability of a die showing 6 exists "out there" in the die itself. But the probability that a hypothesis is true? That doesn't make sense to them — the hypothesis either IS true or ISN'T.

🟢 Bayesian Statistics

For Bayesians, probability is the rational measure of our uncertainty — it exists in minds, not in objects.

  • Probability = degree of belief/confidence
  • Can assign probability to ANY proposition
  • Hypotheses have probabilities — our confidence in them
  • Uses P(H|D): probability of hypothesis given data
  • Tool: Bayes' Theorem — update beliefs with evidence

The Bayesian believes:
Probability is "epistemic" — it's about what we KNOW, not about physical systems. We can absolutely ask "what's the probability this hypothesis is true?" and update that probability as evidence comes in.
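Updating a degree of belief as evidence comes in can be sketched with a textbook Beta-Binomial example (a coin of unknown bias; the 7-heads/3-tails data is invented for illustration, not from the original text):

```python
# Degree-of-belief updating for a coin of unknown bias, using a
# Beta(a, b) prior (the conjugate prior for binomial data).
def beta_update(a: float, b: float, heads: int, tails: int):
    """Posterior Beta parameters after observing heads and tails."""
    return a + heads, b + tails

a, b = 1, 1                 # Beta(1,1): uniform prior, total ignorance
a, b = beta_update(a, b, heads=7, tails=3)
mean = a / (a + b)          # posterior mean = updated degree of belief
print(f"P(heads) after seeing 7H/3T: {mean:.3f}")  # 8/12 ≈ 0.667
```

The hypothesis "this coin is biased toward heads" gets a probability, one that shifts smoothly with every observation, which is exactly what the frequentist framework refuses to provide.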

Why Frequentist Statistics Broke Science

The p-value ritual:
  1. Assume null hypothesis (no effect) is true
  2. Calculate P(data this extreme | null true)
  3. If p < 0.05, declare "significant" and reject null
  4. Publish paper claiming you found something

Problems:

  • p < 0.05 is arbitrary — why not 0.01 or 0.10?
  • p-value doesn't tell you probability hypothesis is true
  • Easy to "p-hack" — try multiple analyses until one works
  • Doesn't account for prior probability or effect size
  • Encourages binary thinking (significant/not) instead of gradations
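The "try multiple analyses" problem above can be simulated directly: under the null hypothesis each test's p-value is uniform on [0, 1], so 20 independent looks at pure noise produce at least one p &lt; 0.05 far more than 5% of the time. A minimal sketch (the choice of 20 analyses is illustrative):

```python
import random
random.seed(0)

# Simulate "p-hacking": run k independent tests on pure-noise data
# and count it as a "finding" if ANY test reaches p < 0.05.
def any_significant(k: int) -> bool:
    return any(random.random() < 0.05 for _ in range(k))

trials = 100_000
false_positive_rate = sum(any_significant(20) for _ in range(trials)) / trials
# Analytically: 1 - 0.95**20 ≈ 0.64, i.e. about a 64% chance of a
# publishable "result" from noise alone.
print(f"false positive rate with 20 analyses: {false_positive_rate:.2f}")
```

The nominal 5% error rate is a property of one pre-registered test, not of a fishing expedition, which is why the ritual breaks down in practice.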
"Frequentists believe in 'physical' probabilities — probabilities associated with random physical systems such as roulette wheels, rolling dice, and radioactive atoms. In such a system a given type of event tends to occur at a persistent rate of relative frequency in long run trials." — Greg Glassman
"The Bayesian view of probability holds that probability is the rational measure of our uncertainty. The Bayesian view of probability is called 'epistemic.'" — Greg Glassman

6. Physics: Ontological vs Epistemic Probability

The deepest level of the split is metaphysical — what IS probability, fundamentally?

🔴 Ontological Probability

Probability exists "out there" in the physical world — it's a property of objects and systems.

  • A die has a 1/6 probability of showing 6
  • A radioactive atom has a probability of decay
  • These probabilities are real physical facts
  • Probability exists independently of any observer

Champion: Einstein
"God does not play dice with the universe"

🟢 Epistemic Probability

Probability exists in minds — it's a measure of knowledge and uncertainty, not a physical property.

  • Probability = degree of rational belief
  • Can apply to any proposition, not just physical systems
  • Changes as we gain information
  • Different agents with different info can have different probabilities for the same event

Champion: Niels Bohr
Copenhagen interpretation of quantum mechanics

"The debate between ontological and epistemological probability has been ongoing for nearly two hundred years. Einstein on the side of 'ontological' probability debated Niels Bohr, an advocate of 'epistemological' probability. The Solvay Conference in Copenhagen in 1927 was an assemblage of preeminent physicists and chemists where this issue was argued and voted on. The issue is far from settled." — Greg Glassman
Why this matters for science:

If probability is ontological (physical), then you can only talk about probability for physical random systems — dice, coins, quantum events. You CAN'T meaningfully say "the probability that evolution is true is 99%."

If probability is epistemic (knowledge), then you CAN say "given all evidence, I'm 99% confident evolution is true." Probability becomes a tool for reasoning about ANY uncertain proposition.

The frequentist/ontological view crippled the ability to ask the questions we actually care about in science.

The Consequences: How This Broke Science

Now you can see the full chain of how the wrong philosophical choices led to broken science:

The Chain of Consequences:

1. Psychology: Desire for certainty → choose deduction

2. Logic: Deductive logic → conclusions must be certain

3. Philosophy: Falsification → can only disprove, never confirm

4. Probability: P(D|H) only → can't ask "how likely is H true?"

5. Statistics: Frequentist → p-values become the ritual

6. Practice: p < 0.05 = "significant" = publishable

Result: Science optimized for p-values, not prediction

Crisis: 50-90% of findings don't replicate

What Went Wrong in Practice

| What Should Happen | What Actually Happens |
|---|---|
| Make a model, test its predictions, update confidence | Collect data, p-hack until p < 0.05, publish |
| Ask: "What's the probability this model is correct?" | Ask: "Is p < 0.05?" (wrong question) |
| Replication is the test of truth | Publication is the measure of success |
| Failed predictions → revise or abandon the model | Inconvenient data → unpublished, ignored |
| Science = successful prediction | Science = consensus + credentials + peer review |

Fields Most Affected

Where the broken approach dominates:
  • Nutrition science — observational studies, confounders, industry funding
  • Psychology — small samples, p-hacking epidemic, ~64% failure to replicate
  • Sociology — political pressure, soft endpoints
  • Preclinical medicine — 75-89% failure to replicate
  • Economics — models that don't predict
Where science still works (predictive approach):
  • Physics — predictions must match reality to extreme precision
  • Chemistry — reactions either work or they don't
  • Engineering — bridges stand or fall, planes fly or crash
  • Drug development (late stage) — Phase 3 trials test if it actually works
  • Industry R&D — products must function
"The recognition of a crisis, in quotes, in non-replicating science presupposes that it should replicate, suggesting an unspoken, maybe subconscious admission of the primacy of prediction in validating scientific models." — Greg Glassman

Connecting This to Metabolism Education

Now you can see why the metabolic education we've done together is fundamentally different from conventional nutrition advice.

Conventional Nutrition = Broken Science

How nutrition "science" works:
  • Observational study: "People who eat X have more Y"
  • Get p < 0.05 → publish
  • Media reports: "Study shows X causes Y!"
  • Eventually becomes consensus, then guideline
  • Never tested by actual prediction
  • 50 years later: "Oops, turns out fat doesn't cause heart disease"

What We've Been Doing = Predictive Science

Our approach:
  • Start with biochemistry and physics (sciences that replicate)
  • Trace actual mechanisms: electrons → mitochondria → ROS
  • Predictions follow from mechanisms, not correlations
  • Each step can be tested: Does this pathway actually work this way?
  • Real-world validation: Does intervention actually improve metabolic markers?

Applying the P(H|D) Mindset

| Question | Broken Science Answer | First Principles Answer |
|---|---|---|
| Is saturated fat bad? | "Studies show correlation with heart disease" (p < 0.05) | Trace the actual mechanism: does saturated fat cause membrane damage? Does it increase ROS? What's the metabolic pathway? |
| Should I eat breakfast? | "Studies show breakfast eaters are healthier" (confounded) | What does fasting do to insulin? To AMPK? To fat oxidation? What's the mechanism? |
| Are seed oils healthy? | "AHA recommends them" (industry-funded consensus) | What happens when PUFA integrates into membranes? What's the oxidation potential? What's the lipid peroxidation cascade? |

The Ultimate Test

Bayesian question we should ask:

"Given all the evidence — biochemistry, physiology, intervention trials, ancestral patterns, clinical experience — what's the probability that [intervention X] improves metabolic health?"

NOT: "Did some study get p < 0.05?"

This is P(H|D) thinking — the thing you really want to know.

Why Our Education Replicates

Everything we've covered is built on physics, chemistry, and biochemistry.

These are the sciences that do replicate, because they're based on mechanisms that work the same way in every human body. The ETC doesn't care about p-values — it just transfers electrons according to physics.

"Science is successful prediction, nothing more. If your model can't predict, it's not science — it's just peer-reviewed opinion." — Greg Glassman

Now you understand not just WHAT we've learned about metabolism, but WHY you can trust it — it's built on the science that replicates, not the science that broke.