Optional Deep Dive

Why The World Is Blind

The system that keeps broken science in place — and why almost nobody sees it

The Iceberg: What You See vs What's Hidden

👁️ WHAT MOST PEOPLE SEE

"Scientists say..." • "Studies show..." • "Experts recommend..."

Trusted authorities giving us truth

🔴 THE P-VALUE RITUAL

p < 0.05 doesn't mean what people think • Wrong question being answered • P(D|H) ≠ P(H|D)

🔴 PUBLISH OR PERISH

Careers depend on publications • Positive results get published • Negative results buried

🔴 INDUSTRY FUNDING

Big Food, Big Pharma, Big Soda fund research • 4-8x more likely to favor funder • Conflicts of interest hidden

🔴 REGULATORY CAPTURE

FDA, AHA, ADA staffed by industry • Guidelines written by conflicted parties • Revolving door

🔴 PEER REVIEW THEATER

Unpaid volunteers • No data verification • Political gatekeeping • Easy to game

🔴 EDUCATION GAP

Scientists not taught philosophy of science • Don't understand their own statistics • Doctors get ~20 hours of nutrition training

🔴 MEDIA AMPLIFICATION

Sensational headlines sell • No understanding of statistics • "New study says!" without context

🔴 PARADIGM LOCK-IN

Careers built on old paradigms • Grants tied to consensus • Challenging orthodoxy = career suicide

You've been seeing the tip. Now let's go beneath the surface.

How We Got Here: A Brief History

1920s-1930s

Ronald Fisher develops significance testing, creating p-values, null hypothesis testing, and the arbitrary 0.05 threshold. He meant it as a rough guide, not a rigid rule. (Decades later, Fisher also took tobacco industry money and disputed the smoking-cancer link.)

1934

Karl Popper publishes "Logik der Forschung" (translated in 1959 as "The Logic of Scientific Discovery"). Promotes falsificationism: science can only disprove, never confirm. Hugely influential. Creates philosophical justification for ignoring P(H|D).

1950s

Frequentist statistics becomes institutionalized. Universities adopt it, textbooks teach it, journals require it. Becomes "the way science is done." Bayesian approaches marginalized as "subjective."

1960s

Sugar industry pays Harvard scientists to publish review blaming fat for heart disease, exonerating sugar. Shapes dietary guidelines for 50 years. Not revealed until 2016.

1977

First US Dietary Guidelines recommend low-fat diet. Based on flawed science. Mark Hegsted, who helped draft them, was paid by sugar industry. Obesity epidemic begins.

1980s-1990s

"Publish or perish" intensifies. Academic careers become dependent on publication count and journal impact factors. Creates massive pressure to produce "significant" results.

2005

John Ioannidis publishes "Why Most Published Research Findings Are False." Becomes one of the most-read and most-cited papers in medicine. Describes exactly how broken the system is. Nothing changes.

2011-2012

Replication crisis begins. Bayer, Amgen try to replicate key studies — 75-89% fail. Psychology replication project shows 64% failure. The crisis is now undeniable.

2015-Present

Slow awakening. Some journals change policies. Some researchers speak out. But the fundamental structure remains unchanged. Too many careers and too much money tied to the old system.

The key insight: This wasn't a conspiracy — it was a series of well-intentioned decisions that created terrible incentives. Fisher didn't mean for p < 0.05 to become a ritual. Popper didn't want to break science. But the structures they created were captured and corrupted over decades.

The Incentives: Why Everyone Plays Along

The system persists because everyone's incentives are aligned to keep it going — even if no one is intentionally corrupt.

👨‍🔬 RESEARCHERS

  • Need publications to keep jobs
  • Positive results publish easier
  • Grants require "significant" findings
  • Challenging consensus = career risk
  • Trained in p-values, not Bayesian thinking

📚 JOURNALS

  • Want exciting, novel findings
  • "No effect found" doesn't sell
  • Impact factor depends on citations
  • Sensational results get cited more
  • Replication studies are "boring"

🏫 UNIVERSITIES

  • Rankings based on publications
  • Grant money = prestige
  • Hire/promote based on publication count
  • Don't teach philosophy of science
  • Statistics taught as ritual, not understanding

💰 FUNDERS (NIH, Industry)

  • Want "discoveries" to justify budgets
  • Industry wants favorable results
  • Negative results = "waste of money"
  • No incentive to fund replication
  • Conflicts of interest poorly managed

📺 MEDIA

  • Sensational headlines get clicks
  • Journalists don't understand statistics
  • "New study says!" = easy content
  • Nuance doesn't sell
  • No follow-up when studies don't replicate

👥 THE PUBLIC

  • Trusts "science" and "experts"
  • Doesn't understand statistics
  • Wants simple answers
  • Doesn't see retractions
  • Can't evaluate primary sources

The Vicious Cycle

Researcher needs publication → p-hacks to p < 0.05 → journal publishes "finding" → media amplifies → public believes → guidelines change → researcher gets grant → 🔄 repeat

No one has to be evil. The researcher might genuinely believe their finding. The journal editor might think they're advancing science. The journalist might think they're informing the public. But the structure of incentives produces garbage regardless of intentions.
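The "p-hacks to p < 0.05" step is easy to demonstrate. The sketch below (illustrative Python, standard library only; the group size of 30 and the count of 20 outcomes are assumed numbers, not from any real study) runs studies on pure noise, tests many outcome measures per study, and declares success if any one of them is "significant":

```python
import math, random

random.seed(0)

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means, as a z-test with
    known unit variance (a simplification; real analyses use a t-test)."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

def finds_something(n_outcomes, n_per_group=30):
    """One 'study' comparing two groups of pure noise on n_outcomes
    measures, declaring victory if ANY outcome reaches p < 0.05."""
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        if two_sample_p(a, b) < 0.05:
            return True  # a "significant" finding, from noise alone
    return False

trials = 1000
hit_rate = sum(finds_something(20) for _ in range(trials)) / trials
print(f"{hit_rate:.0%} of null studies 'find' an effect")  # expect ~64% (1 - 0.95**20)
```

With 20 chances at an 0.05 threshold, roughly two in three studies of pure noise produce a publishable "finding" without anyone consciously cheating.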

Follow the Money

Industry funding creates systematic bias — not because everyone is corrupt, but because the system filters for favorable results.

  • 4-8×: industry-funded studies are more likely to favor their funder
  • $30B+: pharma spending on research annually
  • 96: health organizations found taking soda money

How Industry Funding Creates Bias

🔬 The Funding Filter

  1. Fund many studies on your product
  2. Some show positive results (by chance or design)
  3. Publish only favorable ones — "file drawer" problem
  4. Unfavorable studies never see light of day
  5. Published literature now biased toward your product
  6. Meta-analyses of published literature show "benefit"
  7. Guidelines committee cites meta-analyses
  8. Your product becomes "evidence-based"!
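The funding filter above can be simulated in a few lines. A minimal sketch (Python, standard library only; the zero true effect, the per-study standard error, and the study count are all assumed for illustration):

```python
import random, statistics

random.seed(1)

# Assumed numbers, purely for illustration: a product whose TRUE effect
# is zero, and many funded studies each producing a noisy estimate.
TRUE_EFFECT = 0.0
SE = 0.5            # standard error of each study's effect estimate
N_STUDIES = 1000

estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Steps 3-4 of the filter: only favorable, "significant" results are
# published (estimate more than ~1.645 standard errors above zero,
# i.e. one-sided p < 0.05); the rest go in the file drawer.
published = [e for e in estimates if e > 1.645 * SE]

print(f"mean of all {N_STUDIES} studies: {statistics.mean(estimates):+.3f}")      # near zero
print(f"mean of {len(published)} published studies: {statistics.mean(published):+.3f}")  # clearly positive
```

A meta-analysis that only sees the published list would report a solid "benefit" for a product whose true effect is exactly zero, which is step 6 of the filter.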

Real Examples

🥤 Big Soda

Coca-Cola → $1.5M+ → Global Energy Balance Network

Promoted message: "Exercise matters more than diet" — funded in secret until exposed in 2015

🍬 Sugar Industry

Sugar Research Foundation → ~$50K in today's dollars (paid 1967) → Harvard scientists

Result: Published review blaming FAT for heart disease. Shaped dietary policy for 50 years.

💊 Pharmaceutical

Pharma companies → $$$ → key opinion leaders → guidelines committees

Doctors who speak favorably get paid. Then they write treatment guidelines. Legal but corrupt.

🌾 Grain/Cereal Industry

Kellogg's, General Mills → research funding → breakfast research

"Breakfast is the most important meal!" — convenient for companies selling breakfast cereals.

The meta-problem: When you look for industry funding bias, who funds THAT research? The system is self-protective. Researchers who expose corruption don't get industry grants.

Institutional Capture: The Revolving Door

The organizations we trust to protect us are often staffed by — and funded by — the industries they're supposed to regulate.

The American Heart Association (AHA)

  • Receives millions from food industry
  • "Heart-Check" certification program — paid by companies
  • Certified products include sugary cereals, low-fat cookies
  • Pushed seed oils over saturated fat (Procter & Gamble was early funder)
  • Guidelines committee members have industry ties

The FDA

  • 75% of drug review budget comes from pharmaceutical companies (user fees)
  • Revolving door: FDA officials become pharma executives and vice versa
  • Accelerated approval pathways pushed by industry
  • Post-market safety monitoring is weak
  • "Regulatory capture" — agency serves industry it's supposed to regulate

The USDA / Dietary Guidelines

  • USDA's mission: Promote American agriculture AND provide nutrition advice
  • Inherent conflict of interest
  • Food pyramid/MyPlate designed with industry input
  • Guidelines committee members often have industry ties
  • Grain, dairy lobbies heavily influence recommendations

Academic Medical Centers

  • Depend on industry funding for research
  • Professors consult for pharma companies
  • "Key Opinion Leaders" paid to promote drugs
  • Ghost-written papers published under academic names
  • Curriculum influenced by industry relationships

"It is difficult to get a man to understand something when his salary depends upon his not understanding it." — Upton Sinclair

How Capture Works

  • Revolving door: industry executives become regulators, then return to industry with connections
  • Funding dependency: agencies depend on industry fees and don't bite the hand that feeds
  • Information asymmetry: industry has more data and more experts than its regulators
  • Lobbying: industry spends billions influencing policy directly
  • Career incentives: tough regulators don't get industry jobs later

The Reversals: Things "Science" Got Completely Wrong

These aren't fringe theories — these are things that were official recommendations, taught in medical schools, and followed by millions.

❌ "Dietary fat causes heart disease"

Pushed for 50 years. Now: Saturated fat link to heart disease not supported by evidence. 2020 meta-analysis: no association.

❌ "Cholesterol in food raises blood cholesterol"

Eggs demonized for decades. 2015: Dietary guidelines quietly dropped cholesterol limit. "Cholesterol is no longer a nutrient of concern."

❌ "Margarine is healthier than butter"

Promoted for decades. Trans fats in margarine now known to be far worse than butter ever was. Quietly reversed.

❌ "Hormone Replacement Therapy prevents heart disease"

Given to millions of women. 2002 WHI trial: Actually INCREASES heart disease, stroke, breast cancer. Stopped early.

❌ "Babies should sleep on their stomachs"

Official advice for decades. Reversed in 1990s — stomach sleeping increases SIDS. Thousands of preventable deaths.

❌ "Ulcers caused by stress and spicy food"

Dogma for decades. 1982: Barry Marshall and Robin Warren show that H. pylori bacteria cause ulcers. Ignored for years, finally accepted. Nobel Prize 2005.

❌ "Low-fat diets for weight loss"

Official advice since 1977. Obesity rate tripled. Now: Low-carb diets often more effective. Quietly being walked back.

❌ "Routine episiotomy during childbirth"

Standard practice for decades. Now known to cause more harm than benefit. Rates finally declining.

❌ "Strict bed rest for back pain"

Standard treatment for decades. Now: Movement is better. Bed rest makes it worse.

❌ "Arthroscopic surgery for knee osteoarthritis"

Millions of surgeries. 2002 RCT: No better than sham surgery. Still performed widely.

The pattern: Each of these was "evidence-based," peer-reviewed, recommended by experts, taught in medical schools. Each turned out to be wrong — sometimes deadly wrong. How many current recommendations will be reversed in 20 years?
The dangerous assumption: "But surely we've fixed these problems now. Modern science is better."

The same structures that created these errors still exist. The incentives haven't changed. The statistical methods haven't changed. Why would we expect different results?

Why People Don't See It

It's not stupidity. There are specific psychological and structural reasons why this remains invisible to most people.

1. Trust in Authority

We're taught from childhood: doctors know best, scientists are objective, experts can be trusted. Questioning them feels wrong — even dangerous. "Are you smarter than scientists?"

Reality: Scientists are humans with mortgages, careers, and biases. The system they operate in is broken. Trusting the institution ≠ trusting individual findings.

2. Complexity as Shield

Statistics is confusing. Most people can't evaluate primary research. So they defer to experts.

Reality: You don't need to understand every statistical method. You need to understand that P(D|H) ≠ P(H|D) and that incentives matter. That's enough to be appropriately skeptical.
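To see concretely why P(D|H) ≠ P(H|D) matters, here is a minimal Bayes' theorem sketch in Python. The round numbers (a 10% prior that a tested hypothesis is true, 80% power, alpha = 0.05) are assumptions for illustration, not measurements of any real field:

```python
def prob_true_given_significant(prior, power, alpha):
    """P(H|D): probability the hypothesis is true GIVEN a significant
    result -- the question people think a p-value answers. The p-value
    itself only speaks to P(D|H): the data given the hypothesis."""
    true_positives = prior * power          # true hypotheses reaching p < alpha
    false_positives = (1 - prior) * alpha   # false hypotheses that do too
    return true_positives / (true_positives + false_positives)

# With assumed round numbers (10% of tested hypotheses true, 80% power):
p = prob_true_given_significant(prior=0.10, power=0.80, alpha=0.05)
print(round(p, 2))  # 0.64: roughly 1 in 3 "significant" findings is false
```

Even with a "significant" result at p < 0.05, the hypothesis here is only 64% likely to be true, and the number falls further once p-hacking and publication bias shrink the effective power.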

3. No One Teaches This

  • Schools don't teach philosophy of science
  • Statistics courses teach formulas, not understanding
  • Medical schools: ~20 hours of nutrition in 4 years
  • Scientists often don't understand their own methods

Reality: This is fixable. You just learned more about statistical reasoning in an hour than most doctors learn in training.

4. It's Uncomfortable

If science is broken, what CAN you trust? It's easier to believe the system works than to accept uncertainty.

Reality: Not all science is broken. Physics, chemistry, engineering still work. First-principles thinking still works. You just need to know which fields have the problem.

5. Sunk Cost

People have followed low-fat diets for 30 years. Doctors have prescribed statins for decades. Admitting it was wrong means admitting wasted effort and potential harm caused.

Reality: The best time to change was 20 years ago. The second best time is now. Sunk costs are sunk.

6. "Conspiracy Theory" Framing

Questioning mainstream science sounds like anti-vax, flat earth, etc. People don't want to be grouped with cranks.

Reality: This isn't conspiracy theory — it's published, documented, acknowledged by scientists themselves. Ioannidis's paper on false research findings is among the most-cited papers in medicine. The replication crisis has been covered in Nature, Science, and JAMA.

7. The Firehose

A new "study says" every day. Impossible to evaluate each one. Easier to trust the system than to think critically about everything.

Reality: You don't need to evaluate everything. You need mental models: Who funded it? What's the mechanism? Does it replicate? Is it first-principles or correlation?

"The greatest obstacle to discovery is not ignorance — it is the illusion of knowledge." — Daniel Boorstin

What Now? How to Navigate This

You can't fix the system alone. But you can protect yourself and make better decisions.

Mental Models to Adopt

1. Ask "What's the mechanism?"
Correlation without mechanism is weak evidence. If someone can't explain HOW something works at a biological level, be skeptical.
2. Ask "Who funded it?"
Industry-funded research is 4-8x more likely to favor the funder. Check the disclosures. Follow the money.
3. Ask "Does it replicate?"
Single studies mean almost nothing. Has this been reproduced by independent researchers? Meta-analyses of replicated findings carry weight.
4. Trust fields with accountability
Physics, chemistry, engineering have real-world tests. The bridge stands or falls. Nutrition, psychology don't have that. Weigh evidence accordingly.
5. Run your own N=1 experiments
Try things. Measure results. Your body is the ultimate test. If low-carb makes you feel better and improves your markers, that matters more than any study.
6. Look for skin in the game
Does the person recommending something bear consequences if they're wrong? Advisors without skin in the game have different incentives than you do.
7. Prefer old wisdom that survived
Humans survived for millennia without seed oils and refined carbs. Ancestral patterns have been tested by time. "New discovery" has not.

What You Now Understand That Most Don't

Concept | What Most Think | What You Now Know
P-values | Measure probability finding is true | Wrong question; P(D|H) ≠ P(H|D)
Peer review | Rigorous verification | Unpaid volunteers, no data check
"Studies show" | Reliable evidence | 50-90% don't replicate
Expert consensus | Reliable truth | Often wrong, slow to change
Dietary guidelines | Based on solid science | Industry-influenced, many reversals
Medical research | Objective, unbiased | Funding bias, publication bias

The Positive Reframe

You're not helpless.

Yes, the system is broken. But:

• First-principles reasoning still works
• Biochemistry and physics still replicate
• Your own experiments on your own body still work
• You can evaluate evidence better than most doctors
• You understand why bad advice persists

You've escaped the Matrix. Most people never do.

"Science is successful prediction, nothing more. If your model can't predict, it's not science — it's just peer-reviewed opinion." — Greg Glassman

Now you see what most people don't. Use it wisely. Help others see it too. But remember: most people aren't ready, and that's okay. Focus on your own health, your own understanding, your own experiments. The truth will spread slowly, one person at a time.