
Two Perspectives on Intelligence

From Certainty to Uncertainty: The Intellectual Journey of AI

"Humanity has long sought to formalize intelligence.

The first school believed that reasoning is a form of logic — crisp, complete, and unambiguous.

The second school recognized that the world is messy, knowledge is incomplete, and reasoning must therefore be probabilistic.

These two visions define the twin pillars of AI:

Formal logic (certainty → deduction → truth)
Probabilistic reasoning (uncertainty → belief → confidence)

In this part of the course, we explore how machines can believe, learn, and decide even when they do not know."

The Intellectual Heritage

Throughout history, humans have struggled with two ways of knowing:

One Seeks Certainty

The clarity of logic, proof, and exact truth.

  • Deductive reasoning from axioms
  • Mathematical certainty
  • Binary truth values (true/false)
  • Complete knowledge assumption

The Other Accepts Uncertainty

Decisions made from faith — not blind belief, but trust informed by evidence.

  • Inductive reasoning from observations
  • Degrees of belief (probabilities)
  • Continuous confidence values [0,1]
  • Partial knowledge acceptance

Artificial Intelligence Inherited These Two Traditions

The First Tradition:

Built machines that know — systems of logic, rules, and deduction.

The Second Tradition:

Builds machines that believe — systems that estimate, learn, and decide under uncertainty.

Key Insight

Both are intelligent, but in profoundly different ways. This lecture explores how the shift from absolute to partial knowledge represents a fundamental paradigm change in how we build intelligent systems.

The Analogy: Knowledge vs. Faith ↔ Logic vs. Uncertainty

Human Inquiry → Artificial Intelligence

Knowledge → Logic-based AI
The pursuit of absolute truth through reason, deduction, and formal systems maps onto AI that aims for certainty through symbolic reasoning, axioms, and proof.
Approaches: Propositional logic, first-order logic, expert systems.
Examples: Theorem provers, rule-based systems, planning systems.

Faith (Rational Belief) → Probabilistic AI
Acknowledgment of the limits of human knowledge; trust in what cannot be fully proven but can be believed on the basis of evidence or intuition. The AI counterpart accepts partial knowledge and acts rationally under uncertainty, guided by evidence and belief updating (Bayesian reasoning).
Approaches: Bayesian networks, machine learning, sensor fusion.
Examples: Probabilistic models, ML systems, autonomous agents.

Understanding "Faith" in This Context
❌ What Faith is NOT:
  • Blind belief without evidence
  • Irrational thinking
  • Ignoring facts
  • Wishful thinking
✅ What Faith IS in AI:
  • Rational acceptance of uncertainty
  • Starting with prior beliefs
  • Updating beliefs with evidence
  • Acting optimally despite incomplete info

In Bayesian Terms:

Faith is the prior — a belief you start with, which you revise as experience accumulates. "In a way, Bayesian inference is a formalization of faith refined by evidence."
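The idea of a prior revised by evidence can be sketched in a few lines. This is a minimal illustration (the coin scenario and the 0.8/0.5 likelihoods are invented for the example, not from the lecture): we start with a prior belief that a coin is biased toward heads and apply Bayes' rule after each flip.

```python
# A minimal sketch of Bayesian belief updating: a prior "faith"
# revised by evidence. Hypothesis H: "the coin is biased toward heads."

def update(prior, likelihood_h, likelihood_not_h):
    """Return P(H | evidence) via Bayes' rule."""
    numerator = likelihood_h * prior
    evidence = numerator + likelihood_not_h * (1 - prior)
    return numerator / evidence

belief = 0.5                       # prior: no strong conviction either way
for flip in ["H", "H", "T", "H"]:  # observed evidence
    if flip == "H":
        belief = update(belief, 0.8, 0.5)  # heads is likelier if biased
    else:
        belief = update(belief, 0.2, 0.5)  # tails is less likely if biased
    print(f"after {flip}: belief = {belief:.3f}")
```

Each observation nudges the belief up or down; no single flip settles the question, which is exactly the "faith refined by evidence" picture.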

Two Schools of Thought in Artificial Intelligence

Humanity has understood intelligence in two contrasting ways — as logical certainty and as reasoning under uncertainty. These two visions shaped the evolution of AI.

School 1: The Logicist / Symbolic School

"Intelligence is reasoning with certainty."

Philosophical Roots: Aristotle (logic), Descartes (rationalism), Leibniz (calculus ratiocinator), Boole (Boolean algebra), Russell & Whitehead (Principia Mathematica) — reason and truth through logic.

Core Belief: Knowledge is structured, formal, and complete; intelligence = correct deduction.

Representative AI Scholars:
  • John McCarthy: logic and Lisp; coined the term "AI"
  • Marvin Minsky: symbolic AI, frames
  • Herbert Simon & Allen Newell: problem solving, GPS
  • Robert Kowalski: logic programming, Prolog

Methods: Propositional logic, first-order logic, rule-based systems, theorem proving, automated planning, expert systems.

Modern Descendants: Knowledge graphs, semantic web, automated reasoning, SAT/SMT solvers, formal verification, planning systems.

School 2: The Probabilistic / Uncertainty-Based School

"Intelligence is reasoning under uncertainty."

Philosophical Roots: Hume (induction), Bayes (probabilistic inference), Laplace (probability theory), de Finetti (subjective probability) — knowledge as belief updated by evidence.

Core Belief: Knowledge is partial; intelligence = rational action under uncertainty.

Representative AI Scholars:
  • Judea Pearl: Bayesian networks, causality
  • David Heckerman: probabilistic expert systems
  • Sebastian Thrun: probabilistic robotics
  • Stuart Russell: rational agents, modern AI
  • Michael Jordan: probabilistic ML, graphical models
  • Daphne Koller: probabilistic graphical models

Methods: Probability theory, Bayesian inference, Markov models (HMMs, MDPs), decision theory, reinforcement learning, machine learning.

Modern Descendants: Machine learning, deep learning, Bayesian networks, probabilistic programming, autonomous systems, robotics, computer vision, NLP.

The Intellectual Continuum

Deductive Reasoning (Certainty)
  Description: Truth is derived from axioms; a conclusion is valid if it follows by the rules of logic.
  AI analogue: Propositional / first-order logic; theorem proving, expert systems.

Inductive Reasoning (Evidence & Belief)
  Description: Truth emerges from experience and observation; belief is updated with new evidence.
  AI analogue: Bayesian / probabilistic AI; belief networks, ML, sensor fusion.

Historical Evolution of AI Approaches

AI's journey from logic to probability mirrors humanity's intellectual evolution.

1950s-1960s: The Logic Era

Logic Theorist, GPS, Lisp

AI pioneers believed intelligence could be captured through symbolic manipulation and logical deduction. McCarthy, Minsky, and Newell/Simon built systems that proved theorems and solved puzzles using formal logic.

1970s-1980s: Expert Systems

MYCIN, DENDRAL, Rule-Based AI

Rule-based systems encoded human expertise using IF-THEN rules. However, they struggled with uncertainty, noisy data, and contradictory evidence. The certainty factors in MYCIN were an early attempt to handle uncertainty.
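MYCIN's certainty factors can be sketched with the classic combination rule for supporting evidence. This is a simplified illustration (the rule weights 0.6/0.4/0.3 are invented for the example; the full MYCIN scheme also handled disbelief via negative certainty factors):

```python
# Simplified sketch of MYCIN-style certainty factors, positive evidence
# only: independent supporting rules are combined so the result grows
# toward, but never exceeds, 1.

def combine(cf1, cf2):
    """Combine two positive certainty factors for the same hypothesis."""
    return cf1 + cf2 * (1 - cf1)

cf = 0.0
for rule_cf in [0.6, 0.4, 0.3]:  # three rules, each partially supporting
    cf = combine(cf, rule_cf)
print(f"combined certainty: {cf:.2f}")
```

The combination rule is ad hoc rather than derived from probability theory, which is one reason certainty factors were eventually displaced by Bayesian methods.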

1980s-1990s: Probabilistic Revolution

Bayesian Networks (Judea Pearl, 1988)

Recognition that real-world AI must handle uncertainty, incomplete information, and noisy data. Pearl's Bayesian networks provided a principled framework for reasoning under uncertainty. This marked a fundamental paradigm shift.

2000s-Present: Machine Learning Era

Deep Learning, Probabilistic Programming

Modern AI is fundamentally probabilistic. Neural networks learn probability distributions, reinforcement learning handles stochastic environments, and autonomous systems use probabilistic models for perception and decision-making.

The Trend is Clear

AI has progressively moved from deterministic logic to probabilistic reasoning. This isn't because logic failed — it's because the real world is inherently uncertain, and intelligent systems must embrace that uncertainty.

Unified View: Modern AI

Symbolic AI pursues truth.

Probabilistic AI pursues confidence.

Modern AI (Russell & Norvig's "rational agent" view) seeks to combine both: systems that reason logically when they can, and act probabilistically when they must.

Complementary, Not Competing
When Logic Excels:
  • Complete information available
  • Deterministic environments
  • Formal verification needed
  • Guaranteed correctness required

Example: Software verification, planning with known models

When Probability Excels:
  • Incomplete information
  • Noisy sensors
  • Stochastic environments
  • Learning from data

Example: Robotics, vision, NLP, medical diagnosis
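The probabilistic side of this complementarity can be sketched as a tiny decision-theoretic agent: when the state is uncertain, it picks the action with the highest expected utility. The umbrella scenario, payoffs, and forecast probabilities below are invented for illustration:

```python
# Minimal sketch of rational action under uncertainty: choose the
# action maximizing expected utility, EU(a) = sum over states of
# P(state) * U(action, state).

def expected_utility(action, beliefs, utility):
    """Expected utility of an action given a belief over states."""
    return sum(p * utility[(action, s)] for s, p in beliefs.items())

utility = {                         # hypothetical payoffs
    ("carry_umbrella", "rain"): 5,   ("carry_umbrella", "sun"): -1,
    ("leave_it",       "rain"): -10, ("leave_it",       "sun"): 2,
}
beliefs = {"rain": 0.3, "sun": 0.7}  # uncertain forecast

best = max(["carry_umbrella", "leave_it"],
           key=lambda a: expected_utility(a, beliefs, utility))
print(best)  # → carry_umbrella
```

Even though sun is more likely, the large penalty for being caught in the rain makes carrying the umbrella the rational choice; logic alone, lacking degrees of belief, cannot express this trade-off.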

The Journey: From Knowledge to Belief

"Artificial Intelligence began by trying to replicate the human ability to know — to deduce truths and prove theorems.

But the world taught us that intelligence isn't about perfect knowledge.

It's about navigating the unknown.

The greatest intelligence — human or artificial — is the ability to act rationally even when certainty is impossible.

That is the science of uncertainty — and the wisdom of faith."

The debate between logic and probability isn't just philosophical — it reveals fundamental mathematical limits on what AI can achieve.

"The pursuit of artificial general intelligence (AGI) is haunted by Gödel's ghost. His incompleteness theorems suggest that absolute AGI — perfectly general, always correct, and fully explainable — is a logical impossibility."

Gödel's Incompleteness Theorems (1931)

Kurt Gödel revolutionized mathematical logic by proving that formal systems have fundamental limits:

First Incompleteness Theorem

Any consistent formal system powerful enough to express arithmetic cannot prove all true statements within itself.

In other words: If a system is consistent (no contradictions), it must be incomplete (some truths are unprovable).

Second Incompleteness Theorem

A consistent formal system cannot prove its own consistency.

In other words: No system can certify its own reliability from within; trust in the system must ultimately come from outside it.
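Stated in the standard formal shape (where T is any consistent, effectively axiomatizable theory interpreting basic arithmetic, and Con(T) is the arithmetized statement "T is consistent"):

```latex
% First theorem: some sentence G_T is neither provable nor refutable in T.
\exists\, G_T:\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T

% Second theorem: T cannot prove its own consistency statement.
T \nvdash \mathrm{Con}(T)
```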

The Fundamental Trade-off

You cannot have both:
  • A system that is complete (proves every true statement in its language)
  • AND consistent (never derives a contradiction)

The Gödelian Limits of AGI

If we view AGI as a universal reasoning system, Gödel's theorems impose fundamental constraints:

Completeness (can solve all intellectual problems)
  Gödel's constraint: cannot be complete AND consistent.
  Implication: an AGI striving for universal capability may encounter undecidable problems or contradictions.

Soundness (never produces errors)
  Gödel's constraint: a complete system must be inconsistent.
  Implication: perfect reliability across all domains is impossible.

Self-verification (can prove its own correctness)
  Gödel's constraint: a consistent system cannot prove its own consistency.
  Implication: an AGI cannot fully verify its own reliability.
The Gödelian Dilemma for AGI

An AGI cannot simultaneously be:
  • Universally capable (complete)
  • Always correct (sound/consistent)
  • Self-verifiable (can prove its own correctness)

From Determinism to Uncertainty: The Modern AI Response
Early AI: Deterministic Logic

Approach: Rule-based expert systems

  • ✅ Strengths: Precise, explainable, verifiable
  • ❌ Weaknesses: Brittle, narrow domains, can't handle ambiguity

Examples: MYCIN, DENDRAL

Modern AI: Uncertainty-Based Models

Approach: Deep learning, probabilistic models

  • ✅ Strengths: Generalizes, handles noise, learns from data
  • ❌ Weaknesses: Uninterpretable, "black box", hard to verify

Examples: Neural networks, Bayesian systems

The Trade-off

Modern AI traded interpretability for generalization. We gained the ability to handle uncertainty and learn from data, but lost the ability to formally verify correctness. This mirrors Gödel's trade-off: you can't have completeness AND consistency.

The Black Box Problem

Deep neural networks are powerful but opaque. We cannot always understand why they make certain decisions:

System Type             Verifiability   Generality   Interpretability
Formal logic systems    High            Low          High
Deep neural networks    Low             High         Low
Hybrid systems          Medium          Medium       Medium
The Parallel to Gödel

Just as we cannot prove all truths within a formal system, we cannot fully verify or explain every behavior of a neural network. The "black box" nature of modern AI makes proving soundness or completeness even harder than it is for formal systems.

Practical AGI: Achievable Under Constraints

While absolute AGI may be logically impossible, practical AGI remains feasible if we accept certain constraints:

1. Domain Constraints

AGI could excel in specific domains where completeness and soundness are manageable, rather than aiming for universal generality.

2. Approximate Correctness

Probabilistic judgments that are "good enough" may suffice, trading guaranteed truth for practical reliability.

3. Hybrid Approaches

Combining interpretable symbolic reasoning with probabilistic models balances generality and reliability.

The Fundamental Lesson

Gödel showed that absolute truth systems are impossible — no system can be both complete and consistent.

Similarly, absolute AGI is a logical impossibility — no AI can be universally capable, always correct, and fully explainable.

This is why modern AI embraces uncertainty: not as a compromise, but as the only rational path forward.

Probabilistic reasoning acknowledges the limits Gödel revealed — and builds intelligence within those constraints.

References for Further Reading
  • Gödel, K. (1931). "On Formally Undecidable Propositions of Principia Mathematica and Related Systems"
  • Russell & Norvig (2010). "Artificial Intelligence: A Modern Approach" - Chapter 12
  • Goodfellow et al. (2016). "Deep Learning" - On neural network limitations
  • Garcez et al. (2015). "Neural-Symbolic Learning Systems" - Hybrid approaches

Consider these questions as you continue through this lecture:

💭 Questions to Ponder
Question 1

Can we ever know the world completely? Or must intelligence, human or artificial, always live with uncertainty? What are the implications of each view?

Question 2

In what sense does building an AI that reasons probabilistically reflect the human experience of faith? Is this analogy helpful or misleading?

Question 3

Is belief in an uncertain world a weakness — or a deeper form of intelligence? Consider: Does embracing uncertainty make AI more or less human-like?

Question 4

How does your own decision-making combine logic and uncertainty? Think of a recent decision you made without complete information. What role did "belief" play?

Key Takeaways

Understanding the Paradigm Shift
  • AI has two intellectual traditions: logic-based and probabilistic
  • Neither is "better" — they address different types of problems
  • Modern AI increasingly embraces uncertainty
  • The shift mirrors humanity's intellectual evolution
The Nature of Intelligence
  • Intelligence isn't just about knowing — it's about acting rationally under uncertainty
  • Probabilistic reasoning is more robust in real-world scenarios
  • Reasoning with incomplete information is sophisticated intelligence
  • Uncertainty is a feature, not a bug
Next: Learn why formal logic alone fails in real-world domains and why AI needs probability theory.