From Certainty to Uncertainty: The Intellectual Journey of AI
"Humanity has long sought to formalize intelligence.
The first school believed that reasoning is a form of logic — crisp, complete, and unambiguous.
The second school recognized that the world is messy, knowledge is incomplete, and reasoning must therefore be probabilistic.
These two visions define the twin pillars of AI:
Formal logic (certainty → deduction → truth)
Probabilistic reasoning (uncertainty → belief → confidence)
In this part of the course, we explore how machines can believe, learn, and decide even when they do not know."
Throughout history, humans have struggled with two ways of knowing:
The clarity of logic, proof, and exact truth.
Decisions made from faith — not blind belief, but trust informed by evidence.
The First Tradition:
Built machines that know — systems of logic, rules, and deduction.
The Second Tradition:
Builds machines that believe — systems that estimate, learn, and decide under uncertainty.
Both are intelligent, but in profoundly different ways. This lecture explores how the shift from absolute to partial knowledge represents a fundamental paradigm change in how we build intelligent systems.
| Human Inquiry | Artificial Intelligence |
|---|---|
| Knowledge: the pursuit of absolute truth through reason, deduction, and formal systems. | Logic-based AI: aims for certainty through symbolic reasoning, axioms, and proof.<br>Propositional logic, first-order logic, expert systems.<br>Examples: theorem provers, rule-based systems, planning systems. |
| Faith (rational belief): acknowledging the limits of human knowledge; trust in what cannot be fully proven but can be believed on the basis of evidence or intuition. | Probabilistic AI: accepts partial knowledge; acts rationally under uncertainty, guided by evidence and belief updating (Bayesian reasoning).<br>Bayesian networks, machine learning, sensor fusion.<br>Examples: probabilistic models, ML systems, autonomous agents. |
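The logic-based side of the table can be made concrete with a minimal sketch of the kind of forward-chaining deduction these systems perform. The rules and facts below are invented for illustration, not taken from any real expert system:

```python
# Minimal forward-chaining rule engine: deduce new facts from IF-THEN rules.
# Rules and facts are illustrative inventions, not a real knowledge base.

rules = [
    ({"fever", "cough"}, "flu_suspected"),            # IF fever AND cough THEN flu_suspected
    ({"flu_suspected", "fatigue"}, "rest_advised"),   # IF flu_suspected AND fatigue THEN rest_advised
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "fatigue"}, rules))
# Both rules fire in sequence: flu_suspected, then rest_advised.
```

Note the defining property of this style: conclusions follow with certainty from the premises, and nothing in the machinery can express "probably flu."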
In Bayesian Terms:
Faith is the prior — a belief you start with, which you revise as experience accumulates. "In a way, Bayesian inference is a formalization of faith refined by evidence."
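The prior-refined-by-evidence idea can be sketched numerically. A minimal example, with an invented disease prevalence and test accuracy:

```python
# Bayesian update: a prior belief revised by one piece of evidence.
# All numbers are invented for illustration (hypothetical disease and test).

prior = 0.01            # P(disease): belief before any evidence
sensitivity = 0.95      # P(positive test | disease)
false_positive = 0.05   # P(positive test | no disease)

# Bayes' rule: P(disease | positive) = P(pos | d) * P(d) / P(pos)
evidence = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / evidence

print(f"prior: {prior:.3f} -> posterior: {posterior:.3f}")
# A positive test raises the belief from 1% to roughly 16%:
# strengthened, but still far from certainty.
```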
Humanity has understood intelligence in two contrasting ways — as logical certainty and as reasoning under uncertainty. These two visions shaped the evolution of AI.
"Intelligence is reasoning with certainty."
| Philosophical Roots | Aristotle (logic), Descartes (rationalism), Leibniz (calculus ratiocinator), Boole (Boolean algebra), Russell & Whitehead (Principia Mathematica) — reason and truth through logic. |
| Core Belief | Knowledge is structured, formal, and complete; intelligence = correct deduction. |
| Representative AI Scholars | John McCarthy (logic and Lisp; coined "AI"), Marvin Minsky (symbolic AI, frames), Herbert Simon & Allen Newell (problem solving, GPS), Robert Kowalski (logic programming, Prolog) |
| Methods | Propositional logic, first-order logic, rule-based systems, theorem proving, automated planning, expert systems. |
| Modern Descendants | Knowledge graphs, semantic web, automated reasoning, SAT/SMT solvers, formal verification, planning systems. |
"Intelligence is reasoning under uncertainty."
| Philosophical Roots | Hume (induction), Bayes (probabilistic inference), Laplace (probability theory), de Finetti (subjective probability) — knowledge as belief updated by evidence. |
| Core Belief | Knowledge is partial; intelligence = rational action under uncertainty. |
| Representative AI Scholars | Judea Pearl (Bayesian networks, causality), David Heckerman (probabilistic expert systems), Sebastian Thrun (probabilistic robotics), Stuart Russell (rational agents, modern AI), Michael Jordan (probabilistic ML, graphical models), Daphne Koller (probabilistic graphical models) |
| Methods | Probability theory, Bayesian inference, Markov models (HMMs, MDPs), decision theory, reinforcement learning, machine learning. |
| Modern Descendants | Machine learning, deep learning, Bayesian networks, probabilistic programming, autonomous systems, robotics, computer vision, NLP. |
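One of the listed methods, the hidden Markov model, fits in a few lines. A minimal sketch of the forward algorithm, with an invented weather/umbrella model and made-up probabilities:

```python
# Forward algorithm for a tiny hidden Markov model (two hidden weather states).
# All probabilities are invented for illustration.

states = ["rain", "sun"]
start = {"rain": 0.5, "sun": 0.5}
trans = {"rain": {"rain": 0.7, "sun": 0.3},      # P(next state | current state)
         "sun":  {"rain": 0.4, "sun": 0.6}}
emit  = {"rain": {"umbrella": 0.9, "none": 0.1},  # P(observation | state)
         "sun":  {"umbrella": 0.2, "none": 0.8}}

def forward(observations):
    """Return P(observations), summing over all hidden state sequences."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())

print(forward(["umbrella", "umbrella", "none"]))
```

The point of the sketch: the model never knows the weather; it maintains a probability over hidden states and updates it with each observation, which is exactly the belief-updating stance of this tradition.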
| Mode of Thinking | Description | AI Analogue |
|---|---|---|
| Deductive Reasoning (Certainty) | Truth is derived from axioms; a conclusion is valid if it follows by the rules of logic. | Propositional / first-order logic; theorem proving, expert systems |
| Inductive Reasoning (Evidence & Belief) | Truth emerges from experience and observation; belief is updated as new evidence arrives. | Bayesian / probabilistic AI; belief networks, ML, sensor fusion |
AI's journey from logic to probability mirrors humanity's intellectual evolution
Logic Theorist, GPS, Lisp
AI pioneers believed intelligence could be captured through symbolic manipulation and logical deduction. McCarthy, Minsky, and Newell/Simon built systems that proved theorems and solved puzzles using formal logic.
MYCIN, DENDRAL, Rule-Based AI
Rule-based systems encoded human expertise using IF-THEN rules. However, they struggled with uncertainty, noisy data, and contradictory evidence. The certainty factors in MYCIN were an early attempt to handle uncertainty.
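MYCIN's certainty factors combined evidence with a simple algebraic rule. A simplified sketch of the positive-evidence case (the CF values here are illustrative; full CFs range over [-1, 1] and have separate combination rules for negative and mixed evidence):

```python
# MYCIN-style certainty factors: a simplified sketch of combining two pieces
# of positive evidence for the same hypothesis (Shortliffe's rule).
# Only the positive case is handled; CF values shown are invented.

def combine_positive(cf1, cf2):
    """Combine two positive certainty factors: CF = CF1 + CF2 * (1 - CF1)."""
    return cf1 + cf2 * (1 - cf1)

# Two rules each lend partial support to the same diagnosis:
cf = combine_positive(0.6, 0.5)
print(cf)  # roughly 0.8: stronger than either rule alone, short of certainty
```

Note that this scheme is ad hoc: it is not derived from probability theory, which is precisely why it later gave way to principled Bayesian methods.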
Bayesian Networks (Judea Pearl, 1988)
Recognition that real-world AI must handle uncertainty, incomplete information, and noisy data. Pearl's Bayesian networks provided a principled framework for reasoning under uncertainty. This marked a fundamental paradigm shift.
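A minimal sketch of exact inference in such a network, using the familiar burglary-alarm structure (the conditional probabilities are standard illustrative textbook values, not real data):

```python
# Inference by enumeration in a miniature Bayesian network:
# Burglary and Earthquake are independent causes of Alarm.
# Numbers are the familiar illustrative textbook values, not real statistics.

P_b = 0.001                        # P(Burglary)
P_e = 0.002                        # P(Earthquake)
P_a = {(True, True): 0.95,         # P(Alarm | Burglary, Earthquake)
       (True, False): 0.94,
       (False, True): 0.29,
       (False, False): 0.001}

def p(var, value):
    """Prior probability of a root variable taking a truth value."""
    prob = {"B": P_b, "E": P_e}[var]
    return prob if value else 1 - prob

def posterior_burglary_given_alarm():
    """P(Burglary | Alarm) by summing out the hidden Earthquake variable."""
    joint = {}
    for b in (True, False):
        joint[b] = sum(p("B", b) * p("E", e) * P_a[(b, e)]
                       for e in (True, False))
    return joint[True] / (joint[True] + joint[False])

print(posterior_burglary_given_alarm())  # roughly 0.37
```

Even with the alarm ringing, the network assigns burglary only about a 37% probability, because false alarms dominate when burglaries are rare: exactly the kind of calibrated, evidence-weighted belief rule-based systems could not express.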
Deep Learning, Probabilistic Programming
Modern AI is fundamentally probabilistic. Neural networks learn probability distributions, reinforcement learning handles stochastic environments, and autonomous systems use probabilistic models for perception and decision-making.
AI has progressively moved from deterministic logic to probabilistic reasoning. This isn't because logic failed — it's because the real world is inherently uncertain, and intelligent systems must embrace that uncertainty.
Symbolic AI pursues truth.
Probabilistic AI pursues confidence.
Modern AI (Russell & Norvig's "rational agent" view) seeks to combine both: systems that reason logically when they can, and act probabilistically when they must.
Example: Software verification, planning with known models
Example: Robotics, vision, NLP, medical diagnosis
"Artificial Intelligence began by trying to replicate the human ability to know — to deduce truths and prove theorems.
But the world taught us that intelligence isn't about perfect knowledge.
It's about navigating the unknown.
The greatest intelligence — human or artificial — is the ability to act rationally even when certainty is impossible.
That is the science of uncertainty — and the wisdom of faith."
The debate between logic and probability isn't just philosophical — it reveals fundamental mathematical limits on what AI can achieve.
"The pursuit of artificial general intelligence (AGI) is haunted by Gödel's ghost. His incompleteness theorems suggest that absolute AGI — perfectly general, always correct, and fully explainable — is a logical impossibility."
Kurt Gödel revolutionized mathematical logic by proving that formal systems have fundamental limits:
Any consistent formal system powerful enough to express arithmetic cannot prove all true statements within itself.
In other words: If a system is consistent (no contradictions), it must be incomplete (some truths are unprovable).
A consistent formal system cannot prove its own consistency.
In other words: A system cannot vouch for its own reliability; any guarantee of its consistency must come from a stronger system outside it.
You cannot have both:
• A system that is complete (can prove every true statement it can express)
• AND consistent (never produces contradictions)
If we view AGI as a universal reasoning system, Gödel's theorems impose fundamental constraints:
| AGI Property | Gödel's Constraint | Implication |
|---|---|---|
| Completeness (can solve all intellectual problems) | Cannot be both complete and consistent | AGI striving for universal capability may encounter undecidable problems or contradictions |
| Soundness (never produces errors) | A complete system must be inconsistent | Perfect reliability across all domains is impossible |
| Self-verification (can prove its own correctness) | Cannot prove its own consistency | AGI cannot fully verify its own reliability |
An AGI cannot simultaneously be:
✓ Universally capable (complete)
✓ Always correct (sound/consistent)
✓ Self-verifiable (can prove its own correctness)
Approach: Rule-based expert systems
Examples: MYCIN, DENDRAL
Approach: Deep learning, probabilistic models
Examples: Neural networks, Bayesian systems
Modern AI traded interpretability for generalization. We gained the ability to handle uncertainty and learn from data, but lost the ability to formally verify correctness. This mirrors Gödel's trade-off: you can't have completeness AND consistency.
Deep neural networks are powerful but opaque. We cannot always understand why they make certain decisions:
| System Type | Verifiability | Generality | Interpretability |
|---|---|---|---|
| Formal Logic Systems | High | Low | High |
| Deep Neural Networks | Low | High | Low |
| Hybrid Systems | Medium | Medium | Medium |
Just as we cannot prove all truths within a formal system, we cannot fully verify or explain every behavior of a neural network. The "black box" nature of modern AI makes soundness and completeness even harder to establish than they are for formal systems.
While absolute AGI may be logically impossible, practical AGI remains feasible if we accept certain constraints:
• Bounded scope: AGI could excel in specific domains where completeness and soundness are manageable, rather than aiming for universal generality.
• Approximate rationality: Probabilistic judgments that are "good enough" may suffice, accepting trade-offs over guaranteed truths.
• Hybrid architectures: Combining interpretable symbolic reasoning with probabilistic models balances generality and reliability.
Gödel showed that absolute truth systems are impossible — no system can be both complete and consistent.
Similarly, absolute AGI is a logical impossibility — no AI can be universally capable, always correct, and fully explainable.
This is why modern AI embraces uncertainty: not as a compromise, but as the only rational path forward.
Probabilistic reasoning acknowledges the limits Gödel revealed — and builds intelligence within those constraints.
Consider these questions as you continue through this lecture:
Can we ever know the world completely? Or must intelligence, human or artificial, always live with uncertainty? What are the implications of each view?
In what sense does building an AI that reasons probabilistically reflect the human experience of faith? Is this analogy helpful or misleading?
Is belief in an uncertain world a weakness — or a deeper form of intelligence? Consider: Does embracing uncertainty make AI more or less human-like?
How does your own decision-making combine logic and uncertainty? Think of a recent decision you made without complete information. What role did "belief" play?