
Uncertainty as a New Paradigm

Embracing Uncertainty in Intelligent Systems

From Knowing to Believing: A New Way to Build Intelligence

The Paradigm Shift

We've seen why logic fails in real-world scenarios. Now we explore a fundamentally different approach to building intelligent systems — one that embraces uncertainty rather than avoiding it.

This isn't a compromise or workaround. It's a recognition that uncertainty is fundamental to intelligence itself.

The Conceptual Shift
  • World Model — Logic-based: single state (the agent knows exactly which state it's in). Uncertainty-based: belief state (the agent maintains a probability distribution over possible states).
  • Observability — Logic-based: fully observable (the agent sees everything perfectly). Uncertainty-based: partially observable (the agent has limited, noisy observations).
  • Decision Making — Logic-based: deterministic (if conditions are met, the action is guaranteed to work). Uncertainty-based: probabilistic (choose the action maximizing expected utility).
  • Reasoning — Logic-based: deduction (prove what must be true). Uncertainty-based: inference (estimate what's likely true).
Core Insight

Uncertainty-based AI doesn't try to eliminate uncertainty — it models and reasons with it. This makes AI systems more robust, flexible, and realistic.

Concept 1: Belief States

Instead of knowing "I am in state S", the agent believes "I might be in states S₁, S₂, or S₃"

What is a Belief State?

A belief state is a probability distribution over all possible world states. Instead of certainty about one state, the agent maintains degrees of belief about many possible states.

Example: Robot in a maze with noisy sensors might believe: "60% chance I'm at position A, 30% at position B, 10% at position C"

Interactive: Robot Localization with Uncertainty

A robot in a 4-room apartment receives noisy sensor data. Where is it?

Robot's Belief State (uniform prior): 🍳 Kitchen 25% · 🛏️ Bedroom 25% · 🚿 Bathroom 25% · 🛋️ Living Room 25%
Known: Sensor Likelihoods P(observation | room)

Sensor      🍳 Kitchen   🛏️ Bedroom   🚿 Bathroom   🛋️ Living
🌡️ Warm        70%          50%          30%          40%
💧 Water       60%          10%          90%          20%
🍽️ Food        80%          10%          10%          30%

These are the likelihoods used in Bayes' rule: P(room | obs) ∝ P(obs | room) × P(room)

Sensor observations (click to see Bayesian update):

Current Belief State: uniform distribution (25% per room); the robot has no idea where it is
Belief State Evolution Timeline
Step 0: No observations yet
Key Concept

Belief State = Probability Distribution: The agent doesn't claim certainty. It maintains beliefs and updates them as evidence arrives. Each observation shifts the probabilities using Bayes' rule. Watch the chart to see Bayesian learning in action!
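The update the interactive chart animates can be sketched in a few lines. The snippet below applies Bayes' rule once, using the 💧 Water likelihoods from the sensor table and the uniform prior; the room names and function name are just illustrative choices.

```python
def bayes_update(prior, likelihood):
    """Posterior ∝ likelihood × prior, renormalized so the beliefs sum to 1."""
    unnorm = [l * p for l, p in zip(likelihood, prior)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

rooms = ["Kitchen", "Bedroom", "Bathroom", "Living Room"]
prior = [0.25, 0.25, 0.25, 0.25]      # uniform prior: no idea where we are
p_water = [0.60, 0.10, 0.90, 0.20]    # P(💧 Water | room), from the table

posterior = bayes_update(prior, p_water)
for room, p in zip(rooms, posterior):
    print(f"{room}: {p:.1%}")
```

With these numbers, observing water shifts the belief to Bathroom 50.0%, Kitchen 33.3%, Living Room 11.1%, Bedroom 5.6% — one observation and the robot already has a strong (but not certain) hypothesis.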

Concept 2: Quantifying Uncertainty - Confidence Intervals

More data = Less uncertainty: Watch how confidence intervals narrow as evidence accumulates

What is a Confidence Interval?

A confidence interval represents uncertainty as a range: "We're 95% confident the true value is between X and Y." As we collect more data, this interval narrows — uncertainty decreases!

Interactive: Estimating Average Temperature

Thermometer has ±2°C error. Take readings to see confidence interval narrow.

Interactive display: readings taken, estimated temperature, and uncertainty (±) update as you sample.

Key Concept - Law of Large Numbers

More Data → Less Uncertainty: With 1 reading, the 95% interval is ±3.9°C (±1.96σ with σ = 2°C). With 10 readings it narrows to ±1.2°C. The uncertainty shrinks in proportion to 1/√n, where n is the number of measurements. This is why probability-based AI improves with experience!
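The narrowing interval is easy to verify in simulation. This sketch assumes the sensor noise is Gaussian with σ = 2°C and a (made-up) true temperature of 22°C, and uses the known-σ 95% interval half-width 1.96·σ/√n:

```python
import math
import random

random.seed(0)
TRUE_TEMP, SIGMA = 22.0, 2.0   # assumed true temperature; ±2°C sensor noise

def confidence_interval(n):
    """Average n noisy readings; return (estimate, 95% half-width 1.96·σ/√n)."""
    readings = [random.gauss(TRUE_TEMP, SIGMA) for _ in range(n)]
    mean = sum(readings) / n
    half_width = 1.96 * SIGMA / math.sqrt(n)
    return mean, half_width

for n in (1, 10, 100):
    mean, hw = confidence_interval(n)
    print(f"n={n:>3}: estimate {mean:.2f}°C ± {hw:.2f}°C")
```

The half-widths come out as ±3.92°C, ±1.24°C, and ±0.39°C — the ±3.9 and ±1.2 figures quoted above, each tenfold increase in data shrinking the interval by √10.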

Partial Observability

Real agents cannot see everything: they have limited, noisy sensors

What is Partial Observability?

The agent cannot directly observe the true state of the world. It only receives partial, noisy observations that provide clues about the true state. This is the reality for all real-world AI systems: robots, self-driving cars, medical diagnosis systems, etc.

Interactive: The Monty Hall Problem (Classic Probability Puzzle)

Behind one door is a prize 🏆. Behind the others are goats 🐐. You pick a door, then the host reveals a goat...

Doors: 1 · 2 · 3
Click a door to start! You'll see how partial information (the host revealing a goat) changes your beliefs.
Initial Belief (before host reveals):
P(Prize at Door 1) = 33.3%
P(Prize at Door 2) = 33.3%
P(Prize at Door 3) = 33.3%
Updated Belief (after the host reveals a goat; say you picked Door 1 and the host opened Door 3):
P(Prize at Door 1) = 33.3% (staying)
P(Prize at Door 2) = 66.7% (switching)
Key Concept

Partial Observability requires Belief States: You can't see what's behind the doors (partial observability), so you maintain beliefs. When the host reveals information, you update your beliefs using Bayesian inference. This is how real AI systems work with limited sensors!
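If the 1/3-vs-2/3 split feels counterintuitive, a Monte Carlo check settles it. This is a minimal sketch of the game: the host always opens a goat door that isn't your pick, and we compare the stay and switch strategies over many trials.

```python
import random

random.seed(42)

def monty_hall_trial(switch):
    """Play one game; return True if the final pick wins the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the prize
    host = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining unopened door
        pick = next(d for d in doors if d != pick and d != host)
    return pick == prize

N = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(N)) / N
switch = sum(monty_hall_trial(switch=True) for _ in range(N)) / N
print(f"Stay wins:   {stay:.1%}")    # ≈ 33.3%
print(f"Switch wins: {switch:.1%}")  # ≈ 66.7%
```

The host's reveal is exactly the kind of partial observation the section describes: it doesn't show you where the prize is, but it still carries information, and a Bayesian agent exploits it by switching.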

Concept 3: Rational Decision-Making Under Ignorance

How do you make optimal decisions when you don't know the full situation?

What is Rational Decision-Making Under Ignorance?

When you can't know the true state with certainty, you choose actions based on expected outcomes weighted by their probabilities. This is the principle of maximum expected utility.

Interactive: Should You Bring an Umbrella?

You're leaving for work. Should you carry an umbrella? You don't know if it will rain...

Uncertainty: Will it rain today?
P(rain) = 30%
Option 1: Bring Umbrella
Option 2: No Umbrella
Key Concept

Expected Utility = Σ P(outcome) × Utility(outcome): You can't know if it will rain, but you can compute the expected value of each action by weighing outcomes by their probabilities. This is rational decision-making under uncertainty!
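The expected-utility formula can be made concrete for the umbrella decision. Only P(rain) = 30% comes from the lecture; the utility numbers below are hypothetical, chosen just to illustrate the computation.

```python
P_RAIN = 0.30  # from the lecture

# Hypothetical utilities (NOT from the lecture): how good each outcome feels
utilities = {
    ("umbrella", "rain"):        80,   # dry, slightly encumbered
    ("umbrella", "no rain"):     70,   # carried it for nothing
    ("no umbrella", "rain"):      0,   # soaked
    ("no umbrella", "no rain"): 100,   # unencumbered and dry
}

def expected_utility(action):
    """EU(a) = Σ_outcome P(outcome) × Utility(a, outcome)."""
    return (P_RAIN * utilities[(action, "rain")]
            + (1 - P_RAIN) * utilities[(action, "no rain")])

for action in ("umbrella", "no umbrella"):
    print(f"EU({action}) = {expected_utility(action):.1f}")
```

Under these assumed utilities, EU(umbrella) = 73 beats EU(no umbrella) = 70, so the rational choice is to bring the umbrella even though rain is unlikely. With different utilities (or a lower P(rain)) the decision can flip, which is exactly the point: the action depends on probabilities and payoffs together.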

Concept 4: Connection to Real (Human) Intelligence

Humans naturally reason under uncertainty — AI should too

How Humans Use Probabilistic Reasoning Daily
Driving

Uncertainty: Will that car stop at the red light?
Belief: "Probably yes, but not certain"
Action: Slow down, be ready to brake

Probabilistic Reasoning
Diagnosis

Uncertainty: What disease causes these symptoms?
Belief: "Flu is most likely, but could be COVID"
Action: Test and treat most likely cause first

Probabilistic Reasoning
Communication

Uncertainty: What did they mean by "maybe"?
Belief: "Probably no, but keeping options open"
Action: Follow up, don't assume commitment

Probabilistic Reasoning
Shopping

Uncertainty: Will this product be good quality?
Belief: "4.5 stars, 80% chance it's good"
Action: Buy if expected value exceeds cost

Probabilistic Reasoning
The Insight

Humans don't wait for perfect information before acting. We constantly make decisions under uncertainty, weighing probabilities and outcomes. Probabilistic AI mirrors this natural intelligence.

Why This Matters for AI
❌ Logic-Based AI
  • Waits for certainty
  • Fails when information incomplete
  • Doesn't match human reasoning
  • Brittle in real world
✅ Probability-Based AI
  • Acts optimally under uncertainty
  • Handles incomplete information
  • Mirrors human cognition
  • Robust in real world

Key Takeaways

The Uncertainty Paradigm
  1. Belief States: Probability distributions over states, not single states
  2. Partial Observability: Accept limited, noisy observations
  3. Rational Action: Choose actions maximizing expected utility
  4. Bayesian Update: Revise beliefs with new evidence
Why This is Intelligence
  • Mirrors how humans actually think
  • Robust in uncertain, complex environments
  • Learns and improves from experience
  • Makes best decisions with available information
  • Uncertainty is not a bug — it's a feature
The Fundamental Lesson

"The greatest intelligence — human or artificial — is the ability to act rationally even when certainty is impossible. Probability theory provides the mathematical foundation for this kind of intelligence."

Next: Now that you understand the paradigm shift, let's learn the mathematics of probability! Continue to Topic 4 →