V = 0 + 0.9×20 = 18
0.8×(0 + 0.9×20) + 0.2×(mixed) ≈ 14.4
0.6×(0 + 0.9×20) + 0.4×(mixed) ≈ 10.8 (risky!)
Why We Need Rational Decision-Making Under Uncertainty
Classical AI planning (Lectures 4-5) assumes:
- Deterministic actions: every action has exactly one, known outcome
- Full observability: the agent always knows the current state
- A static, fully known environment
Consider a robot trying to navigate from Point A to Point B:
The Problem: Deterministic planning says "move forward 10 times to reach goal." But with 90% success per move, the probability of 10 perfect moves is 0.9¹⁰ ≈ 35%. The plan fails 65% of the time!
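The compounding-failure arithmetic above is easy to verify directly (plain Python; the 0.9 per-move success rate and the 10 moves come from the text):

```python
# Probability that an entire 10-step plan succeeds when each
# move independently succeeds with probability 0.9.
p_move = 0.9
n_moves = 10

p_plan_succeeds = p_move ** n_moves   # independent moves multiply
p_plan_fails = 1 - p_plan_succeeds

print(f"P(all {n_moves} moves succeed) = {p_plan_succeeds:.3f}")  # ~0.349
print(f"P(plan fails)              = {p_plan_fails:.3f}")         # ~0.651
```

Each extra step multiplies in another 0.9, so even a highly reliable primitive action yields an unreliable long plan.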
In the real world, uncertainty is ubiquitous. Actions have probabilistic outcomes, observations are noisy, and the environment is unpredictable. We need a framework that embraces uncertainty rather than ignoring it.
Bayesian Networks (Lecture 11) taught us how to:
- Represent uncertain knowledge as a compact joint distribution
- Update beliefs from evidence via probabilistic inference
Inference
Question: What do I believe?
Example: Compute P(Disease | symptoms)

Decision-making
Question: What should I do?
Example: Choose surgery, medication, or wait
Inference tells us beliefs. Decision theory tells us actions.
Decision Theory = Probability Theory (beliefs) + Utility Theory (preferences) + Action Selection
Medical treatment
Decision: Surgery vs. Medication vs. Wait
Uncertainty: Is the diagnosis correct? Will the treatment work? What side effects may occur?
Preferences: Health outcome, cost, risk tolerance, quality of life
Robot navigation
Decision: Safe path vs. Fast path vs. Explore
Uncertainty: Will moves succeed? Are sensor readings accurate? Where are the obstacles?
Preferences: Time to goal, energy consumption, collision risk, mission success
Business strategy
Decision: Launch Product vs. More R&D vs. Pivot
Uncertainty: How will the market respond? What will competitors do? Will development pay off?
Preferences: Profit, market share, long-term growth, risk exposure
Autonomous driving
Decision: Lane change vs. Brake vs. Maintain speed
Uncertainty: What will other drivers do? Are sensor readings reliable? What are the road conditions?
Preferences: Safety (maximize), travel time, passenger comfort, legality
All these scenarios share three elements: (1) uncertain outcomes, (2) multiple action choices, and (3) trade-offs between competing preferences. Decision theory provides a unified framework for all of them.
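As a unified sketch, the medical scenario above can be run through the same expected-utility machinery. Only the three action names come from the text; every probability and utility below is invented for illustration:

```python
# Hypothetical outcome model: P(outcome | action), one row per action.
model = {
    "surgery":    {"cured": 0.80, "complication": 0.15, "no_change": 0.05},
    "medication": {"cured": 0.50, "complication": 0.05, "no_change": 0.45},
    "wait":       {"cured": 0.10, "complication": 0.00, "no_change": 0.90},
}

# Hypothetical utilities encoding health outcome, cost, and risk tolerance.
utility = {"cured": 100.0, "complication": -200.0, "no_change": 0.0}

def eu(action):
    """Expected utility of an action under the model above."""
    return sum(p * utility[o] for o, p in model[action].items())

# Rank the three actions by expected utility, best first.
ranked = sorted(model, key=eu, reverse=True)
for a in ranked:
    print(f"{a:10s} EU = {eu(a):7.2f}")
```

Changing the utility of "complication" (i.e., the patient's risk tolerance) can flip the ranking, which is exactly the trade-off element the scenarios share.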