Lecture 12: Decision Theory & Rational Action Under Uncertainty

From Probability to Optimal Action Selection

Combining belief with preference to make rational choices

Building on Lecture 11

In Lecture 11, we learned how Bayesian Networks represent uncertainty and compute beliefs. We can answer questions like: "What's the probability of disease given these symptoms?"

But knowing probabilities isn't enough for an intelligent agent. We need to act. Decision Theory combines probability with utility (preferences) to answer: "What should I do to maximize expected benefit?"

Lecture Overview

Decision theory is the foundation of rational action under uncertainty. It provides a principled framework for choosing actions when outcomes are uncertain and preferences matter.

Key Concepts: Utility Functions, Expected Utility, Decision Networks, Value of Information, MEU Principle
Applications: Medical Decisions, Robot Planning, Autonomous Vehicles, Game Playing, Resource Allocation

The Central Question

An intelligent agent faces a fundamental challenge:

How should I choose actions when I'm uncertain about outcomes?
Naive Approaches:
  • Choose action with best possible outcome (too optimistic)
  • Choose action with best worst-case outcome (too pessimistic)
  • Maximize expected monetary value (ignores risk attitudes and strength of preference)
  • Random choice (not rational)
Decision Theory Solution:
  • Represent preferences with utility functions
  • Compute expected utility for each action
  • Choose action that maximizes expected utility (MEU)
  • Rational, principled, provably optimal under the von Neumann–Morgenstern axioms
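The contrast between the naive criteria and MEU can be made concrete on a toy payoff table. The scenario and all numbers below are illustrative (not from the lecture): two actions, two weather states, and a hypothetical probability for each state.

```python
# Comparing maximax (best possible outcome), maximin (best worst case),
# and maximum expected utility on a small, made-up payoff table.

probs = [0.7, 0.3]                # P(state) for [good, bad] weather (hypothetical)
payoffs = {                       # utility of each (action, state) pair
    "picnic": [100, -50],
    "indoor": [40, 40],
}

def maximax(payoffs):
    # Too optimistic: rank actions by their best-case payoff only.
    return max(payoffs, key=lambda a: max(payoffs[a]))

def maximin(payoffs):
    # Too pessimistic: rank actions by their worst-case payoff only.
    return max(payoffs, key=lambda a: min(payoffs[a]))

def meu(payoffs, probs):
    # MEU: weight each outcome by its probability.
    eu = {a: sum(p * u for p, u in zip(probs, us)) for a, us in payoffs.items()}
    return max(eu, key=eu.get), eu

best, eu = meu(payoffs, probs)
print(maximax(payoffs))   # picnic (100 beats 40)
print(maximin(payoffs))   # indoor (40 beats -50)
print(best, eu)           # picnic, EU 55.0 vs 40.0
```

The three criteria can disagree even on this two-by-two table, which is exactly why MEU weights outcomes by belief rather than looking only at extremes.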

Main Topics

Interactive Demonstrations

Explore decision theory through interactive visualizations and calculators.

Practical Exercises

Master decision theory through hands-on problem-solving.

Utility Theory Problems

Elicit utility functions, calculate risk premiums, analyze preferences.
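A risk-premium calculation of the kind these exercises ask for can be sketched in a few lines. The utility function U(x) = √x and the 50/50 lottery are assumptions chosen for illustration, not values from the problem set.

```python
import math

# Certainty equivalent and risk premium for a risk-averse agent with the
# (assumed) concave utility U(x) = sqrt(x), on a hypothetical 50/50 lottery.

def expected(values, probs):
    return sum(p * v for p, v in zip(probs, values))

U = math.sqrt
U_inv = lambda u: u ** 2          # inverse of sqrt on [0, inf)

outcomes = [100.0, 0.0]           # win $100 or nothing
probs = [0.5, 0.5]

emv = expected(outcomes, probs)                   # expected monetary value: 50
eu = expected([U(x) for x in outcomes], probs)    # expected utility: 5
ce = U_inv(eu)                                    # certainty equivalent: 25
risk_premium = emv - ce                           # 25: the agent would give up
print(emv, eu, ce, risk_premium)                  # this much to avoid the risk
```

Because U is concave, the certainty equivalent falls below the expected monetary value, and the gap between the two is the risk premium.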

Decision Tree Problems

Build decision trees, compute expected utilities, find optimal policies.
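The standard technique for these problems is backward induction ("rolling back" the tree): average over chance nodes, maximize over decision nodes. A minimal sketch, with a made-up oil-drilling tree as the example:

```python
# Rolling back a decision tree by backward induction. A node is either a
# leaf utility (float), ("chance", [(prob, child), ...]), or
# ("decision", {action: child, ...}). Structure and numbers are illustrative.

def rollback(node):
    if isinstance(node, (int, float)):
        return float(node), None                  # leaf: utility, no action
    kind, body = node
    if kind == "chance":
        # Average child values, weighted by probability.
        return sum(p * rollback(child)[0] for p, child in body), None
    if kind == "decision":
        # Pick the action whose subtree has the highest rolled-back value.
        values = {a: rollback(child)[0] for a, child in body.items()}
        best = max(values, key=values.get)
        return values[best], best
    raise ValueError(f"unknown node kind: {kind}")

tree = ("decision", {
    "drill":      ("chance", [(0.3, 600.0), (0.7, -100.0)]),
    "dont_drill": 0.0,
})
value, action = rollback(tree)
print(value, action)   # 110.0 drill  (0.3*600 + 0.7*(-100) = 110 > 0)
```

The same recursion handles arbitrarily nested trees, since each subtree is itself a valid node.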

Value of Information Problems

Calculate VPI, determine when to gather information, cost-benefit analysis.

Real-World Decision Problems

Apply decision theory to medicine, robotics, business, and ethical dilemmas.

Key Concepts Summary

Decision Theory Framework:
  • States (S): possible world configurations
  • Actions (A): agent's choices
  • Outcomes (O): consequences of actions
  • Probabilities (P): uncertainty quantification
  • Utilities (U): preference representation
MEU Principle:
  • EU(a) = Σ_s P(s|e) × U(Result(a, s))
  • Choose action a* = argmax EU(a)
  • Rational, consistent, optimal
  • Foundation for MDPs and RL
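The MEU formula above translates directly into code. In this sketch, the posterior P(s|e), the outcome function Result(a, s), and all utilities are hypothetical values standing in for what a Bayesian network and a domain model would supply:

```python
# EU(a) = sum over states s of P(s|e) * U(Result(a, s)), then argmax over a.
# All states, actions, and numbers below are made up for illustration.

P = {"sick": 0.2, "healthy": 0.8}    # P(s | e): posterior from the Bayesian network

def result(action, state):
    # Deterministic Result(a, s): the outcome of taking `action` in `state`.
    if action == "treat":
        return "cured" if state == "sick" else "side_effects"
    return "ill" if state == "sick" else "fine"

U = {"cured": 90, "side_effects": 60, "ill": 10, "fine": 100}

def expected_utility(action):
    return sum(p * U[result(action, s)] for s, p in P.items())

def meu_action(actions):
    # a* = argmax_a EU(a)
    return max(actions, key=expected_utility)

print({a: expected_utility(a) for a in ["treat", "wait"]})
print(meu_action(["treat", "wait"]))
```

With these particular numbers the low prior of disease makes waiting the MEU action; raising P(sick) eventually flips the decision, which is the kind of sensitivity analysis the exercises explore.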
Decision Networks:
  • Extend Bayesian Networks
  • Chance, decision, utility nodes
  • Sequential decision modeling
  • Efficient inference algorithms
Value of Information:
  • VPI(E) = EU(with E) - EU(without E)
  • Always non-negative
  • Guides information gathering
  • Trade-off: cost vs. value
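The VPI definition above can be computed for perfect information by comparing acting on the prior against learning the true state first and then acting optimally. The drilling scenario and its numbers are hypothetical:

```python
# Value of perfect information about the state, from the definition:
# VPI = E_s[ max_a EU(a | s known) ] - max_a EU(a | prior). Numbers are made up.

prior = {"oil": 0.3, "dry": 0.7}
U = {("drill", "oil"): 600.0, ("drill", "dry"): -100.0,
     ("pass", "oil"): 0.0,    ("pass", "dry"): 0.0}

def eu(action, belief):
    return sum(p * U[(action, s)] for s, p in belief.items())

def best_eu(belief):
    return max(eu(a, belief) for a in ["drill", "pass"])

# Without information: pick the best action under the prior.
eu_without = best_eu(prior)          # max(0.3*600 + 0.7*(-100), 0) = 110

# With perfect information: observe the true state, then act optimally,
# averaged over what the observation might turn out to be.
eu_with = sum(p * best_eu({s: 1.0}) for s, p in prior.items())
# = 0.3 * 600 + 0.7 * 0 = 180

vpi = eu_with - eu_without           # 70: pay for the survey only if it costs less
print(eu_without, eu_with, vpi)
```

Note that VPI comes out non-negative by construction: knowing the state can only let the agent match or improve on its uninformed choice.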
SE444: Artificial Intelligence | Lecture 12: Decision Theory & Rational Action Under Uncertainty