
Bayesian Inference: Complete Problems

Master complete Bayesian inference with step-by-step updates, theoretical understanding, and Python implementation

🏥 Problem 1: Complete Medical Diagnosis

Comprehensive Cancer Screening

A 55-year-old patient visits their primary care physician concerned about cancer symptoms. They undergo a comprehensive screening that includes multiple tests and factors.

Given Information:
• Patient has family history of cancer (mother diagnosed at age 60)
• Shows mild symptoms: fatigue and weight loss
• Lives in area with 1.2% cancer prevalence
• First screening test shows suspicious results
• Follow-up biopsy is recommended

Base Rate Information

General Population: 1.2% cancer rate

Family History: 3× increased risk

Patient's Prior: 3.6% (1.2% × 3)

Screening Test Characteristics
Test Type         | Sensitivity | Specificity
Initial Screening | 92%         | 87%
Follow-up Biopsy  | 95%         | 93%
Complete Bayesian Analysis
Step 1: Establish Prior Belief

Calculate initial probability of cancer before any testing:

P(Cancer | Family History) = General Rate × Risk Factor
P(Cancer) = 0.012 × 3 = 0.036 (3.6%)
Interpretation: Patient's prior risk is elevated due to family history, starting at 3.6% instead of the general population's 1.2%.
Step 2: Update with Initial Screening Test

Patient tests POSITIVE on initial screening. Update beliefs, using the test's false-positive rate P(+|¬Cancer) = 1 − specificity = 1 − 0.87 = 0.13:

P(+) = P(+|Cancer) × P(Cancer) + P(+|¬Cancer) × P(¬Cancer)
P(+) = 0.92 × 0.036 + 0.13 × 0.964 = 0.03312 + 0.12532 = 0.15844
P(Cancer | +) = (0.92 × 0.036) / 0.15844 = 0.03312 / 0.15844 = 0.209 (20.9%)
Step 3: Update with Follow-up Biopsy

Biopsy also comes back POSITIVE. The post-screening probability (20.9%) becomes the new prior, and the biopsy contributes only its own likelihoods: sensitivity 0.95 and false-positive rate 1 − 0.93 = 0.07:

P(+2) = 0.95 × 0.209 + 0.07 × 0.791 = 0.19855 + 0.05537 = 0.25392
P(Cancer | +,+) = (0.95 × 0.209) / 0.25392 = 0.19855 / 0.25392 = 0.782 (78.2%)

Check: starting over from the 3.6% prior with the joint likelihoods P(+,+ | Cancer) = 0.92 × 0.95 = 0.874 and P(+,+ | ¬Cancer) = 0.13 × 0.07 = 0.0091 gives the same 78.2%. (Combining the joint likelihood 0.874 with the already-updated 20.9% would count the screening test twice.)
Final Diagnosis Probability
78.2%

Recommendation: Cancer is now far more likely than not. Proceed with confirmatory work-up and treatment planning.
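The whole two-test chain can be checked with a few lines of Python (a sketch; the `update` helper is an illustration of a single positive-test update, not a library function):

```python
# Sequential Bayesian updates for the two positive cancer tests.
def update(prior, sensitivity, false_positive_rate):
    """Posterior P(Cancer | +) after one POSITIVE test result."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

prior = 0.012 * 3                                    # 3.6% after family history
after_screen = update(prior, 0.92, 1 - 0.87)         # initial screening positive
after_biopsy = update(after_screen, 0.95, 1 - 0.93)  # biopsy positive
print(f"{after_screen:.3f}, {after_biopsy:.3f}")     # 0.209, 0.782
```

Note that each call uses only that test's own likelihoods; the earlier evidence is already folded into the prior it receives.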


Exercises

Exercise 1.1: Understanding Bayesian Updates

Question: Explain why the probability increased from 3.6% (prior) to 20.9% (after the first test) and then to 78.2% (after the biopsy). What role does each piece of evidence play?

Exercise 1.2: Alternative Outcome

Scenario: Suppose the initial screening test came back NEGATIVE. What would the patient's cancer probability be after that single negative test?

🔗 Problem 2: Multi-step Sequential Inference

Autonomous Vehicle Navigation

An autonomous vehicle is navigating through a city. It starts with uncertain beliefs about road conditions and receives multiple pieces of evidence sequentially. Each observation updates its beliefs about whether the road ahead is clear or blocked.

Initial Belief: 60% chance road is clear (based on historical data)
Evidence arrives sequentially: GPS, LIDAR, Camera, Traffic sensors

Initial State

P(Clear) = 0.60

P(Blocked) = 0.40

Sensor Reliabilities
Sensor  | P(Correct | Clear) | P(Correct | Blocked)
GPS     | 95%                | 85%
LIDAR   | 98%                | 92%
Camera  | 88%                | 94%
Traffic | 90%                | 96%
Sequential Evidence Updates
Step 1: GPS Reading: Road appears clear
P(GPS Clear | Clear) = 0.95, P(GPS Clear | Blocked) = 0.15
P(GPS Clear) = 0.95 × 0.60 + 0.15 × 0.40 = 0.57 + 0.06 = 0.63
P(Clear | GPS Clear) = (0.95 × 0.60) / 0.63 = 0.57 / 0.63 = 0.905 (90.5%)
Step 2: LIDAR Reading: Road appears clear
P(LIDAR Clear | Clear) = 0.98, P(LIDAR Clear | Blocked) = 0.08
P(LIDAR Clear) = 0.98 × 0.905 + 0.08 × 0.095 = 0.8869 + 0.0076 = 0.8945
P(Clear | LIDAR Clear) = (0.98 × 0.905) / 0.8945 = 0.8869 / 0.8945 = 0.991 (99.1%)
Step 3: Camera Reading: Road appears BLOCKED
P(Camera Blocked | Clear) = 0.12, P(Camera Blocked | Blocked) = 0.94
P(Camera Blocked) = 0.12 × 0.991 + 0.94 × 0.009 = 0.1189 + 0.0085 = 0.1274
P(Clear | Camera Blocked) = (0.12 × 0.991) / 0.1274 = 0.1189 / 0.1274 = 0.933 (93.3%)
Surprise Evidence: Camera suggests blockage, reducing confidence from 99.1% to 93.3%. This is conflicting evidence that the vehicle must reconcile.
Step 4: Traffic Sensor: Road appears clear
P(Traffic Clear | Clear) = 0.90, P(Traffic Clear | Blocked) = 0.04
P(Traffic Clear) = 0.90 × 0.933 + 0.04 × 0.067 = 0.8397 + 0.0027 = 0.8424
P(Clear | Traffic Clear) = (0.90 × 0.933) / 0.8424 = 0.8397 / 0.8424 = 0.997 (99.7%)
Final Decision
99.7%

Vehicle Decision: Road is clear. Proceed with normal speed.
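The four-step fusion can be reproduced with a short loop (a sketch; each likelihood pair is read off the sensor table for the reading observed at that step):

```python
# Sequential sensor fusion for the road-clear hypothesis.
# Each entry: (P(observation | Clear), P(observation | Blocked)).
observations = [
    (0.95, 0.15),  # GPS reads "clear"
    (0.98, 0.08),  # LIDAR reads "clear"
    (0.12, 0.94),  # Camera reads "blocked" (1 - 0.88 = 0.12 under Clear)
    (0.90, 0.04),  # Traffic sensor reads "clear"
]

p_clear = 0.60  # prior from historical data
trajectory = []
for like_clear, like_blocked in observations:
    evidence = like_clear * p_clear + like_blocked * (1 - p_clear)
    p_clear = like_clear * p_clear / evidence
    trajectory.append(round(p_clear, 3))
print(trajectory)  # [0.905, 0.991, 0.937, 0.997]
```

Run at full precision the trajectory is 0.905 → 0.991 → 0.937 → 0.997; the 93.3% shown at the camera step in the hand calculation comes from rounding the intermediate belief before updating.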


Exercises

Exercise 2.1: Sequential Evidence Analysis

Question: Analyze how the vehicle's belief changed through the four sensor readings. Why did the camera reading cause a decrease in confidence, while the other sensors increased it?

Exercise 2.2: Different Evidence Order

Scenario: Suppose the camera reading (blocked) came first, followed by GPS (clear), LIDAR (clear), and traffic (clear). What would the final probability be?

Hint: Start with prior P(Clear) = 0.60, then apply updates in the sequence: Camera → GPS → LIDAR → Traffic

🔍 Problem 3: Explaining Away Scenario

Burglar Alarm Network

Consider a classic Bayesian network: a burglar alarm that can be triggered by either a burglary or an earthquake. Your neighbor calls to say they heard your alarm. There are two candidate explanations, burglary and earthquake, and learning that an earthquake occurred "explains away" the alarm, making burglary much less likely.

Network Structure: Earthquake → Alarm ← Burglary

Prior Probabilities

P(Burglary) = 0.01

P(Earthquake) = 0.02

Conditional Probabilities

P(Alarm | B, E)   = 0.95
P(Alarm | B, ¬E)  = 0.94
P(Alarm | ¬B, E)  = 0.29
P(Alarm | ¬B, ¬E) = 0.001
Network Structure

Earthquake → Alarm ← Burglary

Alarm has two possible causes. Knowing one cause can explain away the likelihood of the other.

Explaining Away Analysis
Step 1: Alarm Sounds (No Additional Information)

You hear the alarm. What's the probability of burglary?

P(Burglary | Alarm) = [P(Alarm | Burglary) × P(Burglary)] / P(Alarm)
P(Alarm) = 0.95 × 0.01 × 0.02 + 0.94 × 0.01 × 0.98 + 0.29 × 0.99 × 0.02 + 0.001 × 0.99 × 0.98 ≈ 0.00019 + 0.00921 + 0.00574 + 0.00097 = 0.01611
P(Alarm | Burglary) = 0.95 × 0.02 + 0.94 × 0.98 = 0.9402
P(Burglary | Alarm) = (0.9402 × 0.01) / 0.01611 ≈ 0.583 (58.3%)
Step 2: Radio Reports Earthquake (Explaining Away)

Now you learn there was an earthquake. How does this affect P(Burglary|Alarm)?

P(Burglary | Alarm, Earthquake) = [P(Alarm | Burglary, Earthquake) × P(Burglary | Earthquake)] / P(Alarm | Earthquake)
P(Alarm | Earthquake) = P(Alarm | B, E) × P(B) + P(Alarm | ¬B, E) × P(¬B) = 0.95 × 0.01 + 0.29 × 0.99
P(Alarm | Earthquake) = 0.0095 + 0.2871 = 0.2966
P(Burglary | Alarm, Earthquake) = (0.95 × 0.01) / 0.2966 ≈ 0.0095 / 0.2966 = 0.032 (3.2%)
Explaining Away Effect: The probability dropped from about 58% to 3.2%! The earthquake explains the alarm, making burglary much less likely.
Explaining Away Effect
3.2%

Burglary probability drops from about 58% to 3.2% once the earthquake is known: an alternative explanation reduces belief in the competing hypothesis.
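Both queries can be verified by enumerating the network's full joint distribution (a sketch; it assumes Burglary and Earthquake are independent a priori, which the network structure Earthquake → Alarm ← Burglary implies):

```python
from itertools import product

P_B, P_E = 0.01, 0.02
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a):
    """P(B=b, E=e, A=a) under the network factorisation."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    return pb * pe * pa

# P(Burglary | Alarm): marginalise over Earthquake.
p_alarm = sum(joint(b, e, True) for b, e in product([True, False], repeat=2))
p_b_given_a = sum(joint(True, e, True) for e in [True, False]) / p_alarm
print(f"P(B | A)    = {p_b_given_a:.3f}")   # ≈ 0.583

# P(Burglary | Alarm, Earthquake): the explaining-away query.
p_alarm_e = joint(True, True, True) + joint(False, True, True)
p_b_given_ae = joint(True, True, True) / p_alarm_e
print(f"P(B | A, E) = {p_b_given_ae:.3f}")  # ≈ 0.032
```

Enumeration scales poorly with network size, but for three binary variables it is the most direct way to check a hand calculation.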


Exercises

Exercise 3.1: Explaining Away Concept

Question: Explain the "explaining away" effect. Why does learning about the earthquake reduce the probability of burglary, even though we still heard the alarm?

Exercise 3.2: Reverse Explaining Away

Scenario: You hear the alarm, but then learn there was NO earthquake. How does this affect P(Burglary|Alarm,¬Earthquake)?

Calculate: Show that burglary becomes much more likely when earthquake is ruled out.

🌍 Problem 4: Real-World Application (Student Choice)

Choose Your Own Application

Select a real-world scenario where Bayesian inference would be useful. Design a complete problem with prior beliefs, evidence, and sequential updates. This allows you to apply Bayesian reasoning to a domain you're interested in or familiar with.

Suggested Domains
  • Medical diagnosis
  • Sports predictions
  • Financial markets
  • Quality control
  • Risk assessment
  • Customer behavior
  • Environmental monitoring
  • Traffic prediction
Requirements
  • Clear scenario description
  • Prior probabilities
  • Evidence with likelihoods
  • Sequential updates
  • Final decision/conclusion
  • Real-world implications
Your Bayesian Inference Project
Design Your Problem

Step 1: Choose a Domain and Scenario

Describe your chosen application and why Bayesian inference is appropriate for it.

Step 2: Define Your Model

Specify priors, evidence types, and likelihoods for your scenario.

Step 3: Work Through an Example

Provide concrete numbers and calculate the Bayesian updates.

🐍 Problem 5: Python Implementation Exercise

Implementing Bayesian Inference in Python

Write Python functions to perform Bayesian inference. You'll implement the core Bayes' theorem calculation and create a class for sequential belief updates. This will help you understand how Bayesian inference works computationally.

Theoretical Questions

Question 5.1: Bayes' Theorem Implementation

Question: Explain the components of Bayes' theorem and how they map to function parameters.

Coding Exercise

Implement Bayesian Inference Functions
Task 1: Basic Bayes' Theorem Function

Write a function that computes posterior probability using Bayes' theorem.

Function Specification:
def bayes_theorem(prior, likelihood, evidence_probability):
    """Calculate posterior probability using Bayes' theorem

    Args:
        prior: P(H) - prior probability of hypothesis
        likelihood: P(E|H) - likelihood of evidence given hypothesis
        evidence_probability: P(E) - total probability of evidence

    Returns:
        posterior: P(H|E) - posterior probability
    """
    # Your code here
    return posterior
Example Usage:
prior = 0.01  # P(Disease)
likelihood = 0.95  # P(Positive|Disease)
evidence_prob = 0.06  # P(Positive)
posterior = bayes_theorem(prior, likelihood, evidence_prob)
print(f"Posterior probability: {posterior:.3f}")
Task 2: Sequential Belief Updates Class
Implement a BeliefUpdater Class

Create a class that maintains current beliefs and updates them with new evidence.

Class Specification:
class BeliefUpdater:
    """Class for sequential Bayesian belief updates"""

    def __init__(self, initial_belief):
        """Initialize with prior belief P(H)"""
        self.current_belief = initial_belief

    def update_belief(self, likelihood, evidence_probability):
        """Update belief with new evidence

        Args:
            likelihood: P(E|H) - likelihood of new evidence given hypothesis
            evidence_probability: P(E) - total probability of evidence

        Returns:
            new_belief: Updated P(H|E)
        """
        # Your code here
        pass

    def get_current_belief(self):
        """Return current belief probability"""
        # Your code here
        pass
Example Usage:
updater = BeliefUpdater(0.01)  # Start with 1% belief
updater.update_belief(0.95, 0.06)  # First test
print(f"After first test: {updater.get_current_belief():.3f}")
updater.update_belief(0.90, 0.15)  # Second test
print(f"After second test: {updater.get_current_belief():.3f}")
Test Your Implementation

Challenge: Use your functions to solve the cancer screening problem from Problem 1.

Verify that you get the same results: 3.6% → 20.9% → 78.2%

# Write code to test your functions with the cancer screening example from Problem 1
# Use your bayes_theorem function and BeliefUpdater class
# Verify you get: 3.6% → 20.9% → 78.2%

# Test the basic function:
# prior = 0.036  # 3.6%
# likelihood = 0.92  # First test sensitivity
# evidence_prob = 0.15844  # P(+) = 0.92 * 0.036 + 0.13 * 0.964
# posterior1 = bayes_theorem(prior, likelihood, evidence_prob)

# Test the sequential class:
# updater = BeliefUpdater(0.036)  # Start with 3.6%
# updater.update_belief(0.92, 0.15844)  # First test
# updater.update_belief(0.95, ?)  # Second test (calculate P(+2) from the updated belief)
# print(f"Final belief: {updater.get_current_belief():.3f}")
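Once you have attempted the tasks yourself, the following completion can be used to check your numbers (a sketch: any implementation that applies posterior = likelihood × prior / evidence at each step is equivalent):

```python
def bayes_theorem(prior, likelihood, evidence_probability):
    """P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_probability

class BeliefUpdater:
    """Maintains P(H) and updates it as evidence arrives."""

    def __init__(self, initial_belief):
        self.current_belief = initial_belief

    def update_belief(self, likelihood, evidence_probability):
        self.current_belief = bayes_theorem(
            self.current_belief, likelihood, evidence_probability)
        return self.current_belief

    def get_current_belief(self):
        return self.current_belief

# Cancer screening check; false-positive rates are 1 - specificity.
updater = BeliefUpdater(0.036)
p1 = 0.92 * 0.036 + 0.13 * (1 - 0.036)   # P(+) for the screening test
updater.update_belief(0.92, p1)          # -> about 0.209
b = updater.get_current_belief()
p2 = 0.95 * b + 0.07 * (1 - b)           # P(+) for the biopsy, given updated belief
updater.update_belief(0.95, p2)          # -> about 0.782
print(f"Final belief: {updater.get_current_belief():.3f}")
```

The key detail is that the second evidence probability p2 must be computed from the *updated* belief, not from the original prior.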