
Forward Search Planning

Progression: Starting from the initial state and moving forward to the goal

What is Forward Search?

Forward Search (Progression)

Forward search is a planning algorithm that starts from the initial state and applies actions to progress toward the goal.

Think of it like exploring a maze: you start at the entrance and try different paths until you reach the exit.

How It Works
  1. Start: Begin with initial state
  2. Expand: Find all applicable actions
  3. Apply: Execute each action → new states
  4. Check: Is goal reached?
  5. Repeat: Continue from new states
Key Characteristics
  • ✅ Sound: Every plan it returns is valid
  • ✅ Complete: Finds a plan if one exists
  • ⚠️ Can be slow: Many states to explore
  • 📊 Uses heuristics: To guide the search
Forward Search Visual Overview: Initial State → Apply Actions → Successor States → Goal!

The Progression Algorithm

The formal algorithm for forward state-space search:

function FORWARD-SEARCH(problem) returns solution or failure
    // Initialize frontier with initial state
    frontier ← {problem.INITIAL}
    explored ← {}
    
    // NOTE: This is a GENERIC search skeleton!
    // It becomes A*, BFS, or DFS depending on how you manage the frontier:
    //   - BFS: frontier is a QUEUE (FIFO)
    //   - DFS: frontier is a STACK (LIFO)
    //   - A*:  frontier is a PRIORITY QUEUE ordered by f(n) = g(n) + h(n)
    
    while frontier is not empty do
        // Choose a state from frontier (strategy depends on data structure)
        state ← POP(frontier)
        
        // Check if goal is reached
        if problem.GOAL-TEST(state) then
            return SOLUTION(state)
        
        // Mark state as explored
        explored ← explored ∪ {state}
        
        // Expand state: find applicable actions
        for each action in APPLICABLE-ACTIONS(state) do
            // Apply action to get successor state
            successor ← RESULT(state, action)
            
            // Add to frontier if not explored
            if successor ∉ explored and successor ∉ frontier then
                frontier ← frontier ∪ {successor}
    
    return failure
Key Data Structures
  • Frontier: States to explore
  • Explored: Already visited states
  • Path: Sequence of actions
Key Operations
  • APPLICABLE-ACTIONS: Find valid actions
  • RESULT: Apply action to state
  • GOAL-TEST: Check if goal reached
Properties
  • Complete: Yes (if the state space is finite)
  • Optimal: Depends on the search strategy
  • Time: O(bᵈ) in the worst case
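The generic skeleton above can be instantiated directly. Below is a minimal Python sketch using a FIFO frontier (so it behaves as BFS) and states represented as frozensets of ground fluents; it is run on the robot-grasping problem worked through below, with fluent strings abbreviated (R for Robot) as an illustrative assumption:

```python
from collections import deque

def forward_search(initial, goal, actions):
    """BFS instance of the generic skeleton (frontier = FIFO queue).

    States are frozensets of ground fluents; each action is a tuple
    (name, preconditions, add_effects, delete_effects) of fluent sets.
    Returns a list of action names, or None if no plan exists.
    """
    frontier = deque([(initial, [])])
    explored = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                         # GOAL-TEST: goal ⊆ state
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:                      # APPLICABLE-ACTIONS check
                successor = (state - delete) | add    # RESULT(state, action)
                if successor not in explored:
                    explored.add(successor)
                    frontier.append((successor, plan + [name]))
    return None

# The robot-grasping domain from the example below (abbreviated fluents):
F = frozenset
actions = [
    ("Move(A,B)",  F({"At(R,A)"}), F({"At(R,B)"}), F({"At(R,A)"})),
    ("Move(B,A)",  F({"At(R,B)"}), F({"At(R,A)"}), F({"At(R,B)"})),
    ("Grasp(Box)", F({"At(R,B)", "At(Box,B)", "Empty(R)"}),
                   F({"Holding(R,Box)"}), F({"Empty(R)"})),
]
plan = forward_search(F({"At(R,A)", "At(Box,B)", "Empty(R)"}),
                      F({"Holding(R,Box)"}), actions)
print(plan)   # ['Move(A,B)', 'Grasp(Box)']
```

Swapping the deque for a stack would give DFS, and a priority queue ordered by f(n) = g(n) + h(n) would give A*, exactly as the comment in the pseudocode notes.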

Simple Example: Forward Search with Robot Grasping

Let's walk through forward search step-by-step with a simple robot grasping problem!

Problem Statement
Initial State:
At(Robot, RoomA) At(Box, RoomB) Empty(Robot)
Goal:
Holding(Robot, Box)

Available Actions: Move(from, to), Grasp(box)

Forward Search Step-by-Step
Step 0: Initial State
Current State:
At(Robot, RoomA) At(Box, RoomB) Empty(Robot)
Applicable Actions:
Move(RoomA, RoomB)
Precond: At(Robot, RoomA) ✓
Grasp(Box)
Precond: At(Robot, ?loc) ∧ At(Box, ?loc) ✗
Is goal reached? No - We need Holding(Robot, Box) but we have Empty(Robot)

Choose Action: Move(RoomA, RoomB)

Step 1: After Move(RoomA, RoomB)
Current State:
At(Robot, RoomB) At(Box, RoomB) Empty(Robot)
Robot moved to RoomB!
Applicable Actions:
Move(RoomB, RoomA)
Precond: At(Robot, RoomB) ✓
Grasp(Box)
Precond: At(Robot, RoomB) ∧ At(Box, RoomB) ∧ Empty(Robot) ✓✓✓
Is goal reached? No - Still have Empty(Robot), need Holding(Robot, Box)

Choose Action: Grasp(Box)

Step 2: After Grasp(Box) - GOAL REACHED!
Current State:
At(Robot, RoomB) At(Box, RoomB) Holding(Robot, Box)
Robot grasped the box!
Goal Achieved!
Is goal reached? YES! State contains Holding(Robot, Box) ✓
Solution Plan
Actions Sequence:
  1. Move(RoomA, RoomB)
  2. Grasp(Box)
Search Statistics:
  • States Explored: 3
  • Actions Tried: 2
  • Plan Length: 2
  • Depth: 2
Search Tree Visualization
S₀ [At(R,A), At(B,B), Empty(R)]
 └─ Move(A,B) → S₁ [At(R,B), At(B,B), Empty(R)]
     ├─ Grasp(Box) → S₂ ⭐ [At(R,B), At(B,B), Holding(R,B)] (goal!)
     └─ Move(B,A) → S₃ (not explored: goal found)
Key Insights
  • Forward search starts from initial state
  • At each step, we find applicable actions
  • We apply actions to generate successor states
  • We check each state against the goal
  • Search stops when goal is reached
Notice
  • Grasp(Box) was NOT applicable in S₀
  • We had to Move first to satisfy preconditions
  • The search tree shows unexplored branches
  • In real problems, there are many more branches!
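The two-step trace above can be replayed directly with set-based states. This is a minimal sketch, with abbreviated fluent strings standing in for the predicates used in the example:

```python
# Step 0: initial state as a set of ground fluents
state = {"At(R,A)", "At(Box,B)", "Empty(R)"}

# Step 1: Move(A,B) is applicable because its precondition At(R,A) holds.
# (Grasp(Box) is NOT applicable yet: At(R,B) is missing.)
assert {"At(R,A)"} <= state
state = (state - {"At(R,A)"}) | {"At(R,B)"}    # delete At(R,A), add At(R,B)

# Step 2: after moving, all three Grasp(Box) preconditions hold in S1
assert {"At(R,B)", "At(Box,B)", "Empty(R)"} <= state
state = (state - {"Empty(R)"}) | {"Holding(R,Box)"}

# The goal fluent is now in the state, so the search stops
assert "Holding(R,Box)" in state
print(sorted(state))
```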

Interactive Demo: Forward Search in Action

Watch forward search solve the Spare Tire problem step-by-step!

Spare Tire Problem
Problem: Initially, the flat tire is on the axle and the spare tire is in the trunk. Goal: the spare tire is on the axle.

Finding Applicable Actions

An action is applicable if all its preconditions are satisfied in the current state.

APPLICABLE-ACTIONS(state)

Returns the set of all actions whose preconditions are met:

function APPLICABLE-ACTIONS(state) returns set of actions
    applicable ← {}
    
    for each action in ALL-ACTIONS do
        if PRECONDITIONS(action) ⊆ state then
            applicable ← applicable ∪ {action}
    
    return applicable
Example: Finding Applicable Actions
Current State:
tire-at(flat-tire, axle) tire-at(spare-tire, trunk)
✅ Applicable Actions:
remove(flat-tire, axle)
✓ Precond: tire-at(flat-tire, axle)
remove(spare-tire, trunk)
✓ Precond: tire-at(spare-tire, trunk)
⚠️ Not Applicable: put-on(spare-tire) requires tire-at(spare-tire, ground) which is NOT in current state.
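The applicability test above is a plain subset check when states and preconditions are sets of fluents. A minimal sketch for the spare-tire step (only the precondition sets are needed):

```python
# Preconditions of each ground action in the spare-tire example
preconds = {
    "remove(flat-tire, axle)":   {"tire-at(flat-tire, axle)"},
    "remove(spare-tire, trunk)": {"tire-at(spare-tire, trunk)"},
    "put-on(spare-tire)":        {"tire-at(spare-tire, ground)"},
}

def applicable_actions(state):
    # PRECONDITIONS(action) ⊆ state, exactly as in the pseudocode
    return [name for name, pre in preconds.items() if pre <= state]

state = {"tire-at(flat-tire, axle)", "tire-at(spare-tire, trunk)"}
print(applicable_actions(state))
# ['remove(flat-tire, axle)', 'remove(spare-tire, trunk)']
```

put-on(spare-tire) is filtered out because tire-at(spare-tire, ground) is not in the state, matching the note above.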

Ground States & Grounding

What is Grounding?

Grounding is the process of replacing variables in action schemas with specific objects to create executable (ground) actions.

Action Schema (Variables)
(:action move
  :parameters (?r - robot ?from ?to - room)
  :precondition (at ?r ?from)
  :effect (and 
    (not (at ?r ?from))
    (at ?r ?to))
)

Variables: ?r, ?from, ?to

Ground Actions (Constants)
move(robot1, kitchen, bedroom)
move(robot1, bedroom, kitchen)
move(robot1, kitchen, bathroom)
...and more combinations

Constants: robot1, kitchen, bedroom

Grounding Process
1 Action Schema
with n variables
Many Ground Actions
(combinations of objects)
Combinatorial Explosion: If you have 3 robots and 4 rooms, the move action generates: 3 robots × 4 from-locations × 3 to-locations = 36 ground actions!
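The 36-action count can be reproduced by enumerating all variable bindings with itertools. The extra room names beyond kitchen and bedroom are assumptions added to reach the 4 rooms in the example:

```python
from itertools import product

robots = ["robot1", "robot2", "robot3"]
rooms  = ["kitchen", "bedroom", "bathroom", "hall"]   # 4 rooms (names assumed)

# Ground move(?r, ?from, ?to): every binding of objects to variables,
# skipping the degenerate case ?from = ?to
ground_moves = [f"move({r}, {src}, {dst})"
                for r, src, dst in product(robots, rooms, rooms)
                if src != dst]

print(len(ground_moves))   # 3 robots x 4 from-rooms x 3 to-rooms = 36
```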
Ground States

A ground state contains only ground atomic fluents (no variables, just constants).

Example Ground State:
at(robot1, kitchen) empty(robot1) at(box-a, bedroom)
All fluents are ground:
  • ✓ No variables (?)
  • ✓ Only specific objects
  • ✓ Can be directly evaluated

Branching Factor Challenge

The Branching Factor Problem

Forward search can generate a huge number of successor states because many actions may be applicable from any given state.

What is Branching Factor?

The branching factor (b) is the average number of successor states generated from each state.


Why It Matters
  • Time Complexity: O(bᵈ)
  • Space Complexity: O(bᵈ)
  • Exponential Growth: Very fast!
  • Example: b=10, d=5 → 100,000 states!
Branching Factor Visualization: with branching factor b, the number of states at depth k grows as bᵏ (1, b, b², b³, ...).
Solutions to High Branching Factor
  1. Heuristics: Guide search toward goal
  2. Pruning: Eliminate useless actions
  3. Domain-specific knowledge: Add constraints
  4. Better algorithms: A*, IDA*, etc.

Heuristics for Forward Search

What are Heuristics?

A heuristic is a function h(state) that estimates the cost to reach the goal from a given state. It guides the search toward promising states.

Without Heuristic

Blind search - explores all directions equally

❌ Slow and inefficient

With Good Heuristic

Guided search - focuses on promising states

✅ Fast and efficient

Bad Heuristic

Misleading - may miss good paths

⚠️ Can hurt performance

Common Heuristics for Planning
1. Goal Count Heuristic

Count the number of unsatisfied goal fluents

h(state) = |goal - state|

Example: If goal has 3 fluents and state has 1 of them, h(state) = 2
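With set-based states, the goal-count heuristic is one line of set difference. A minimal sketch reproducing the example (the block-world fluent names are made up for illustration):

```python
def goal_count(state, goal):
    # Number of goal fluents not yet satisfied: |goal - state|
    return len(goal - state)

goal  = {"on(a, b)", "on(b, c)", "clear(a)"}   # 3 goal fluents
state = {"on(a, b)", "on(c, table)"}           # only 1 of them holds
print(goal_count(state, goal))                 # 2
```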

2. Relaxed Planning Graph

Ignore delete effects and find shortest path

More accurate but more expensive to compute

3. Delete Relaxation

Assume actions only add effects (never delete)

Admissible: Never overestimates actual cost

4. Pattern Database

Precompute costs for simplified problem

Fast lookup after initial computation

Desirable Heuristic Properties
Admissible

Never overestimates the true cost
h(state) ≤ h*(state)

Consistent (Monotonic)

h(n) ≤ cost(n, n') + h(n')
Guarantees optimality with A*

Informative

Provides good guidance
Higher values = better (if admissible)

Efficient to Compute

Fast calculation
Trade-off: accuracy vs speed
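To show how a heuristic plugs into the generic frontier, here is a sketch of greedy best-first forward search: the frontier is a priority queue ordered by h(state) alone, using the goal-count heuristic from above. The tiny domain and its fluent names are made up for illustration:

```python
import heapq

def greedy_best_first(initial, goal, actions, h):
    """Forward search with a priority queue ordered by h(state).

    Ordering by len(plan) + h(state) instead would give A* (optimal
    when h is admissible). Each action is a (name, preconditions,
    add_effects, delete_effects) tuple of fluent frozensets.
    """
    tie = 0                                   # tie-breaker so states never compare
    frontier = [(h(initial, goal), tie, initial, [])]
    explored = {initial}
    while frontier:
        _, _, state, plan = heapq.heappop(frontier)
        if goal <= state:
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:
                succ = (state - delete) | add
                if succ not in explored:
                    explored.add(succ)
                    tie += 1
                    heapq.heappush(frontier,
                                   (h(succ, goal), tie, succ, plan + [name]))
    return None

def goal_count(state, goal):                  # the goal-count heuristic
    return len(goal - state)

# A toy two-step domain (made-up fluents) to exercise the search:
F = frozenset
actions = [("pick", F({"at-obj"}),   F({"holding"}), F()),
           ("go",   F({"at-start"}), F({"at-obj"}),  F({"at-start"}))]
plan = greedy_best_first(F({"at-start"}), F({"holding"}), actions, goal_count)
print(plan)   # ['go', 'pick']
```

Greedy best-first is fast but not optimal; it inherits optimality only when switched to the A* ordering with an admissible, consistent h, as the properties above describe.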

Key Takeaways

✅ Forward Search Strengths
  • Simple and intuitive
  • Complete (finds solution if exists)
  • Sound (produces valid plans)
  • Works well with heuristics
  • Natural for most problems
⚠️ Forward Search Challenges
  • High branching factor
  • Can explore irrelevant states
  • Exponential time/space complexity
  • Needs good heuristics for efficiency
  • Many ground actions to consider
Summary
Start from Initial State

Begin where you are

Explore Applicable Actions

Find what you can do

Reach the Goal

Use heuristics to guide

Next: Learn about Backward Search and how it compares to Forward Search! Continue to Backward Search →