Progression: Starting from the initial state and moving forward to the goal
Forward search is a planning algorithm that starts from the initial state and applies actions to progress toward the goal.
Think of it like exploring a maze: you start at the entrance and try different paths until you reach the exit.
The formal algorithm for forward state-space search:
function FORWARD-SEARCH(problem) returns solution or failure
    // Initialize frontier with initial state
    frontier ← {problem.INITIAL}
    explored ← {}
    // NOTE: This is a GENERIC search skeleton!
    // It becomes BFS, DFS, or A* depending on how you manage the frontier:
    //   - BFS: frontier is a QUEUE (FIFO)
    //   - DFS: frontier is a STACK (LIFO)
    //   - A*:  frontier is a PRIORITY QUEUE ordered by f(n) = g(n) + h(n)
    while frontier is not empty do
        // Choose a state from frontier (strategy depends on data structure)
        state ← POP(frontier)
        // Check if goal is reached
        if problem.GOAL-TEST(state) then return SOLUTION(state)
        // Mark state as explored
        explored ← explored ∪ {state}
        // Expand state: find applicable actions
        for each action in APPLICABLE-ACTIONS(state) do
            // Apply action to get successor state
            successor ← RESULT(state, action)
            // Add to frontier if not explored
            if successor ∉ explored and successor ∉ frontier then
                frontier ← frontier ∪ {successor}
    return failure
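The skeleton above can be sketched in Python. This is a minimal BFS instantiation (FIFO frontier), with states as frozensets of ground fluents; the fluent names and the `(name, preconditions, add_effects, del_effects)` action encoding are illustrative assumptions, not a fixed API.

```python
from collections import deque

def forward_search(initial, goal, actions):
    """BFS forward search. States are frozensets of ground fluents.
    Each action is a (name, preconditions, adds, deletes) tuple of sets."""
    frontier = deque([(initial, [])])          # FIFO queue -> BFS
    explored = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # goal test: all goal fluents hold
            return plan
        for name, pre, adds, deletes in actions:
            if pre <= state:                   # action applicable?
                successor = frozenset((state - deletes) | adds)
                if successor not in explored:
                    explored.add(successor)
                    frontier.append((successor, plan + [name]))
    return None                                # failure

# Toy robot problem: move from RoomA to RoomB, then grasp the box.
initial = frozenset({"at(robot,RoomA)", "at(box,RoomB)"})
goal = frozenset({"holding(box)"})
actions = [
    ("Move(RoomA,RoomB)",
     frozenset({"at(robot,RoomA)"}),
     frozenset({"at(robot,RoomB)"}),
     frozenset({"at(robot,RoomA)"})),
    ("Grasp(box)",
     frozenset({"at(robot,RoomB)", "at(box,RoomB)"}),
     frozenset({"holding(box)"}),
     frozenset({"at(box,RoomB)"})),
]
print(forward_search(initial, goal, actions))
# prints ['Move(RoomA,RoomB)', 'Grasp(box)']
```

Swapping the deque for a stack gives DFS; a priority queue keyed on g(n) + h(n) gives A*.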
Let's walk through forward search step-by-step with a simple robot grasping problem!
Available Actions:
Move(from, to),
Grasp(box)
Choose Action: Move(RoomA, RoomB)
Choose Action: Grasp(Box)
Watch forward search solve the Spare Tire problem step-by-step!
Solution Plan: Move(RoomA, RoomB) → Grasp(Box)
An action is applicable if all its preconditions are satisfied in the current state.
Returns the set of all actions whose preconditions are met:
function APPLICABLE-ACTIONS(state) returns set of actions
    applicable ← {}
    for each action in ALL-ACTIONS do
        if PRECONDITIONS(action) ⊆ state then
            applicable ← applicable ∪ {action}
    return applicable
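With states as sets of fluents, the subset test above is one line of Python. A minimal sketch using a Spare Tire fragment (the fluent spellings here are illustrative assumptions):

```python
def applicable_actions(state, all_actions):
    """Return names of actions whose preconditions all hold in the state.
    Each action is a (name, preconditions) pair of (str, frozenset)."""
    return [name for name, pre in all_actions if pre <= state]

# Spare Tire fragment: put-on(spare) is NOT applicable yet,
# because tire-at(spare,ground) does not hold in the current state.
state = frozenset({"tire-at(spare,trunk)", "tire-at(flat,axle)"})
actions = [
    ("remove(spare,trunk)", frozenset({"tire-at(spare,trunk)"})),
    ("put-on(spare)",       frozenset({"tire-at(spare,ground)"})),
]
print(applicable_actions(state, actions))   # ['remove(spare,trunk)']
```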
put-on(spare-tire) requires tire-at(spare-tire, ground), which is NOT in the current state.
Grounding is the process of replacing variables in action schemas with specific objects to create executable (ground) actions.
(:action move
:parameters (?r - robot ?from ?to - room)
:precondition (at ?r ?from)
:effect (and
(not (at ?r ?from))
(at ?r ?to))
)
Variables: ?r, ?from, ?to
move(robot1, kitchen, bedroom), move(robot1, bedroom, kitchen), move(robot1, kitchen, bathroom), ...
Constants: robot1, kitchen, bedroom, bathroom
An action schema with n variables grounds into one action for every combination of objects, so the number of ground actions grows combinatorially with n. For example, the move action generates:
3 robots × 4 from-locations × 3 to-locations = 36 ground actions!
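The count can be checked by enumerating the groundings with itertools. The specific robot and room names below (robot2, robot3, hallway) are assumptions added to reach the 3 × 4 counts:

```python
from itertools import product

robots = ["robot1", "robot2", "robot3"]
rooms = ["kitchen", "bedroom", "bathroom", "hallway"]

# Ground move(?r, ?from, ?to): every robot x from-room x distinct to-room
ground_moves = [f"move({r},{f},{t})"
                for r, f, t in product(robots, rooms, rooms)
                if f != t]
print(len(ground_moves))   # 3 robots x 4 from x 3 to = 36
```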
A ground state contains only ground atomic fluents (no variables, just constants).
Forward search can generate a huge number of successor states because many actions may be applicable from any given state.
The branching factor (b) is the average number of successor states generated from each state.
Average successors per state
A heuristic is a function h(state) that estimates the cost to reach the goal from a given state. It guides the search toward promising states.
No heuristic: blind search - explores all directions equally
❌ Slow and inefficient
Good heuristic: guided search - focuses on promising states
✅ Fast and efficient
Bad heuristic: misleading - may miss good paths
⚠️ Can hurt performance
Count number of unsatisfied goal fluents
h(state) = |goal \ state|
Example: If goal has 3 fluents and state has 1 of them, h(state) = 2
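With set-based states, the goal-count heuristic is a set difference. A minimal sketch reproducing the example above (the fluent names are illustrative):

```python
def h_goal_count(state, goal):
    # number of goal fluents not yet satisfied in the state
    return len(goal - state)

goal  = frozenset({"at(robot,RoomB)", "holding(box)", "door-closed"})
state = frozenset({"at(robot,RoomB)", "at(box,RoomB)"})
print(h_goal_count(state, goal))   # 2 unsatisfied: holding(box), door-closed
```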
Ignore delete effects and find shortest path
More accurate but more expensive to compute
Assume actions only add effects (never delete)
Admissible: Never overestimates actual cost
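A minimal sketch of the delete-relaxation idea: ignore delete effects and count how many parallel expansion layers are needed before every goal fluent becomes reachable (essentially a unit-cost planning-graph level count, not a full relaxed-plan extraction). The action encoding and fluent names are assumptions:

```python
def h_delete_relaxed(state, goal, actions):
    """Layers of relaxed reachability until all goal fluents hold.
    Actions are (name, preconditions, add_effects); deletes are ignored."""
    reachable = set(state)
    layers = 0
    while not goal <= reachable:
        added = set()
        for _, pre, adds in actions:
            if pre <= reachable:           # applicable in the relaxed state
                added |= adds
        if not (added - reachable):        # fixpoint reached: goal unreachable
            return float("inf")
        reachable |= added
        layers += 1
    return layers

state = frozenset({"at(robot,RoomA)", "at(box,RoomB)"})
goal = frozenset({"holding(box)"})
actions = [
    ("Move(RoomA,RoomB)", frozenset({"at(robot,RoomA)"}),
     frozenset({"at(robot,RoomB)"})),
    ("Grasp(box)", frozenset({"at(robot,RoomB)", "at(box,RoomB)"}),
     frozenset({"holding(box)"})),
]
print(h_delete_relaxed(state, goal, actions))   # 2
```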
Precompute costs for simplified problem
Fast lookup after initial computation
Never overestimates the true cost
h(state) ≤ h*(state)
Consistent (a stronger property): h(n) ≤ cost(n, n') + h(n') for every successor n'
Guarantees optimality with A*
Provides good guidance
Higher values = better (if admissible)
Fast calculation
Trade-off: accuracy vs speed
Begin where you are
Find what you can do
Use heuristics to guide