Artificial Intelligence: A Modern Approach Chapter 3: Solving Problems by Searching
Iterative Deepening A*
Iterative deepening search that uses the f-cost (g + h) as the cutoff rather than the depth. Each iteration's cutoff is the smallest f-cost of any node that exceeded the cutoff on the previous iteration.
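A minimal sketch of the idea in Python, assuming a hypothetical problem object with initial, actions(state), result(state, action), goal_test(state), and step_cost(state, action, next_state) members (see the Problem entry below) and a heuristic function h:

    import math

    def ida_star(problem, h):
        # Depth-first search bounded by f = g + h; the bound is raised to the
        # smallest f-cost that exceeded it on the previous iteration.
        def search(state, g, bound, path):
            f = g + h(state)
            if f > bound:
                return f, None                   # report the f-cost that broke the bound
            if problem.goal_test(state):
                return f, list(path)
            next_bound = math.inf
            for action in problem.actions(state):
                child = problem.result(state, action)
                if child in path:                # skip loopy paths
                    continue
                path.append(child)
                t, found = search(child, g + problem.step_cost(state, action, child),
                                  bound, path)
                path.pop()
                if found is not None:
                    return t, found
                next_bound = min(next_bound, t)
            return next_bound, None

        bound = h(problem.initial)
        while True:
            t, found = search(problem.initial, 0, bound, [problem.initial])
            if found is not None:
                return found                     # list of states from the initial state to a goal
            if t == math.inf:
                return None                      # search space exhausted, no solution
            bound = t                            # smallest f-cost that exceeded the old bound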
Repeated State
A state that is identical to a previously generated state, typically reached by a loopy or otherwise redundant path.
Breadth-First Search
A strategy in which the root node is expanded first, then all the successors of the root node, then their successors, and so on. Complete if the goal is at some finite depth. Optimal if the path cost is a nondecreasing function of depth (for example, when all step costs are equal). Impractical for large problems because its time and space requirements grow exponentially with depth.
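A minimal graph-search sketch in Python, assuming hashable states and the same hypothetical problem interface used throughout these notes (see the Problem entry below):

    from collections import deque

    def breadth_first_search(problem):
        # Expand the shallowest unexpanded node first, using a FIFO queue for
        # the frontier. The goal test is applied when a node is generated,
        # which is safe because BFS reaches the shallowest goal first.
        if problem.goal_test(problem.initial):
            return []
        frontier = deque([(problem.initial, [])])    # (state, actions taken so far)
        explored = {problem.initial}
        while frontier:
            state, actions = frontier.popleft()
            for action in problem.actions(state):
                child = problem.result(state, action)
                if child not in explored:
                    if problem.goal_test(child):
                        return actions + [action]
                    explored.add(child)
                    frontier.append((child, actions + [action]))
        return None                                  # no solution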
Consistency
AKA monotonicity. A heuristic h is consistent if, for every node n and every successor n' generated by an action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n': h(n) ≤ c(n, a, n') + h(n').
Goal formulation
Based on the current situation and the agent's performance measure. The first step of problem solving.
Solution
The output of a search algorithm, in the form of an action sequence, for a problem given as input: a path through the state space from the initial state to a goal state.
Total Cost
The path cost of the solution plus the search cost, which is typically the time taken but can also include the memory used.
Parent Node
The node from which a given node was generated.
Search
The process of looking for a sequence of actions that reaches the goal.
Optimal Solution
The path with the lowest path cost among all solutions.
Diameter
The maximum number of steps needed to reach any state from any other state in the state space.
Leaf Node
A node with no children
Problem Formulation
The process of deciding what actions and states to consider, given a goal.
Depth-limited search
A depth-first search with a predetermined depth limit. Incomplete if the limit is less than the depth of the shallowest goal; not optimal if the limit is greater than that depth.
Iterative Deepening Search
A depth-limited search that gradually increases the limit (0, 1, 2, ...) until a goal is found. States near the root are generated multiple times, but the repeated work is small because most nodes lie at the deepest level.
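A minimal sketch in Python, again assuming the hypothetical problem interface described under Problem below. Here depth_limited_search returns a list of actions, the string 'cutoff', or None for failure, and the max_depth cap is only an arbitrary safeguard:

    def depth_limited_search(problem, limit):
        def recurse(state, limit):
            if problem.goal_test(state):
                return []                        # goal reached: no further actions needed
            if limit == 0:
                return 'cutoff'                  # ran into the depth limit
            cutoff_occurred = False
            for action in problem.actions(state):
                result = recurse(problem.result(state, action), limit - 1)
                if result == 'cutoff':
                    cutoff_occurred = True
                elif result is not None:
                    return [action] + result     # prepend the action that led to the solution
            return 'cutoff' if cutoff_occurred else None

        return recurse(problem.initial, limit)

    def iterative_deepening_search(problem, max_depth=50):
        # Run depth-limited search with limits 0, 1, 2, ... until the result
        # is no longer a cutoff.
        for limit in range(max_depth + 1):
            result = depth_limited_search(problem, limit)
            if result != 'cutoff':
                return result
        return 'cutoff'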
Backtracking Search
A depth-first search that only generates one successor at a time.
Transition Model
A description of what each action does.
Goal
A set of desired world states; adopting a goal organizes the agent's behavior by limiting the objectives it is trying to achieve and hence the actions it needs to consider.
Path cost
A function that assigns a numeric cost to each path. The step cost is the cost of taking a single action in a state to reach its successor.
Search Tree
A tree with the initial state at the root, branches corresponding to actions, and nodes corresponding to states in the state space.
Admissible Heuristic
A heuristic that never overestimates the cost to reach the goal.
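For example, here is a minimal sketch of the Manhattan-distance heuristic for the 8-puzzle, assuming states are represented as 9-tuples listing the tile on each square in row-major order, with 0 for the blank. It never overestimates, because every move slides exactly one tile one square:

    def manhattan_distance(state, goal):
        # Sum, over the eight tiles, of the horizontal plus vertical distance
        # between each tile's current square and its square in the goal state.
        total = 0
        for tile in range(1, 9):                 # the blank (0) is not counted
            i, j = divmod(state.index(tile), 3)
            gi, gj = divmod(goal.index(tile), 3)
            total += abs(i - gi) + abs(j - gj)
        return total

For the start state (7, 2, 4, 5, 0, 6, 8, 3, 1) and goal (0, 1, 2, 3, 4, 5, 6, 7, 8) this evaluates to 18, a lower bound on the true solution cost.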
Relaxed Problem
A problem with fewer restrictions on the actions.
Uninformed Search
Algorithms that are given no information about the problem other than its definition.
Informed Search
Algorithms that use problem-specific knowledge beyond the definition of the problem itself, giving guidance on where to look for solutions or on whether one state is more promising than another.
Problem Solving Agent
A goal-based agent that solves problems using atomic representations of states.
Planning agent
A goal-based agent that uses factored or structured representations.
Expanding
Applying each legal action to a state, thereby generating a new set of states.
Problem
Consists of five components: 1. Initial state 2. Actions 3. Transition model 4. Goal test 5. Path cost.
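A minimal sketch of such a problem as a Python class (the names here are illustrative, not a fixed API); the other sketches in these notes assume this interface:

    class Problem:
        def __init__(self, initial):
            self.initial = initial                   # 1. initial state

        def actions(self, state):                    # 2. actions applicable in a state
            raise NotImplementedError

        def result(self, state, action):             # 3. transition model
            raise NotImplementedError

        def goal_test(self, state):                  # 4. goal test
            raise NotImplementedError

        def step_cost(self, state, action, result):  # 5. path cost, accumulated from step costs
            return 1                                 # default: every action costs 1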
Generating
Creating a new set of states by performing an action on a given node.
Goal Test
Determines if a given state is a goal state.
Optimality
Does the strategy find the optimal solution?
SMA*
Simplified memory-bounded A*. When memory is full, drops the leaf node with the highest f-value and backs the forgotten node's f-value up to its parent, so that its subtree can be regenerated later if it becomes relevant.
Pruned
Eliminating possibilities from consideration without having to examine them
Heuristic Function
Estimated cost of the cheapest path from the state at node n to a goal state, written h(n).
A* Search
Evaluates nodes by combining g(n), the cost to reach the node, and h(n), the estimated cost from the node to the goal: f(n) = g(n) + h(n) is the estimated cost of the cheapest solution through n. Complete and optimal: the tree-search version is optimal if h is admissible, and the graph-search version is optimal if h is consistent.
Best-first search
Expands nodes according to an evaluation function f(n). Implemented with a priority queue ordered by estimated cost.
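A minimal graph-search sketch in Python, assuming the hypothetical Problem interface above, hashable states, and an evaluation function f(state, g). Choosing f(s, g) = g + h(s) gives A*, f(s, g) = g gives uniform-cost search, and f(s, g) = h(s) gives greedy best-first search:

    import heapq
    import itertools

    def best_first_search(problem, f):
        counter = itertools.count()              # tie-breaker so the heap never compares states
        start = problem.initial
        frontier = [(f(start, 0), next(counter), start, 0, [])]   # (f, tie, state, g, actions)
        best_g = {start: 0}                      # cheapest path cost found so far per state
        while frontier:
            _, _, state, g, actions = heapq.heappop(frontier)
            if g > best_g.get(state, g):         # stale entry superseded by a cheaper path
                continue
            if problem.goal_test(state):         # goal test on expansion, not on generation
                return actions
            for action in problem.actions(state):
                child = problem.result(state, action)
                child_g = g + problem.step_cost(state, action, child)
                if child not in best_g or child_g < best_g[child]:
                    best_g[child] = child_g
                    heapq.heappush(frontier, (f(child, child_g), next(counter),
                                              child, child_g, actions + [action]))
        return None                              # frontier exhausted: no solution

For example, best_first_search(problem, lambda s, g: g + h(s)) runs A* with heuristic h.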
Depth-first search
Expands the deepest node in the current frontier. Implemented using a LIFO queue (a stack). Not optimal, and the tree-search version can fail to terminate on infinite paths. Low space complexity: O(bm) for branching factor b and maximum depth m.
Greedy Best First Search
Expands the node that appears closest to the goal, that is, the node with the lowest h(n). Not optimal, and the tree-search version is incomplete even in a finite state space.
Uniform-cost search
Expands the node with the lowest path cost g(n), implemented by storing the frontier as a priority queue. The goal test is applied when a node is selected for expansion, not when it is generated. Expands nodes in order of optimal path cost, so it is optimal; complete as long as every step cost exceeds some small positive constant.
Time Complexity
How long does it take to find a solution?
Space Complexity
How much memory is needed to perform the search?
Open-loop
Ignoring the percepts received from an environment because the agent already knows what will happen based on the solution.
State Space
The set of all states reachable from the initial state, defined implicitly by the initial state, the actions, and the transition model. Represented by a graph; a path through the state space is a sequence of states connected by a sequence of actions.
Completeness
Is the algorithm guaranteed to find a solution when there is one?
Redundant Paths
More than one way to get from one state to another.
Execution
Performing the action sequence recommended by a solution.
Actions
The possible moves an agent can make in a given state. An action is applicable in a state if it can be executed in that state.
Subproblem
A problem whose goal is a more general version of the original goal, concerning only part of the state (for example, getting just some of the tiles into place in the 8-puzzle); the cost of its optimal solution is a lower bound on the cost of the full problem.
Abstraction
Removing details from a representation.
Atomic Representation
States of the world are considered as wholes, with no internal structure visible to the problem-solving algorithm.
Pattern Database
A database storing the exact solution cost for every possible instance of a subproblem (for example, every configuration of a chosen set of tiles and the blank in the 8-puzzle); looking up the current configuration gives an admissible heuristic for the full problem.
Disjoint Pattern Database
Two or more pattern databases built over disjoint sets of tiles, each counting only the moves of its own tiles; because no move is counted in more than one database, their costs can be added and the sum is still an admissible heuristic.
Branching Factor
The maximum number of successors of any node.
Depth
The number of steps along the path from the root to a given node.
Successor
The result states reachable from a given state by performing an action.
Frontier
The set of all leaf nodes available for expansion at a given point. Also known as the open list.
Explored Set
The set of all states that have been expanded. AKA closed list.
Initial State
The state the agent starts in.
Child Node
The successor(s) of a given node
Search Strategy
The search algorithm's strategy for deciding which node to expand next.
Bidirectional Search
Two simultaneous searches: one forward from the initial state and one backward from the goal. The goal test is replaced by a test for whether the two frontiers intersect. Requires a method of computing predecessors. Can reduce the time complexity from O(b^d) to O(b^(d/2)).
Recursive best-first search
Uses an f_limit variable to keep track of the f-value of the best alternative path available from any ancestor of the current node. As the recursion unwinds, it replaces each node's f-value with the best f-value of its children, so that a forgotten subtree can be re-expanded later if it becomes the most promising.