Ch. 3 Solving problems by searching
Composite heuristic
Uses whichever component heuristic is most accurate for each node. *h(n) = max{h1(n), ..., hm(n)} *If every hi is admissible, so is their max.
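A minimal sketch in Python (the helper name `composite` and the component heuristics in the usage line are illustrative, not from the book):

```python
def composite(*heuristics):
    """Pointwise max of several heuristics. If every component never
    overestimates (is admissible), their max is admissible too, and it
    dominates each individual component."""
    return lambda n: max(h(n) for h in heuristics)

# Hypothetical usage: h = composite(misplaced_tiles, manhattan_distance)
```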
Single-state problem
1) Fully observable, deterministic *Agent knows exactly what state it will be in, so the solution is a sequence.
Admissible heuristic
A heuristic that never overestimates the cost to reach the goal. *Its estimate is always less than or equal to the true cost.
Pattern database
1) Alternative to a relaxed-problem heuristic. 2) Stores the exact solution cost for every possible instance of a sub-problem. *The heuristic value for a state is the stored cost of the sub-problem whose configuration matches that state.
4 Ways to measure problem-solving performance
1) Completeness. If there is a solution, does it find it? 2) Optimality. Is the solution found the best? 3) Time complexity. How long to find a solution? (# nodes generated) 4) Space complexity. How many nodes are stored in memory? (# nodes stored in mem) *b: branching factor, m: maximum length of any path in the state space, d: depth of shallowest goal
Depth limited DFS (DLDFS)
1) DFS with a depth limit l added. 2) Doesn't expand a node if its depth (the length of the action sequence from the root) exceeds the limit. *The diameter of the state space (max distance from any state to any other state) is a good choice for the limit l. *Complete: No (fails if the shallowest solution lies deeper than l) *Optimal: No *Time complexity: O(b^l) *Space complexity: O(bl)
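A minimal recursive sketch, assuming a `problem` object with `initial_state`, `is_goal(s)`, and `successors(s)` yielding (action, next_state) pairs (these names are illustrative, not the book's pseudocode):

```python
def depth_limited_search(problem, limit):
    """Depth-limited DFS. Returns a list of actions, None (no solution
    exists below the limit), or 'cutoff' (the limit was hit, so a deeper
    solution may still exist)."""
    def recurse(state, depth):
        if problem.is_goal(state):
            return []
        if depth == limit:
            return 'cutoff'
        cutoff_occurred = False
        for action, next_state in problem.successors(state):
            result = recurse(next_state, depth + 1)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return [action] + result   # prepend action on the way back up
        return 'cutoff' if cutoff_occurred else None
    return recurse(problem.initial_state, 0)
```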
A* search
1) Evaluation function combines the path cost so far, g(n), with the estimated cost from n to the goal, h(n). *f(n) = g(n) + h(n) = estimated cost of the cheapest solution THROUGH node n. *Fans out from the start node, adding nodes in concentric bands of increasing f-cost. ***Not the answer to all search needs: the number of nodes within a contour can still be exponential. ***The complexity of A* often makes it impractical to insist on finding an optimal solution. *Complete: Yes *Optimal: Yes, if the heuristic is admissible (tree search) or consistent (graph search). *Space complexity: Very poor; keeps all generated nodes in memory, so unsuitable for many large-scale problems.
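A minimal graph-search sketch using a binary heap, under an assumed `problem` interface with `initial_state`, `is_goal(s)`, and `successors(s)` yielding (action, next_state, step_cost) triples:

```python
import heapq
from itertools import count

def a_star(problem, h):
    """A* graph search: expand frontier nodes in order of f = g + h."""
    start = problem.initial_state
    tie = count()                      # breaks ties without comparing states
    frontier = [(h(start), next(tie), 0, start, [])]   # (f, tie, g, state, actions)
    best_g = {start: 0}                # cheapest known cost to each state
    while frontier:
        f, _, g, state, actions = heapq.heappop(frontier)
        if problem.is_goal(state):
            return actions
        if g > best_g[state]:
            continue                   # stale entry; a cheaper path was found
        for action, nxt, cost in problem.successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + h(nxt), next(tie), g2, nxt, actions + [action]))
    return None
```

Setting h(n) = 0 turns this same loop into uniform-cost search; ordering by h(n) alone gives greedy best-first search.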
5 components defining a "Problem" KNOW!!!!!!!!!
1) INITIAL STATE. (Also potentially all possible states) State the agent starts in. In(s) 2) ACTIONS available to the agent. A set of possible actions. Action(s) 3) TRANSITION MODEL. Returns the state that results from performing action a in state s. Result(s, a) 4) GOAL TEST. Determines whether a state is the goal. IsGoal(s) 5) PATH COST. Function that assigns a numeric cost to each path. Cost(path)
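The five components map naturally onto a class; this is a sketch with illustrative method names, not the book's exact pseudocode identifiers:

```python
class Problem:
    """Skeleton of the five components defining a problem."""

    def __init__(self, initial_state):
        self.initial_state = initial_state        # 1) INITIAL STATE

    def actions(self, state):                     # 2) ACTIONS available in state
        raise NotImplementedError

    def result(self, state, action):              # 3) TRANSITION MODEL
        raise NotImplementedError

    def is_goal(self, state):                     # 4) GOAL TEST
        raise NotImplementedError

    def path_cost(self, cost_so_far, state, action, next_state):  # 5) PATH COST
        return cost_so_far + 1                    # default: unit step costs
```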
Proof that A* is optimal (Show 2 things, and derive conclusion from them)
1) If h(n) is consistent, then the values of f(n) along any path are non-decreasing. *g(n') = g(n) + c(n, a, n') *f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') >= g(n) + h(n) = f(n) (the final >= follows from consistency) *I.e., f never decreases as we move along a path. 2) Whenever A* selects a node n for expansion, the optimal path to that node has been found. *Proof by contradiction: if not, some frontier node n' would lie on the optimal path from the start node to n; since f is non-decreasing along paths, n' would have a lower f-cost and would have been expanded before n. Contradiction. ****From 1 and 2, nodes are expanded in non-decreasing order of f-cost. Therefore the first goal node selected for expansion must be an optimal solution.
State space
1) Initial state 2) Actions 3) Transition model *Together, these define the set of all states reachable from the initial state.
Search strategy
1) Initialize frontier with initial state. 2) While frontier isn't empty: *Select next node to process from frontier. *Check if isGoal. If goal, return sequence. *(If graph) Add node to visited set. *If not goal and not visited, expand selected node. *Searches differ primarily in how they select the next node to process.
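A sketch of that loop, under the same assumed `problem` interface as the other cards; the selection policy is the only moving part:

```python
from collections import deque

def graph_search(problem, use_fifo=True):
    """Generic graph search: FIFO selection gives BFS, LIFO gives DFS."""
    frontier = deque([(problem.initial_state, [])])  # (state, actions so far)
    visited = set()
    while frontier:
        # Selection policy: the only thing that differs between searches.
        state, actions = frontier.popleft() if use_fifo else frontier.pop()
        if problem.is_goal(state):
            return actions
        if state in visited:
            continue
        visited.add(state)                           # mark as explored
        for action, next_state in problem.successors(state):
            if next_state not in visited:
                frontier.append((next_state, actions + [action]))
    return None  # frontier exhausted: no solution
```

Replacing the deque with a priority queue ordered by an evaluation function gives the best-first family.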
Iterative deepening
1) Iterated DLDFS. 2) Calls depth-limited DFS at increasing depth limits. *Often the preferred uninformed search when the state space is large and the solution depth is unknown. *BFS-like exploration order, but with DFS space complexity. *Complete: Yes *Optimal: Yes (if step cost = 1) *Time complexity: O(b^d) *Space complexity: O(bd)
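A sketch that reuses `depth_limited_search` from the DLDFS card above:

```python
from itertools import count

def iterative_deepening_search(problem):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the result is
    no longer 'cutoff' (i.e., a solution was found, or none exists)."""
    for limit in count():
        result = depth_limited_search(problem, limit)
        if result != 'cutoff':
            return result   # a list of actions, or None (provably no solution)
```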
Breadth first search BFS
1) Nodes are stored in a first-in-first-out queue. 2) Solutions are explored level-wise: all nodes at a given level are explored before moving on to the next level. *Complete: Yes *Optimal: Yes (if all step costs are equal) *Time complexity: b + b^2 + ... + b^d = O(b^d) *Space complexity: O(b^d) (up to b^d nodes in the frontier at any given time)
Depth first search DFS
1) Nodes are stored in a stack. 2) Always expands the deepest node in the frontier next. *Basic workhorse of AI because of its good space complexity. *Complete: No (fails in infinite-depth state spaces) *Optimal: No (returns the first solution found, not the best) *Time complexity: O(b^m) *Space complexity: O(bm) (Linear space!!!)
Contingency problem
1) Non-deterministic and/or partially observable. *Percepts provide new information about current state on the fly. *Solution is a contingency plan or a policy. *Search and execution must often be interleaved in this kind of scenario.
Conformant problem
1) Non-observable (sensorless) *Agent may have no idea what state it is in; the solution, if one exists, is a sequence of actions.
Simplified memory-bounded A* (SMA*)
1) Proceeds like A*: keeps expanding the best leaf until memory is full. 2) Once memory is full, two steps occur. a) The worst leaf node is dropped from the frontier. b) The forgotten node's f-value is backed up to its parent, so the subtree can be regenerated later if every other path turns out worse.
Bi-directional search
1) Runs two simultaneous searches. *One forward from the initial state. *One backward from the goal state. 2) The goal test is replaced by a check of whether the frontiers of the two searches intersect. If they do, a solution has been found. *Complete: Yes *Optimal: Yes *Time complexity: O(b^(d/2)) (b^(d/2) + b^(d/2) is much less than b^d) *Space complexity: O(b^(d/2)) (at least one frontier must be kept in memory for the intersection check)
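A sketch over an undirected graph where `neighbors(s)` yields the states adjacent to s (an assumed interface; actions must be reversible to search backward). For strict optimality a production version needs a more careful stopping rule than "stop at the first intersection":

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Alternate BFS expansions from both ends; stop when frontiers meet.
    Returns the list of states on a path from start to goal."""
    if start == goal:
        return [start]
    # Each parents dict maps a reached state to its predecessor in that direction.
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        state = frontier.popleft()
        for nxt in neighbors(state):
            if nxt not in parents:
                parents[nxt] = state
                if nxt in other_parents:   # frontiers intersect: solution found
                    return nxt
                frontier.append(nxt)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b) or \
               expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            # Stitch the two half-paths together at the meeting state.
            path, s = [], meet
            while s is not None:           # walk back to start
                path.append(s)
                s = parents_f[s]
            path.reverse()
            s = parents_b[meet]
            while s is not None:           # walk forward to goal
                path.append(s)
                s = parents_b[s]
            return path
    return None
```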
Iterative-deepening A* (IDA*)
1) Standard iterative deepening, except that the cutoff is the f-cost g(n) + h(n) rather than the depth. KEY 2) The cutoff for each iteration is the smallest f-cost of any node that exceeded the cutoff on the previous iteration.
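A sketch under the same assumed `problem` interface, with `successors(s)` yielding (action, next_state, step_cost) triples:

```python
import math

def ida_star(problem, h):
    """IDA*: depth-first passes cut off by f = g + h; the next cutoff is
    the smallest f-value that exceeded the current one."""
    root = problem.initial_state

    def search(state, g, cutoff, path):
        f = g + h(state)
        if f > cutoff:
            return f, None              # report the overflowing f-value
        if problem.is_goal(state):
            return f, path
        minimum = math.inf              # smallest f seen beyond the cutoff
        for action, nxt, cost in problem.successors(state):
            t, solution = search(nxt, g + cost, cutoff, path + [action])
            if solution is not None:
                return t, solution
            minimum = min(minimum, t)
        return minimum, None

    cutoff = h(root)
    while cutoff < math.inf:
        cutoff, solution = search(root, 0, cutoff, [])
        if solution is not None:
            return solution
    return None                         # no solution exists
```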
Exploration problem
1) Unknown state space *Agent must explore the environment to discover what states and actions exist.
Search Based Agent Architecture
1) Update state 2) If action sequence is empty: *Formulate a goal *Formulate the problem *Search for problem solution, and store result 3) If action sequence wasn't empty: *Get next action to execute. *Execute the next action.
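A sketch of that loop; every parameter is a caller-supplied function with a hypothetical name, not a library API:

```python
def make_search_agent(update_state, formulate_goal, formulate_problem, search):
    """Build an agent function implementing the architecture above."""
    state = None
    plan = []   # remaining action sequence

    def agent(percept):
        nonlocal state, plan
        state = update_state(state, percept)    # 1) update state
        if not plan:                            # 2) empty sequence: plan anew
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            plan = search(problem) or []
        return plan.pop(0) if plan else None    # 3) execute next action

    return agent
```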
Effective branching factor of a heuristic b*
1) Way to characterize the quality of a heuristic. 2) b* is the branching factor that a uniform tree of depth d would have to have in order to contain N + 1 nodes, where N is the number of nodes the search generated. *Thus: N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d ***Good heuristics have a value of b* close to 1, meaning the search expands little more than the single path to the goal, so large problems can be solved at reasonable cost.
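Since the defining equation has no closed form for b*, a sketch that solves it numerically by bisection (assuming d >= 1 and N >= d):

```python
def effective_branching_factor(nodes_generated, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    target = nodes_generated + 1

    def total(b):
        # Number of nodes in a uniform tree of branching factor b, depth d.
        return sum(b ** i for i in range(depth + 1))

    lo, hi = 1.0, float(nodes_generated)   # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, if A* finds a depth-5 solution after generating 52 nodes, b* is about 1.92.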
Consistent heuristic
A heuristic where, for every node n and every successor node n', the estimated cost of reaching the goal from n is no greater than the step cost of moving to n' plus the estimated cost of reaching the goal from n'. *h(n) <= c(n, a, n'), + h(n') rewritten: h(n) <= c(n, a, n') + h(n'), where n is the current state, n' is the state after action a, and c(n, a, n') is the step cost of moving from n to n' using action a. *A form of the triangle inequality. *Every consistent heuristic is admissible. *(Also known as monotonicity; it implies that f is non-decreasing along any path.)
Loopy path
A path that visits the same state more than once, i.e., contains a cycle.
Relaxed problem
A version of a problem with fewer restrictions on the actions. *The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
Disjoint pattern databases
A record not of the total cost of a sub-problem solution, but only of the cost involving that sub-problem's own components; costs from disjoint databases can then be added without double-counting (pg 107).
Goal
A set of world states representing a solution to some problem.
Open-loop system
A system that ignores current percepts, thus breaking the loop between agent and environment. (Solves the problem with its eyes closed.) *Search-based goal agents are open loop: they ignore percepts because they know in advance what the percepts will be.
Uniform-cost search
A variant of BFS that always expands the frontier node with the lowest path cost g(n). *Note: biased toward exploring large trees of SMALL STEPS rather than small trees with perhaps large but useful steps.
Goal formulation
Based on current situation, and the agent's performance measure.
Node
Bookkeeping structure used to represent the search tree. *Contains a state, as well as other metadata used by the search algorithm.
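A sketch of the usual structure (field names follow common convention, not any particular library):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """Bookkeeping structure for one node of the search tree."""
    state: Any                       # the world configuration this node holds
    parent: Optional["Node"] = None  # node that generated this one
    action: Any = None               # action applied to parent to get here
    path_cost: float = 0.0           # g(n): cost of the path from the root

def solution(node):
    """Walk parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```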
Total cost
Combination of search cost and path cost of solution found.
State
Corresponds to the current configuration of the world.
Metalevel state space
Each state in a meta-level state space captures the internal (computational) state of a program that is searching in an "object-level state space", such as the map of Romania.
Pruning
Eliminating possibilities from consideration without having to examine them.
Heuristic function h(n)
Estimated cost of the cheapest path from the state at node n to a goal state. *If n is a goal state, then h(n) = 0
Exponential problem with uninformed search
Search problems with exponential time complexity cannot be solved by uninformed methods except for very small instances.
How to find a heuristic
Find a relaxed version of the problem that can be solved easily. *Iteratively remove restrictions until the resulting simpler problem is easy to solve. *Essential that the relaxed problem can be solved WITHOUT SEARCH, so the heuristic is cheap to compute.
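For example, relaxing the 8-puzzle rule "a tile can slide only into the adjacent blank" to "a tile can move to any adjacent square" yields the Manhattan-distance heuristic, computable without search. A sketch, assuming states are 9-tuples in row-major order with 0 as the blank:

```python
def manhattan(state, goal):
    """Manhattan-distance heuristic for the 8-puzzle: sum, over tiles,
    of the grid distance between each tile's position and its goal."""
    dist = 0
    for tile in range(1, 9):                    # the blank (0) is not counted
        i, j = state.index(tile), goal.index(tile)
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist
```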
Dealing with immediate unknown actions
First examine future actions that eventually lead to states of known value.
Search based agent design
Formulate, Search, Execute stages. 1) Formulate a goal and a problem to solve. 2) Search for a sequence of actions to solve the problem. 3) Execute the sequence of actions once they have been found.
Conditions for optimal A* search
Heuristic must be: 1) Admissible (sufficient for tree search) 2) Consistent (sufficient for graph search)
Useful Abstraction
If carrying out each of the actions in the solution is easier than in the original problem.
Valid Abstraction
If we can expand any abstract solution into a solution in the more detailed world.
Expanding the current state
Involves applying the transition function to the current state for each possible action, and storing the resulting states in a set. (Generating new states)
Complete-state problem formulation
Involves trying to solve a problem with operators that ALTER the state description. *8-queens: the board starts with all 8 queens placed; move queens until a solution is found.
Incremental problem formulation
Involves trying to solve a problem with operators that AUGMENT the state description. *8-queens: Place first queen, then second, then third, and so on.
Canonical form
Logically equivalent states map to the same data structure.
Optimally efficient
No other optimal algorithm is guaranteed to expand fewer nodes than the algorithm in question. A* (with a consistent heuristic) is optimally efficient.
Toy Problem
Problem intended to exercise or illustrate various problem-solving methods.
Real-world problem
Problem whose solutions people actually care about.
Problem formulation
Process of deciding what actions and states to consider, given a goal.
Search
Process of looking for a sequence of actions that reaches the goal.
Recursive best-first search (RBFS)
Recursive algorithm that mimics the operation of standard best-first search, but using only linear space. *Keeps track of the f-value (g(n) + h(n)) of the BEST ALTERNATIVE path available from any ancestor of the current node. *If the current node's f-value exceeds this limit, the recursion unwinds back to the alternative path. *As the recursion unwinds, RBFS replaces the f-value of each node along the path with the best f-value of its children. (I.e., it remembers the best leaf in the forgotten subtree, and can re-expand the subtree at a later time.) *Space complexity: linear in the depth of the deepest optimal solution.
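A sketch under the same assumed `problem` interface; the `f_stored` parameter plays the role of each node's backed-up f-value, and termination relies on positive step costs:

```python
import math

def rbfs(problem, h):
    """Recursive best-first search: best-first behavior in linear space."""
    def recurse(state, g, f_stored, f_limit, path):
        if problem.is_goal(state):
            return path, f_stored
        succs = []
        for action, s2, cost in problem.successors(state):
            g2 = g + cost
            # A child's f is at least its parent's backed-up f-value.
            f2 = max(g2 + h(s2), f_stored)
            succs.append([f2, g2, s2, action])
        if not succs:
            return None, math.inf
        while True:
            succs.sort(key=lambda t: t[0])
            best = succs[0]
            if best[0] > f_limit:
                return None, best[0]          # unwind; report best f beyond limit
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            result, best[0] = recurse(best[2], best[1], best[0],
                                      min(f_limit, alternative),
                                      path + [best[3]])
            if result is not None:
                return result, best[0]

    solution, _ = recurse(problem.initial_state, 0,
                          h(problem.initial_state), math.inf, [])
    return solution
```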
Best-first search
Search method in which the next node to expand is selected according to an evaluation function f(n). *A priority queue with nodes ordered by f(n). *Most best-first algorithms include a heuristic function denoted h(n)
Uninformed search
Search algorithms given no information other than the problem definition. *BFS, DFS, DLDFS, ITERATIVE DEEPENING
Informed search
Search algorithms that have guidance as to where to look for goals. *A*, GREEDY BEST-FIRST
Search based goal agents
Search for a sequence of actions that leads to a goal state. *Assumes an observable, discrete, known, deterministic environment.
Frontier
Set of all leaf nodes available for expansion at any time. *Separates the state space graph into the explored region and the unexplored region.
Explored set
Set of all nodes that have been previously expanded. *Algorithms that forget their history are doomed to repeat it.
Optimal solution
Solution with optimal path cost among all solutions.
Abstraction
The process of removing detail from a problem.
Search tree
Tree in which: 1) Initial state is the root node. 2) Branches are actions. 3) Nodes are states.
Greedy best-first search
Tries to expand the node that appears CLOSEST to the goal. *Uses JUST the HEURISTIC function to evaluate nodes: f(n) = h(n). *Tends to minimize search cost, but the solution found is not optimal. *Complete: No. *Time complexity: O(b^m) *Space complexity: O(b^m)
Learning heuristics
Use machine learning methods to come up with an approximate value of the cost of a state based on problem features. (ANN, decision trees, linear combinations of heuristics)
Example search problems
Vacuum world, 8-puzzle, 8-queens (how to place 8 queens such that none attacks another), Traveling salesman, Route finding, Automated assembling, VLSI layout
Redundant paths
When there is more than one way to get from one state to another.