Artificial Intelligence Searching Algorithms

Greedy Best First Search

• Uses a heuristic function, h(n), as the EVAL-FN
  - h(n) estimates the cost of the best path from state n to a goal state
  - h(goal) = 0
• Greedy search: always expand the node that appears to be the closest to the goal (i.e., with the smallest h)
  - Instant gratification, hence "greedy"
Complete: No
Optimal: No
Time Complexity: Exponential, O(b^m)
Space Complexity: Exponential, O(b^m)
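
A minimal Python sketch of the idea, assuming the problem is given as an adjacency dict (graph), a heuristic callable h, and string-like states; all names here are illustrative, not from the card:

    import heapq

    def greedy_best_first_search(graph, start, goal, h):
        # Frontier is a priority queue keyed on h(n) alone.
        frontier = [(h(start), start)]
        came_from = {start: None}               # parent links for path reconstruction
        while frontier:
            _, node = heapq.heappop(frontier)
            if node == goal:
                path = []
                while node is not None:         # walk parent links back to the start
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            for neighbor in graph.get(node, []):
                if neighbor not in came_from:   # skip states already reached
                    came_from[neighbor] = node
                    heapq.heappush(frontier, (h(neighbor), neighbor))
        return None                             # no path found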

General Searching Algorithm

Uses a queue (a list) and a queuing function to implement a search strategy
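
A minimal sketch of that idea in Python; goal_test, expand, and queuing_fn are assumed caller-supplied callables, not names from the card:

    def general_search(start, goal_test, expand, queuing_fn):
        # The queuing function alone determines the search strategy.
        nodes = [start]                         # the queue of candidate nodes
        while nodes:
            node = nodes.pop(0)                 # always take the first node in the queue
            if goal_test(node):
                return node
            nodes = queuing_fn(nodes, expand(node))   # insert successors per strategy
        return None

    # Putting new nodes at the end gives breadth-first behavior; at the front, depth-first.
    bfs_queuing = lambda queue, successors: queue + list(successors)
    dfs_queuing = lambda queue, successors: list(successors) + queue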

A* Search

"A* Search" is A Search with an admissible h - h is optimistic - it never overestimates the cost to the goal h(n) <= true cost to reach the goal - So f(n) never overestimates the actual cost of the best solution passing through node n Complete Yes Optimal Yes Time Complexity: Exponential: better in some conditions Space Complexity: Exponential: keeps all nodes in memory

Genetic Algorithm

• A successor state is generated by combining two parent states
• Start with k randomly generated states (population)
• A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
• Evaluation function (fitness function): higher values for better states
• Produce the next generation of states by selection, crossover, and mutation
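
A small Python sketch of the generation loop, assuming states are 0/1 strings of length at least 2 and the fitness function returns positive values; the names and rates are illustrative:

    import random

    def flip_random_bit(s):
        # mutation: flip one randomly chosen position in a 0/1 string
        i = random.randrange(len(s))
        return s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:]

    def genetic_algorithm(population, fitness, generations=100, mutation_rate=0.1):
        for _ in range(generations):
            weights = [fitness(ind) for ind in population]
            next_gen = []
            for _ in range(len(population)):
                # selection: parents picked with probability proportional to fitness
                mom, dad = random.choices(population, weights=weights, k=2)
                # crossover: splice the two parents at a random cut point
                cut = random.randrange(1, len(mom))
                child = mom[:cut] + dad[cut:]
                # mutation: occasionally flip one bit
                if random.random() < mutation_rate:
                    child = flip_random_bit(child)
                next_gen.append(child)
            population = next_gen
        return max(population, key=fitness)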

Breadth First Search

All nodes at depth d in the search tree are expanded before any nodes at depth d+1
- First consider all paths of length N, then all paths of length N+1, etc.
Shortest path; FIFO queue
Complete: Yes
Optimal: If shallowest is optimal
Time Complexity: Exponential, O(b^d)
Space Complexity: Exponential, O(b^d)
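
A minimal sketch with a FIFO frontier, under the same illustrative adjacency-dict convention as the other sketches:

    from collections import deque

    def breadth_first_search(graph, start, goal):
        frontier = deque([start])               # FIFO queue
        came_from = {start: None}
        while frontier:
            node = frontier.popleft()           # all of depth d comes out before depth d+1
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            for neighbor in graph.get(node, []):
                if neighbor not in came_from:
                    came_from[neighbor] = node
                    frontier.append(neighbor)
        return None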

Optimality

Does it find the "best" solution (if there is more than one)?

Completeness

Is it guaranteed to find a solution (if one exists)?

Depth Limited Search

Like depth-first search, but uses a depth cutoff to avoid long (possibly infinite), unfruitful paths
- Do depth-first search up to depth limit l
- Depth-first search is the special case with limit = infinity
• Problem: How to choose the depth limit l?
Complete: No, unless d <= l
Optimal: No
Time Complexity: Exponential, O(b^l)
Space Complexity: Linear, O(bl) (l is the depth limit)
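
A recursive sketch in the tree-search style (no memory of explored states); graph and the argument names are illustrative:

    def depth_limited_search(graph, node, goal, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None                         # cutoff: do not expand below the depth limit
        for neighbor in graph.get(node, []):
            result = depth_limited_search(graph, neighbor, goal, limit - 1)
            if result is not None:
                return [node] + result
        return None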

Time complexity

Number of nodes generated/expanded (How long does it take to find a solution?)

State

Represents a (possibly physical) configuration

A search

• Uniform-cost search minimizes g(n) ("past" cost)
• Greedy search minimizes h(n) ("expected" or "future" cost)
• "A Search" combines the two:
  - Minimize f(n) = g(n) + h(n)
  - Accounts for the "past" and the "future"
  - Estimates the cheapest solution (complete path) through node n
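
A tiny illustration of the three evaluation functions; here g and h are hypothetical dicts mapping a node to its known past cost and its heuristic estimate:

    ucs_f    = lambda n, g, h: g[n]           # uniform-cost search: "past" cost only
    greedy_f = lambda n, g, h: h[n]           # greedy search: estimated "future" cost only
    a_f      = lambda n, g, h: g[n] + h[n]    # A Search: f(n) = g(n) + h(n)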

Best-First Search

f(n) = h(n)
Complete: No
Optimal: No, path may not be found
Time Complexity: Exponential, O(b^m)
Space Complexity: Exponential, O(b^m) (m is maximum depth)

Tree Search

- We might repeat some states
- But we do not need to remember states

Graph Search

- We remember all the states that have been explored
- But we do not repeat states

Search Tree Node

A data structure which includes: { parent, children, depth, path cost }
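
One possible Python rendering of such a node; the explicit state field is an extra assumption (the card does not list it), and all names are illustrative:

    from dataclasses import dataclass, field
    from typing import Any, List, Optional

    @dataclass
    class SearchTreeNode:
        state: Any = None                       # configuration held at this node (assumed field)
        parent: Optional["SearchTreeNode"] = None
        children: List["SearchTreeNode"] = field(default_factory=list)
        depth: int = 0
        path_cost: float = 0.0

        def add_child(self, state, step_cost):
            # A child sits one level deeper and accumulates the path cost.
            child = SearchTreeNode(state, parent=self, depth=self.depth + 1,
                                   path_cost=self.path_cost + step_cost)
            self.children.append(child)
            return child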

Depth First Search

Always expands one of the nodes at the deepest level of the tree
- Low memory requirements
- Problem: depth could be infinite
• Uses a stack (LIFO)
Complete: No
Optimal: No
Time Complexity: Exponential, O(b^m)
Space Complexity: Linear, O(bm) (m is maximum depth)
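
A minimal sketch using an explicit stack of partial paths (illustrative names; cycles are only checked along the current path):

    def depth_first_search(graph, start, goal):
        stack = [[start]]                       # LIFO stack of partial paths
        while stack:
            path = stack.pop()                  # always continue from the deepest path
            node = path[-1]
            if node == goal:
                return path
            for neighbor in graph.get(node, []):
                if neighbor not in path:        # avoid looping along the current path
                    stack.append(path + [neighbor])
        return None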

Informed (heuristic) search

Can evaluate states

Uninformed (blind) search

Can only distinguish a goal state from a non-goal state
No information is available other than:
- The current state
- The goal test
- The current path cost (cost from start state to current state)
Uninformed strategies: breadth-first search, uniform cost search, depth-first search, depth-limited search, iterative deepening search, bidirectional search

Simulated Annealing

• For the current state, evaluate the operators/actions
• Instead of taking the best action, choose an action at random
  - If that action results in a higher "goodness" value (ΔE > 0), take it
  - Otherwise, take it with a probability between 0 and 1
• SA is an example of a Monte Carlo method
  - A technique that uses probability-based calculations to find an approximate solution to a complex (perhaps deterministic) problem
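
A compact sketch of the acceptance rule; value, neighbors, and schedule are assumed caller-supplied callables, and the cooling schedule shown is purely illustrative:

    import math
    import random

    def simulated_annealing(start, value, neighbors, schedule):
        current, t = start, 1
        while True:
            T = schedule(t)                     # "temperature" for this step
            if T <= 0:
                return current
            candidate = random.choice(neighbors(current))
            delta_e = value(candidate) - value(current)
            # Uphill moves are always taken; downhill moves with probability exp(delta_e / T).
            if delta_e > 0 or random.random() < math.exp(delta_e / T):
                current = candidate
            t += 1

    # illustrative schedule: temperature falls linearly and eventually reaches zero
    linear_schedule = lambda t: 1.0 - 0.001 * t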

Space complexity

How much memory does it require?

IDA* and SMA* are designed to conserve memory

Iterative Deepening A* (IDA*)

IDA* is an optimal, memory-bounded, heuristic search algorithm
- Requires space proportional to the longest path that it explores
- Space estimate: O(bd)
• Similar to Iterative Deepening Search
  - Uses an f-cost limit rather than a depth limit
  - In IDS, the depth limit is incremented after each round
  - In IDA*, the f-cost limit is updated after each round: the new limit is the smallest f-cost of a pruned node from the previous iteration
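
A recursive sketch of the f-cost-limited rounds; goal_test, successors (yielding (child, step_cost) pairs), and h are assumed callables:

    def ida_star(start, goal_test, successors, h):
        bound = h(start)                        # initial f-cost limit
        path = [start]

        def search(g, bound):
            node = path[-1]
            f = g + h(node)
            if f > bound:
                return f                        # pruned: candidate for the next f-cost limit
            if goal_test(node):
                return "FOUND"
            minimum = float("inf")
            for child, step_cost in successors(node):
                if child not in path:           # avoid cycles along the current path
                    path.append(child)
                    result = search(g + step_cost, bound)
                    if result == "FOUND":
                        return "FOUND"
                    minimum = min(minimum, result)
                    path.pop()
            return minimum

        while True:
            result = search(0, bound)
            if result == "FOUND":
                return path
            if result == float("inf"):
                return None                     # no solution
            bound = result                      # smallest f-cost pruned in the last round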

Simplified Memory-Bounded A* (SMA*)

• IDA* only keeps around the current f-cost limit
  - It can check the current path for repeated states, but future paths may repeat states already expanded
  - Inefficient for small ε step costs
• SMA* uses more memory to keep track of repeated states
  - Up to the limit of allocated memory
  - Nodes with high f-cost are dropped from the queue when memory is filled ("forgotten nodes")
• Optimality and completeness depend on how much memory is available with respect to the optimal solution
  - SMA* is complete and optimal if there is any reachable solution given the current memory limit

Local beam search

In simulated annealing, how many nodes do you keep in memory? One.
• Local beam search keeps track of k states rather than just one
  - Start with a population of k (initially random?) candidates
  - Choose the k best descendants
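
A minimal sketch that keeps the k best successors each round; value and neighbors are assumed callables and the step count is illustrative:

    def local_beam_search(initial_states, value, neighbors, steps=100):
        beam = list(initial_states)             # the k current states
        k = len(beam)
        for _ in range(steps):
            candidates = [s for state in beam for s in neighbors(state)]
            if not candidates:
                break
            beam = sorted(candidates, key=value, reverse=True)[:k]   # k best descendants
        return max(beam, key=value)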

Uniform Cost Search

Similar to breadth-first search, but always expands the lowest-cost node, as measured by the path cost function, g(n)
- g(n) is the (actual) cost of getting to node n
- Breadth-first search is actually a special case of uniform cost search, where g(n) = DEPTH(n)
- If the path cost is monotonically increasing, uniform cost search will find the optimal solution
Complete: Yes, if ε = minimum step cost > 0
Optimal: If minimum step cost > 0
Time Complexity: Exponential, O(b^(C*/ε)) (C* is the optimal solution cost)
Space Complexity: Exponential, O(b^(C*/ε))
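
A minimal sketch keyed on g(n), under the same illustrative convention that graph maps a state to (neighbor, step_cost) pairs:

    import heapq

    def uniform_cost_search(graph, start, goal):
        frontier = [(0, start, [start])]        # (g, state, path), ordered by path cost g
        explored = set()
        while frontier:
            g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            if node in explored:
                continue
            explored.add(node)
            for neighbor, step_cost in graph.get(node, []):
                if neighbor not in explored:
                    heapq.heappush(frontier, (g + step_cost, neighbor, path + [neighbor]))
        return None, float("inf")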

Bidirectional Search

Simultaneously search forward from the initial state and backward from the goal state
Complete: Yes
Optimal: If shallowest is optimal
Time Complexity: Exponential, O(b^(d/2))
Space Complexity: Exponential, O(b^(d/2))
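
A simplified sketch assuming an undirected graph given as an adjacency dict; it returns the first path found when the two frontiers touch, which may not always be the shortest:

    from collections import deque

    def bidirectional_search(graph, start, goal):
        if start == goal:
            return [start]
        fwd, bwd = {start: None}, {goal: None}  # parent links for each direction
        frontiers, parents = [deque([start]), deque([goal])], [fwd, bwd]
        while all(frontiers):
            for side in (0, 1):                 # alternate forward / backward steps
                node = frontiers[side].popleft()
                for neighbor in graph.get(node, []):
                    if neighbor not in parents[side]:
                        parents[side][neighbor] = node
                        if neighbor in parents[1 - side]:   # the frontiers meet here
                            return _join(neighbor, fwd, bwd)
                        frontiers[side].append(neighbor)
        return None

    def _join(meet, fwd, bwd):
        # stitch the forward path to `meet` onto the backward path from `meet`
        path, node = [], meet
        while node is not None:
            path.append(node)
            node = fwd[node]
        path.reverse()
        node = bwd[meet]
        while node is not None:
            path.append(node)
            node = bwd[node]
        return path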

Iterative-Deepening Search

Since the depth limit is difficult to choose in depth-limited search, use depth limits of l = 0, 1, 2, 3, ...
- Do depth-limited search at each level
IDS has the advantages of:
- Breadth-first search: similar optimality and completeness guarantees
- Depth-first search: modest memory requirements
Complete: Yes
Optimal: If shallowest is optimal
Time Complexity: Exponential, O(b^d)
Space Complexity: Linear, O(bd)
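
A self-contained sketch that retries a depth-limited search with growing limits; the max_depth cap is only there to keep the illustration finite:

    def iterative_deepening_search(graph, start, goal, max_depth=50):
        def dls(node, limit):
            # depth-limited search, as on the Depth Limited Search card
            if node == goal:
                return [node]
            if limit == 0:
                return None
            for neighbor in graph.get(node, []):
                result = dls(neighbor, limit - 1)
                if result is not None:
                    return [node] + result
            return None

        for limit in range(max_depth + 1):      # l = 0, 1, 2, 3, ...
            result = dls(start, limit)
            if result is not None:
                return result
        return None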

Hill Climbing, a.k.a. Gradient Descent

Strategy: Move in the direction of increasing value (decreasing cost)
- Assumes a reasonable evaluation method!
• Does not maintain a search tree
  - Evaluates the successor states, and keeps only the best one
  - Greedy strategy
• Drawbacks
  - Local maxima
  - Plateaus and ridges
• Can randomize (re-)starting locations and local strategies when stuck
  - "Random restart hill-climbing"
  - But how to know when you're stuck?
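
A minimal sketch of the basic strategy plus the random-restart variant; value, neighbors, and random_state are assumed caller-supplied callables:

    def hill_climbing(start, value, neighbors):
        current = start
        while True:
            best = max(neighbors(current), key=value, default=None)
            if best is None or value(best) <= value(current):
                return current                  # stuck: local maximum or plateau
            current = best                      # greedy move to the best successor

    def random_restart_hill_climbing(random_state, value, neighbors, restarts=10):
        # rerun from random starting locations and keep the best result
        runs = [hill_climbing(random_state(), value, neighbors) for _ in range(restarts)]
        return max(runs, key=value)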

