AI Test Study Guide Part 1
How does SMA* generate children when expanding a node?
One child at a time, adding each child to the queue
Hill Climbing search steps
1. Pick a random point in the search space 2. Consider all the neighbors of the current state 3. Choose the neighbor with the best quality and move to it 4. Repeat steps 2-3 until all neighboring states are of lower quality 5. Return the current state as the solution state
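A minimal Python sketch of the loop above; random_state(), neighbors(), and quality() are hypothetical problem-specific helpers supplied by the caller.

```python
def hill_climb(random_state, neighbors, quality):
    current = random_state()                  # 1. pick a random starting point
    while True:
        candidates = neighbors(current)       # 2. consider all neighbors of the current state
        if not candidates:
            return current
        best = max(candidates, key=quality)   # 3. best-quality neighbor
        if quality(best) <= quality(current): # 4. stop once no neighbor is better
            return current                    # 5. current state is the solution
        current = best                        # ...otherwise move to it and repeat
```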
Iterative Deepening Search
A depth-limited search that gradually increases the depth limit; states may be generated multiple times.
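A minimal sketch of iterative deepening, assuming a hypothetical successors(state) function; note how shallow states are regenerated on every iteration.

```python
def depth_limited(state, goal, successors, limit):
    # Depth-limited DFS: backtrack when the limit is reached.
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for child in successors(state):
        path = depth_limited(child, goal, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iterative_deepening(start, goal, successors, max_depth=50):
    # Gradually increase the depth limit, regenerating states each round.
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, successors, limit)
        if path is not None:
            return path
    return None

# Example: iterative_deepening(1, 5, lambda s: [s + 1, s * 2]) -> [1, 2, 4, 5]
```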
Path cost
A function that assigns a numeric cost to each path. Step cost is the cost of moving from a state by an action to its successor.
heuristic minimization
A number that estimates how far a state is from a goal; the lower the value, the more likely the state is on the direct path to the goal
heuristic maximization
A number that indicates how promising the node is for reaching a goal state (higher is better)
Uninformed Search
Algorithms that are given no information about the problem other than its definition.
informed search
Algorithms that are given some guidance on where to look for solutions, i.e., whether one state is more promising than another; they use problem-specific knowledge beyond the definition of the problem
Strategy of expanding the shallowest unexpanded node; can be implemented using a FIFO queue for the frontier
Breadth first search
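A minimal sketch of BFS with a FIFO frontier; the adjacency-dict graph in the example is illustrative only.

```python
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest unexpanded node first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in explored:
                explored.add(nbr)
                frontier.append(path + [nbr])
    return None

# Example: bfs({'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}, 'A', 'D') -> ['A', 'B', 'D']
```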
Depth Limited Search
Can handle infinite state spaces; a predetermined depth limit is provided, and when the limit is reached, the search backtracks
Random-restart hill climbing
Conducts a series of hill-climbing searches from randomly generated initial states, until a goal is found.
Recursive implementation is common with ______
DFS
What does hill climbing resemble?
DFS
Depth First Search
Expands the deepest unexpanded node. Implemented using a LIFO stack. Not optimal; linear space complexity.
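A minimal sketch of DFS with an explicit LIFO stack (illustrative adjacency-dict graph); it finds a path but not necessarily the shallowest one.

```python
def dfs(graph, start, goal):
    stack = [[start]]                    # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()               # most recently pushed, i.e. deepest, node first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            stack.append(path + [nbr])
    return None
```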
Uniform cost search
Expands the node n with the lowest path cost (if all step costs are equal, this is identical to a breadth-first search)
Rational
Exploration, Learning, Autonomy
agent uses goal information to select between possible actions in current state
Goal-oriented agent
For uniform cost search, what is the optimality?
Optimal; nodes are expanded in order of increasing path cost
What is optimal for admissible heuristics?
If h(n) is admissible, A* using tree search is optimal
What is the problem with A Star?
It can run out of memory because it keeps all generated nodes in memory
Iterative deepening A* (IDA*) uses the same iterative-deepening trick as
Iterative Deepening Search
When does hill climbing get stuck?
Local maxima/minima, ridges, plateau
What two algorithms go along with Memory-Bounded A Star?
Memory-Bounded A* (MBA*) and Simplified MA* (SMA*)
Is hill climbing optimal?
NO
Is GBFS complete?
No, can get stuck in loops
Completeness of DFS?
Not complete or optimal
AI Framework
Perception (real-world knowledge on which the machine has to base its decisions), Reasoning (considers perceptions based on a model of the problem world), Action (outputs results from decisions)
PEAS
Performance measure, Environment, Actuators, Sensors
4 key notions that distinguish agents from arbitrary programs
Reaction to environment, autonomy, goal-orientation, persistence
agent maintains internal state that keeps track of aspects of environment
Reflex agent with state (model-based)
What is the strategy for BFS?
To find the shallowest goal node. However, the shallowest goal node is not necessarily the optimal one unless the path cost is a nondecreasing function of depth
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n
True
John Searle argued (1980) that behaving intelligently was not enough
True; the Chinese Room argument
Is best-first search an informed search?
Yes
Is IDS complete?
Yes
Cost of an optimal solution to a relaxed problem is what?
an admissible heuristic for the original problem
Unknown Environment
agent will have to learn how it works to make decisions
Best-first search
An algorithm in which a node is selected for expansion based on an evaluation function f(n)
Rational agent
an entity that perceives and acts; abstractly, a function from percept histories to actions
Performance measure
an objective criterion for success of an agent's behavior; it evaluates the environment sequence
Recursive best first search uses the f-limit variable to keep track of the f-value of the best alternative path available from any ____ of the current node
ancestor
What is an agent?
anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
genetic algorithm
artificial intelligence system that mimics the evolutionary, survival-of-the-fittest process to generate increasingly better solutions to a problem
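A toy sketch of the idea, assuming bit-string individuals and a caller-supplied fitness function; the population size, rates, and single-point crossover are arbitrary illustrative choices (the crossover and mutation steps match the cards later in this guide).

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # survival of the fittest
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)         # crossover: decompose two parents...
            child = a[:cut] + b[cut:]                 # ...and mix their parts
            for i in range(length):
                if random.random() < mutation_rate:
                    child[i] = 1 - child[i]           # mutation: random perturbation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Example: genetic_algorithm(sum) evolves toward the all-ones bit string.
```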
What are time and space complexity measured in terms of for BFS?
b (maximum branching factor) and d (depth of the least-cost solution); BFS is exponential, O(b^d)
As the recursion unwinds, RBFS replaces the f-value of each node along its path with a ____________, the best f-value of its children
backed up value
"Pure" hill-climbing does not support
backtracking
Why is h2 better for search?
Because A* with h2 is guaranteed to expand no more nodes than A* with h1 (h2 dominates h1)
RBFS replaces the f-value of each node along the path with a backed up value, which is?
best f-value of its children
BSAT
Boolean satisfiability problem
How does uniform cost search expand the node with the lowest path cost?
by storing the frontier as a priority queue
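A minimal UCS sketch with the frontier as a priority queue keyed on path cost g(n); the weighted graph is an illustrative dict.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]                 # (path cost g, node, path)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)      # lowest g(n) first
        if node == goal:
            return g, path
        for nbr, step in graph.get(node, {}).items():
            new_g = g + step
            if new_g < best_g.get(nbr, float('inf')):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g, nbr, path + [nbr]))
    return None

# Example: uniform_cost_search({'A': {'B': 2, 'C': 5}, 'B': {'C': 1}, 'C': {}}, 'A', 'C') -> (3, ['A', 'B', 'C'])
```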
Drawbacks of IDA*
cannot avoid revisiting states not on the current path; deciding the next f-limit is not easy; available memory is poorly used
stochastic hill climbing
chooses at random from among uphill moves
Advantages of IDA*
complete and optimal, requires less memory than A*, and avoids the overhead of sorting the fringe
What advantage does IDS share with BFS?
completeness
A* pathfinding is widely used as
the conceptual core of path-finding in video games
With Greedy Best-First Search (GBFS), f(n) is what?
the estimated cost from state n to the goal, h(n)
The probability of taking downhill move
decreases with the number of iterations (as the temperature falls) and with the steepness of the downhill move
The agent function
describes what the agent does in all circumstances
Job of AI is to
design the agent program that implements the agent function mapping percepts to actions
goal test
determines whether a given state is a goal state
Hill climbing does not require
differentiable functions
Idea of simulated annealing
escape local maxima by allowing some bad moves, but gradually decrease their frequency
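A minimal simulated-annealing sketch: downhill (worse) moves are accepted with probability e^(Δ/T), which shrinks as the move gets steeper and as the temperature cools. The geometric cooling schedule and the neighbor()/value() helpers are hypothetical placeholders.

```python
import math
import random

def simulated_annealing(start, neighbor, value, t0=1.0, cooling=0.995, t_min=1e-3):
    current, temp = start, t0
    while temp > t_min:
        candidate = neighbor(current)
        delta = value(candidate) - value(current)    # > 0 means an uphill (better) move
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate                      # always accept better moves; worse ones with prob e^(delta/T)
        temp *= cooling                              # slowly decrease the temperature
    return current
```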
performance measure
evaluates the environment sequence
Time and space complexity for uniform cost search?
exponential
Time complexity of IDS
exponential
Breadth first search is
exponential in time and space; complete but not optimal in general
Uniform cost search has
f(n) = g(n)
Greedy best first search has
f(n) = h(n)
Formula for A star search algorithm
f(n) = g(n) + h(n), where f(n) is the estimated total cost along the path through n, g(n) is the known/actual cost so far, and h(n) is the estimated cost to the goal
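A minimal A* sketch using f(n) = g(n) + h(n), with an illustrative weighted graph and heuristic table assumed by the caller. Dropping h (h = 0) recovers uniform cost search, while using f(n) = h(n) alone gives greedy best-first search.

```python
import heapq

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]       # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)   # expand the node with the lowest f
        if node == goal:
            return g, path
        for nbr, step in graph.get(node, {}).items():
            new_g = g + step
            if new_g < best_g.get(nbr, float('inf')):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g + h[nbr], new_g, nbr, path + [nbr]))
    return None
```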
A state consists of a vector of attribute values
factored representation
If the temperature decreases slowly enough, then simulated annealing search will
find a global optimum with probability approaching one
A heuristic is consistent if
for every node n and every successor n' of n generated by action a, h(n) ≤ c(n, a, n') + h(n')
What does a simple agent do first in problem solving?
formulates a goal and a problem, searches for a sequence of actions that solves the problem, executes the actions one at a time, and when complete, formulates another goal and starts over
open list
fringe of unexpanded nodes
first-choice hill climbing
generates successors randomly until one is better than the current
Hill climbing is sometimes called
greedy local search
A best-first search that uses h to select the next node to expand is called
greedy search
If h2(n) ≥ h1(n) for all n (both admissible), then
h2 dominates h1
An algorithm that quickly produces a good but not necessarily optimal solution
heuristic
What are admissible heuristics?
heuristics that never overestimate the cost to reach the goal (optimistic)
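Two classic admissible 8-puzzle heuristics, likely the h1/h2 referred to elsewhere in this guide (misplaced tiles and Manhattan distance); neither ever overestimates the true number of moves. The tuple state encoding and goal layout are illustrative.

```python
GOAL_STATE = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # 0 stands for the blank

def h1_misplaced(state):
    # Number of tiles not on their goal square (blank excluded).
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL_STATE[i])

def h2_manhattan(state):
    # Sum of each tile's horizontal plus vertical distance from its goal square.
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        goal_i = GOAL_STATE.index(tile)
        total += abs(i // 3 - goal_i // 3) + abs(i % 3 - goal_i % 3)
    return total
```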
Simulated Annealing
if the temperature is sufficiently high to ensure a random state and the cooling process is slow enough to ensure thermal equilibrium, then the atoms will place themselves in a pattern that corresponds to the global energy minimum of a perfect crystal
In IDA*, where do we initialize the cutoff to?
the f-value of the initial node
an agent that is capable of flexible autonomous action in order to meet its design objective
intelligent agent
If h(n) is consistent, A* using GRAPH-SEARCH is optimal because
it keeps all checked nodes in memory to avoid repeated states
SMA* is optimal if the allowed memory is high enough to store the optimal solution, otherwise
it will return the best solution that fits in the allowed memory
What does greedy best first search do with the nodes?
keeps all nodes in memory
What advantage does IDS share with DFS?
limited space
Recursive best-first search uses what kind of space?
linear
Space complexity of IDS
linear
Depth Limited Search space complexity
linear space
Space complexity of DFS
linear space
A star search algorithm is a best first search that aims at
minimizing the total cost along a path from start to goal
Advantage of hill climbing
only ever has to store one state; a cycle would require e to decrease, which cannot happen
Disadvantages of RBFS
it uses only linear space, which is too little memory, so it suffers from excessive node regeneration
Advantage of Bidirectional search
only needs to go half-depth
The solution with the lowest path cost is
optimal
What is the implementation of best-first search?
Order the nodes in the fringe (frontier) in increasing order of estimated cost
Known environment
outcomes for all actions are given
Uniform cost search expands node n with the lowest ___ ___
path cost
If you cannot improve e in the hill climbing algorithm, then
perform a random restart
What kind of examination does DFS do when expanding deepest unexpanded node?
pre-order
closed list
previously expanded nodes
What can a best-first search be implemented with?
a priority queue; the best node lines up at the front of the queue
informed search strategy uses?
problem-specific knowledge beyond the definition of the problem itself
Relaxed problems
problem with fewer restrictions on the actions
Genetic algorithm: Crossover decomposes two distinct solutions and then
randomly mixes their parts to form novel solutions
An agent's flexibility means three things
reactivity, pro-activeness, social ability
What is the idea of iterative deepening A* (IDA*)?
reduce the memory requirement of A* by applying a cutoff on f(n) values
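A minimal IDA* sketch: repeated depth-first probes with a cutoff on f(n) = g(n) + h(n), starting from the f-value of the initial node and raised to the smallest f-value that exceeded the previous cutoff. The graph and heuristic table are illustrative; only the current path is checked for repeats.

```python
def ida_star(graph, h, start, goal):
    def probe(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return None, f                     # cut off; report the f-value that overflowed
        if node == goal:
            return path, f
        next_bound = float('inf')
        for nbr, step in graph.get(node, {}).items():
            if nbr in path:
                continue                       # can only detect repeats on the current path
            result, t = probe(nbr, g + step, bound, path + [nbr])
            if result is not None:
                return result, t
            next_bound = min(next_bound, t)
        return None, next_bound

    bound = h[start]                           # cutoff initialised to f(initial node)
    while True:
        result, t = probe(start, 0, bound, [start])
        if result is not None:
            return result
        if t == float('inf'):
            return None                        # no solution
        bound = t                              # raise the cutoff and search again
```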
SMA* Algorithm optimizes A* to work with
reduced memory
transition model
returns resulting state and describes what each action does
An agent's ________________ give it access to the complete environment at each point in time
sensors
Successor function:
set of action-state pairs
State space =
set of complete configurations
For the 8-puzzle, if the rules are relaxed so that a tile can be moved to any adjacent square, then h2(n) gives the
shortest solution
agent selects actions based on the current percept, ignoring all past percepts
simple reflex agent
4 basic types of agents
simple reflex, reflex agents with state (model-based), goal-oriented, utility-based
Hill climbing technique
specify an evaluation function e, randomly choose a state, only choose actions which improve e
A _____ is a representation of a physical configuration
state
Advantages of RBFS
still complete and optimal; no need to choose the next f-limit by hand (unlike IDA*); requires less memory than A*
heuristic function for route-finding problems (estimated distance to the goal)
straight-line distance
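A tiny sketch of the straight-line-distance heuristic, assuming a hypothetical coords dict mapping each city to (x, y) map coordinates.

```python
import math

def straight_line_distance(coords, city, goal):
    (x1, y1), (x2, y2) = coords[city], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)    # never overestimates the road distance
```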
If environment is deterministic except for actions of other agents, environment is _________________
strategic
A state includes objects, each of which has its own attributes as well as relationships to other objects
structured representation
what does doing the right thing mean in terms of AI?
that which is expected to maximize goal achievement, given the available information
SMA * is complete if
the allowed memory is high enough to store the optimal solution
What inspired the genetic algorithm?
the biological evolution process
Hill climbing algorithms are also called gradient descent if
the evaluation function represents cost
Four categories of AI
thinking humanly, acting humanly, acting rationally, thinking rationally
2 main ingredients in AI
thought process and reasoning (thinking), and behavior and performance (acting)
What is the idea of bidirectional search?
to run two concurrent searches, one forward from the initial state and the other backward from the goal, stopping when the two searches meet in the middle
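A minimal sketch of the idea for an undirected, unweighted adjacency-dict graph: one BFS runs forward from the start, one backward from the goal, and the search stops as soon as the frontiers touch, so each side only explores to roughly half the depth. (It reports the length of a connecting path; guaranteeing the shortest path needs a slightly more careful stopping test.)

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return 0
    dist_fwd, dist_bwd = {start: 0}, {goal: 0}
    q_fwd, q_bwd = deque([start]), deque([goal])
    while q_fwd and q_bwd:
        # Advance each direction by one node per round.
        for queue, dist, other in ((q_fwd, dist_fwd, dist_bwd),
                                   (q_bwd, dist_bwd, dist_fwd)):
            node = queue.popleft()
            for nbr in graph.get(node, []):
                if nbr in other:                        # the two searches met in the middle
                    return dist[node] + 1 + other[nbr]  # length of a connecting path
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
    return None
```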
Strategy is picking order of node expansion
tree search strategy
A heuristic is globally optimistic or admissible if the estimated cost of reaching a goal is always less than actual cost
true
Uniform cost search can get stuck in an infinite loop
true (if there is a path with an infinite sequence of zero-cost actions)
Theorem: If h(n) is consistent, A* using Graph Search is optimal
true because it keeps all checked nodes in memory to avoid repeated states
DFS, BFS, and greedy hill-climbing are
uninformed
Memory Bounded A Star's approach is to do what?
use all available memory
Best-first search
uses an evaluation function f(n) for each node; a heuristic component h(n) provides an estimate of the cost from n to the goal
Recursive best-first search
uses an f-limit variable to keep track of the f-value of the best alternative path available from any ancestor of the current node; as the recursion unwinds, it replaces each node's f-value with the best f-value of its children
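A compact RBFS sketch on a toy weighted graph with a heuristic table (both illustrative); it assumes the graph is acyclic and a solution exists. Failed subtrees return their best f-value, which is stored as the backed-up value for that child.

```python
import math

graph = {'A': {'B': 1, 'C': 4}, 'B': {'D': 3, 'E': 1}, 'C': {'F': 2},
         'D': {}, 'E': {'G': 2}, 'F': {'G': 3}, 'G': {}}
h = {'A': 4, 'B': 3, 'C': 4, 'D': 6, 'E': 2, 'F': 3, 'G': 0}
GOAL = 'G'

def rbfs(state, g, f, f_limit, path):
    """Returns (solution path or None, backed-up f-value for this subtree)."""
    if state == GOAL:
        return path, f
    successors = []
    for child, step in graph[state].items():
        child_g = g + step
        child_f = max(child_g + h[child], f)   # inherit the parent's f if it is larger
        successors.append([child_f, child_g, child])
    if not successors:
        return None, math.inf
    while True:
        successors.sort()                      # best (lowest f) successor first
        best = successors[0]
        if best[0] > f_limit:
            return None, best[0]               # fail and back up the best f-value
        alternative = successors[1][0] if len(successors) > 1 else math.inf
        result, best[0] = rbfs(best[2], best[1], best[0],
                               min(f_limit, alternative), path + [best[2]])
        if result is not None:
            return result, best[0]

solution, _ = rbfs('A', 0, h['A'], math.inf, ['A'])
print(solution)   # -> ['A', 'B', 'E', 'G']
```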
what does an informed search strategy do?
uses problem specific knowledge beyond the definition of the problem itself
Agent uses utility function to evaluate desirability of states that can result from each action
utility-based agent
Initial state
what the agent starts in
Genetic algorithm: mutation
randomly perturbs a candidate solution