AI Search
State Space
The set of all states that can arise in a problem (equivalently, the set of states a process can be in)
minimum remaining values (MRV) heuristic
for every unassigned variable X, count its legitimate values, i.e. the values whose assignment to X does not cause an immediate constraint violation; the next variable chosen is the one with the smallest set of legitimate values
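A minimal sketch of MRV selection in Python, assuming (hypothetically) that `domains` maps each variable to its current set of legitimate values and `assignment` holds the variables assigned so far:

```python
# Minimal MRV sketch: pick the unassigned variable with the fewest
# legitimate values. `domains` and `assignment` are assumed structures.
def select_mrv_variable(domains, assignment):
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

# Hypothetical example: B has the smallest remaining domain.
domains = {"A": {1, 2, 3}, "B": {1}, "C": {1, 2}}
print(select_mrv_variable(domains, assignment={}))  # -> B
```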
Unit Clause Heuristic
immediately assigns a truth value to a propositional symbol appearing as the only literal in a clause so as to make that clause evaluate to T
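A hedged sketch of spotting a unit clause, assuming clauses are sets of literals represented as (symbol, polarity) pairs and `model` is the current partial truth assignment (both representations are illustrative, not canonical):

```python
# Unit-clause sketch: if an unsatisfied clause has exactly one unassigned
# literal, that literal's symbol is forced so the clause evaluates to T.
def find_unit_assignment(clauses, model):
    for clause in clauses:
        satisfied = any(model.get(s) == pol for s, pol in clause if s in model)
        unassigned = [lit for lit in clause if lit[0] not in model]
        if not satisfied and len(unassigned) == 1:
            symbol, polarity = unassigned[0]
            return symbol, polarity
    return None  # no unit clause under this partial model

clauses = [{("P", True), ("Q", False)}, {("Q", True)}]
print(find_unit_assignment(clauses, model={}))  # -> ('Q', True)
```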
commutativity (for a CSP)
in order to come up with an assignment, it does not matter in which order we choose to assign values to variables (we have a free choice of ordering)
Artificial Intelligence
'AI is the study of how to make computers do things at which, at the moment, people are better'
definition of a constraint satisfaction problem
-variables X1, X2, . . . , Xn, each with a non-empty domain of values Di -constraints C1, C2, . . . , Cm, where each Cj involves some subset of the variables, its scope, and specifies the allowable combinations of values for the scope variables -Cj is usually given as a relation, i.e. a set of k-tuples whose ith elements are values from the domain of the ith scope variable -an assignment associates a value from Di with the variable Xi, for some or all of the variables
Definition of a Search Problem (Local Search)
1. Initial state 2. Descriptions of states reachable from any given state via the transition function 3. An objective function giving the value associated with a state
Definition of a Search Problem (Tree Search)
1. Initial state 2. Transition function 3. Goal test 4. Non-negative step cost function
Solution to a Search Problem (Tree Search)
A path in the search tree from the initial state to the goal state
Optimal Solution to a Search Problem (Tree Search)
A solution with the lowest path cost (path cost is the sum of all step costs along the path)
Heuristic/Informed Search
Additional information pertinent to the problem in hand is known (or constructed) and used.
Uninformed/Blind Search
Agents have no additional information beyond that provided in the problem definition.
Complete Algorithm
Always finds a solution if one exists
Optimal Algorithm
Always finds the optimal solution
Solution to a Search Problem (Local Search)
Any state
Agent
Anything perceiving its environment through sensors and acting upon that environment through actuators. • agent = architecture + program
Genetic Algorithm
At each iteration: • Randomly select two individuals X and Y from the current population P so that 'fitter' individuals are more likely to be selected • From X and Y, reproduce a child Z through crossover • With a small probability, mutate the child Z • Add Z to the new population newP • After the iteration, the new population newP has the same size as P • Replace P with newP and start the next iteration
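The iteration above can be sketched as follows, assuming individuals are bit strings and the fitness function is supplied by the problem (here, hypothetically, 1 + the number of 1s):

```python
import random

# One GA generation: fitness-proportionate selection, one-point
# crossover keeping the fitter child, and a small mutation chance.
def next_generation(population, fitness, mutation_prob=0.05):
    new_pop = []
    while len(new_pop) < len(population):
        # Fitter individuals are more likely to be selected as parents.
        x, y = random.choices(population,
                              weights=[fitness(p) for p in population], k=2)
        # One-point crossover produces two children; keep the fitter as Z.
        cut = random.randrange(1, len(x))
        a, b = x[:cut] + y[cut:], y[:cut] + x[cut:]
        z = max(a, b, key=fitness)
        # Small probability of flipping one random bit of Z.
        if random.random() < mutation_prob:
            i = random.randrange(len(z))
            z = z[:i] + ("1" if z[i] == "0" else "0") + z[i + 1:]
        new_pop.append(z)
    return new_pop  # same size as the old population

random.seed(0)
pop = ["0101", "1110", "0011", "1001"]
print(next_generation(pop, fitness=lambda s: 1 + s.count("1")))
```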
Evaluation Function
Choice of evaluation function determines the type of best-first search: • f(z) = depth of a node -> breadth-first search • f(z) = 1 / (depth of a node) -> depth-first search • Choice of evaluation function also determines the termination condition • Usually depends on a heuristic function h(z)
Percept Sequence
Complete history of everything the agent has ever perceived.
Performance Measure
Criterion for the success of an agent's behaviour. • External to the agent • Designed around what we want from the environment rather than how we want the agent to behave • The agent should perform appropriate actions in order to bring about desirable future percepts
Strategic Environment
The environment is deterministic apart from the actions of other agents
Rational Agent
For every possible percept sequence, the agent should select an action that is expected to maximise (on average) its performance measure.
Utility-Based Agent
Goal-Based Agent with a utility function to evaluate a state (or sequence of states).
Transition Rules
How to move from state to state
TSP
INSTANCE • A collection of cities • The distances between them • A start city AIM • To discover a tour of the cities • Starting and ending at the start city • Every other city is visited exactly once on the tour • The length of the tour is minimal (from amongst all possible tours)
Agent Program
Implementation of agent function, running in agent architecture.
Hillclimbing
Iteratively move to a better successor state, i.e. one whose objective function value f is higher, until no such successor state exists, at which point the algorithm terminates.
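A minimal hillclimbing sketch, assuming a user-supplied `neighbours` function yielding successor states and an objective `f` to maximise (both hypothetical):

```python
# Hillclimbing: repeatedly move to the best neighbour while it improves
# the objective; stop at a local maximum.
def hillclimb(state, f, neighbours):
    while True:
        best = max(neighbours(state), key=f, default=state)
        if f(best) <= f(state):
            return state  # no better successor exists: terminate
        state = best

# Hypothetical 1-D example: f peaks at x = 5.
f = lambda x: -(x - 5) ** 2
print(hillclimb(0, f, neighbours=lambda x: [x - 1, x + 1]))  # -> 5
```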
Simulated Annealing
Iteratively performs a local search from the current state X until the temperature drops to 0 at which point the algorithm halts. • Successor is chosen based on the objective function, but there is a chance (based on the temperature) that a random state will be chosen
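A sketch under illustrative assumptions: we maximise `f`, the temperature decays geometrically, and a tiny temperature is treated as "dropped to 0" (the schedule and neighbour function are not prescribed by the definition):

```python
import math, random

# Simulated annealing: always accept improvements; accept worse moves
# with probability exp(delta / temp), which shrinks as temp drops.
def simulated_annealing(state, f, neighbours, temp=10.0, cooling=0.95):
    while temp > 1e-3:  # geometric cooling never hits exactly 0
        succ = random.choice(neighbours(state))
        delta = f(succ) - f(state)
        if delta > 0 or random.random() < math.exp(delta / temp):
            state = succ
        temp *= cooling
    return state

f = lambda x: -(x - 5) ** 2
random.seed(1)
print(simulated_annealing(0, f, neighbours=lambda x: [x - 1, x + 1]))
```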
Model-Based Reflex Agent
Maintains internal state of environment (which can be evolving) • Input: percept • Update internal state using percept • Match rule with percept • Perform action based on rule
Agent Function
Mapping any given percept sequence to an action.
Branching Factor
Maximum number of successors of any state (children of a search tree node)
Goal-Based Agent
Model-Based Reflex Agent with a goal to work towards. Chooses the action which best fits its goal from a set of possible actions • More flexible
Time complexity of TSP
NP-complete (strictly, the decision version of TSP is NP-complete)
PEAS
Performance Environment Actuators Sensors
Search Tree
The root is our initial state; the nodes of the tree are the states we visit in our Tree Search algorithm.
Davis-Putnam
A SAT-solving algorithm. If some clause C evaluates to T then we may as well delete C from our formula before proceeding; if some clause C evaluates to F then the current assignment cannot lead to a model, so we stop exploring it and backtrack
Simple-Reflex Agent
Selects actions solely based on current percept • Input: percept • Match rule with percept • Perform action based on rule
Percept
Sensory input to an agent
Optimal Solution to a Search Problem (Local Search)
State whose objective value is minimum/maximum (as appropriate) over all states
Abstraction
We separate out important features and variations from the many unimportant ones.
backtracking search
a depth-first search with respect to chosen orderings of both variables and values
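A hedged sketch of backtracking search, assuming `domains` maps variables to candidate values (in a chosen order) and `consistent(var, val, assignment)` checks the constraints involving var against the current partial assignment:

```python
# Backtracking search: depth-first over assignments; undo (backtrack)
# when no value for the chosen variable is consistent.
def backtrack(assignment, domains, consistent):
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)  # fixed ordering
    for val in domains[var]:
        if consistent(var, val, assignment):
            result = backtrack({**assignment, var: val}, domains, consistent)
            if result is not None:
                return result
    return None  # no value works: backtrack

# Hypothetical example: colour X, Y, Z so all colours differ.
domains = {v: ["r", "g", "b"] for v in "XYZ"}
all_diff = lambda var, val, asg: val not in asg.values()
print(backtrack({}, domains, all_diff))  # -> {'X': 'r', 'Y': 'g', 'Z': 'b'}
```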
truth-table-entails
a simple implementation of the exhaustive model-checking procedure encompassed by the notion of entailment
Arc consistency
an arc (X, Y) of the constraint graph is consistent if the set of legitimate values for X is non-empty and, for every legitimate value u for X, there is at least one legitimate value v for Y such that the pair (u, v) satisfies the constraint between X and Y
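The enforcement step can be sketched as a `revise` function, assuming `allowed(u, v)` is a hypothetical test of whether (u, v) satisfies the binary constraint between X and Y:

```python
# Make arc (x, y) consistent: drop values of x with no supporting
# value in y's domain; report whether anything was removed.
def revise(domains, x, y, allowed):
    removed = {u for u in domains[x]
               if not any(allowed(u, v) for v in domains[y])}
    domains[x] -= removed
    return bool(removed)

# Hypothetical constraint: X < Y.
domains = {"X": {1, 2, 3}, "Y": {1, 2}}
revise(domains, "X", "Y", allowed=lambda u, v: u < v)
print(domains["X"])  # -> {1}
```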
Least constraining value heuristic
assists with the selection of a value for an already chosen variable X. Any choice of value for X will, in general, rule out certain values for other variables (from their sets of legitimate values); the sum total of all such ruled-out values is calculated, and a value for X for which this total is minimal is chosen
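As a small sketch, value ordering reduces to sorting by a rule-out count; `ruled_out(var, val)` is an assumed problem-specific counter of values eliminated elsewhere:

```python
# Least-constraining-value ordering: try the value that rules out the
# fewest legitimate values for the other variables first.
def order_values(var, domains, ruled_out):
    return sorted(domains[var], key=lambda val: ruled_out(var, val))

# Hypothetical counts: value "b" rules out fewest values elsewhere.
counts = {("X", "a"): 3, ("X", "b"): 1, ("X", "c"): 2}
print(order_values("X", {"X": ["a", "b", "c"]},
                   ruled_out=lambda v, val: counts[(v, val)]))
# -> ['b', 'c', 'a']
```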
Dominating Heuristic
h1 dominates h2 if for every node z, h1(z) >= h2(z)
pure symbol heuristic
looks for symbols that always appear either positively or negatively in every clause and assigns values accordingly
constraint propagation
propagating knowledge about currently assigned variables to currently unassigned variables by restricting the domains of values of currently unassigned variables
degree heuristic
select the unassigned variable X that is involved in constraints with the largest number of other unassigned variables
constraint graph
there is a vertex for every variable X; two vertices are joined by an edge if the two variables to which they correspond appear together in the scope of some constraint
forward checking
we maintain and amend, throughout the execution, the sets of legitimate values for each unassigned variable. When we have assigned the value v to X, we remove from the sets of legitimate values of all unassigned variables every value that is immediately ruled out by the assignment of v to X
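A sketch of the pruning step, assuming `conflicts(var, val, other, w)` is a hypothetical test of whether assigning val to var rules out value w for variable other:

```python
# Forward checking: after assigning val to var, prune immediately
# ruled-out values from every unassigned variable's legitimate set.
def forward_check(domains, var, val, assignment, conflicts):
    pruned = {}
    for other in domains:
        if other == var or other in assignment:
            continue
        bad = {w for w in domains[other] if conflicts(var, val, other, w)}
        domains[other] -= bad
        pruned[other] = bad  # remembered so it can be restored on backtrack
    return pruned

# Hypothetical example: no two variables may share a colour.
domains = {"X": {"r"}, "Y": {"r", "g"}, "Z": {"r", "g"}}
forward_check(domains, "X", "r", {}, lambda v, a, o, b: a == b)
print(domains["Y"], domains["Z"])  # -> {'g'} {'g'}
```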
Crossover
• A random bit position in the strings X and Y is chosen so that both strings are partitioned into a prefix and a suffix • The suffix of Y is then concatenated onto the prefix of X, and the suffix of X is concatenated onto the prefix of Y to get two children • The fittest of the two children so obtained is taken to be the child Z
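The steps above can be sketched for bit-string parents of equal length, with an assumed fitness function supplied by the problem:

```python
import random

# One-point crossover: split both parents at a random position,
# swap suffixes to get two children, keep the fitter as Z.
def crossover(x, y, fitness):
    cut = random.randrange(1, len(x))  # both strings partitioned here
    a, b = x[:cut] + y[cut:], y[:cut] + x[cut:]
    return max(a, b, key=fitness)

random.seed(0)
child = crossover("11110000", "00001111", fitness=lambda s: s.count("1"))
print(child)
```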
Critique of Simulated Annealing
• Better at getting out of local maxima/minima than Hillclimbing • Random selection improves exploration
Critique of Tree Search
• Can be proven to be optimally efficient (A* Search) • Uses a lot of memory
Critique of A* Search
• Complete - will terminate • Optimal - will find an optimal solution • Needs a good choice of heuristic function • Can be optimally efficient
Evaluation of a Search Strategy
• Completeness: Is the algorithm guaranteed to find a solution if there is one? • Optimality: Does the strategy find an optimal solution? • Time complexity: How long does it take to find a solution? • Space complexity: How much memory is needed to perform the search?
Critique of Hillclimbing
• Doesn't look beyond the immediate neighbourhood • Can get stuck in local maxima or minima
Definition of A* Search
• Evaluation function is f(z) = h(z) + g(z) • h(z) is the heuristic cost to get from the node to a goal • g(z) is the cost to reach the node • Terminates when a goal node is the minimal node on the fringe
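The definition can be sketched over an explicit graph, assuming (hypothetically) that `graph` maps a node to a list of (neighbour, step_cost) pairs and `h` is an admissible heuristic:

```python
import heapq

# A* search: the fringe is a priority queue ordered by f = g + h;
# terminate when a goal node is the minimal node popped from it.
def a_star(graph, start, goal, h):
    fringe = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(fringe, (g2 + h(succ), g2, succ, path + [succ]))
    return None

# Hypothetical graph; h = 0 everywhere is trivially admissible.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)]}
print(a_star(graph, "A", "C", h=lambda n: 0))  # -> (['A', 'B', 'C'], 2)
```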
Best First Search
• Expand a fringe node with a minimal value according to a given evaluation function • Put children on the fringe and repeat • Generalised BFS/DFS with a heuristic function
Consistent Heuristic
• For every node z in the search tree and for every successor node z' of z, the step-cost c of the transition from z to z' is such that h(z) <= c + h(z'). • On any path from the root to a search tree node, the f-value of the nodes is non-decreasing • Always admissible
Task Environment Bases
• Fully vs Partially observable • Deterministic (next env state only determined by current state) vs Stochastic • Episodic vs Sequential (current decision could affect all future decisions) • Static vs Dynamic (env can change while agent deliberates) • Discrete vs Continuous • Single-agent vs Multi-agent
What Search Method to Use (Tree vs Local)
• If only the goal state is of interest then Local Search • If the path of actions to a goal are of interest then Tree Search
TSP Transition Rules
• If we are in some state <cs, c2, ..., ci>, where i < n, then we can move to state <cs, c2, ..., ci, ci+1> iff ci+1 does not appear on the list cs, c2, ..., ci • If we are in some state <cs, c2, ..., cn> then we can move to state <cs, c2, ..., cn, cs>
Critique of DFS
• Uses little memory • Can get stuck in infinite cycles • Not complete: will not always terminate/find a solution if there is one • Memory required is O(bd), where b is the branching factor and d is the maximum depth of any node in the search tree • Exponential time complexity
Space Comparison (Tree vs Local Search)
• Local search algorithms usually use less memory as only the current state needs to be remembered • Tree search needs to remember the whole path in the search tree
Critique of Local Search
• More efficient in terms of memory • Gives reasonable solutions in very large state spaces • Will not guarantee optimal solutions
Critique of Greedy Best First Search
• Not guaranteed to give an optimal solution • Not guaranteed to terminate • Space/time complexity is O(b^d) in the worst case, where b is the branching factor and d is the maximum depth of any node in the search tree
Definition of Breadth First Search
• State representation: (id, state, parent_id, action, path_cost, depth) • The transition function, the initial state, the goal test and so on all come as part of the problem specification. • Uses a QUEUE
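A sketch of the queue-based loop, with paths standing in for the (id, parent_id, ...) node records; `successors` and `is_goal` come from the problem specification (names here are illustrative):

```python
from collections import deque

# Breadth-first tree search: the frontier is a FIFO QUEUE, so the
# shallowest unexpanded path is always expanded next.
def bfs(initial, successors, is_goal):
    frontier = deque([[initial]])
    while frontier:
        path = frontier.popleft()  # FIFO expansion
        if is_goal(path[-1]):
            return path
        for _, nxt in successors(path[-1]):
            frontier.append(path + [nxt])
    return None

# Hypothetical example: count up from 0 to 3 by +1 or +2.
succ = lambda s: [("+1", s + 1), ("+2", s + 2)] if s < 3 else []
print(bfs(0, succ, is_goal=lambda s: s == 3))  # -> [0, 1, 3]
```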
Definition of Depth First Search
• State representation: (id, state, parent_id, action, path_cost, depth) • The transition function, the initial state, the goal test and so on all come as part of the problem specification. • Uses a STACK
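The same loop with a stack instead of a queue gives DFS; a depth limit is added here as an illustrative guard against the infinite paths DFS can otherwise follow:

```python
# Depth-first tree search: the frontier is a LIFO STACK, so the
# deepest unexpanded path is always expanded next.
def dfs(initial, successors, is_goal, limit=10):
    frontier = [[initial]]
    while frontier:
        path = frontier.pop()  # LIFO expansion
        if is_goal(path[-1]):
            return path
        if len(path) <= limit:
            for _, nxt in successors(path[-1]):
                frontier.append(path + [nxt])
    return None

# Hypothetical example: count up from 0 to 3 by +1 or +2.
succ = lambda s: [("+1", s + 1), ("+2", s + 2)] if s < 3 else []
print(dfs(0, succ, is_goal=lambda s: s == 3))  # -> [0, 2, 3]
```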
Greedy Best First Search
• The evaluation function equals the heuristic function; that is, f(z) = h(z) • Termination when a goal node appears on the fringe
Search Tree vs State Space
• The search tree can be seen as the "unrolling" of the state space • The states appearing in the search tree form a subset of the state space (a single state may appear at many nodes)
Admissible Heuristic
• The value h(z) of any node z in the search tree is always at most the cost of a minimal cost path to a goal node • In "geographic" problems, note that the straight-line distance between two locations is an admissible heuristic
Critique of BFS
• Uses a lot of memory • Complete: guaranteed to terminate/find a solution if there is one • Exponential time and space complexity
Heuristic Function
• h(z) >= 0 is the estimated cost of the cheapest path in the search tree from node z to any goal node • If z is a goal node then necessarily h(z) = 0 • h(z) only depends upon the state corresponding to the node z