AI
model checking (in logic)
(to check if KB |= α) enumerates all possible models to check that α is true in all models in which KB is true, that is, that M(KB) ⊆ M(α).
Pros and cons of local search
- They are not systematic. + Useful when the path to the goal doesn't matter. + They use very little memory—usually a constant amount. + They can often find reasonable solutions in large state spaces for which systematic algorithms are unsuitable.
admissible heuristic
An admissible heuristic never overestimates the cost to reach the goal.
Minimax
Choose the action with the highest minimax value—this gives the best possible payoff against best play.
A game can be formally defined as a kind of search problem with the following elements
Initial state: How the game starts — Player(s): Whose turn is it in state s? — Actions(s): Set of legal moves in s — Result(s, a): The transition model — Terminal test: Is the game over? — Utility(s, p): Final numeric value, for player p, of a game that ends in state s
Minimax algorithm
Minimax(s) = Utility(s) if s is a terminal state; max over a ∈ Actions(s) of Minimax(Result(s, a)) if Player(s) = Player 1; min over a ∈ Actions(s) of Minimax(Result(s, a)) if Player(s) = Player 2
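The recursion above can be sketched in Python over a game tree given as nested lists, where leaves hold the utility for Player 1 (MAX); this encoding is my own, not the textbook's:

```python
def minimax(node, is_max):
    # Leaves carry the utility directly; internal nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    children = [minimax(child, not is_max) for child in node]
    return max(children) if is_max else min(children)

# Three MIN nodes under a MAX root (the classic AIMA-style example):
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # min of each branch is 3, 2, 2 -> MAX picks 3
```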
Adversarial search problems
Multiple agents with conflicting goals give rise to adversarial search problems—also known as games.
Is depth first search optimal?
No
Is greedy best first search optimal?
No
Is Hill-climbing search complete?
No: it can get stuck at a local maximum, ridges (a series of local maxima) are hard to navigate, and it can get stuck on a plateau (a flat maximum or a shoulder), so you should restrict the number of sideways steps allowed.
Is greedy best first search complete?
No, it can get into loops. (Only complete with the graph-search version, which stores visited nodes.)
Time and space complexity of uniform cost search
O(b^(1 + ⌊C*/ϵ⌋)), where C* is the cost of the optimal solution and ϵ is a lower bound on the cost of each action.
Space and time complexity of bidirectional search
O(b^(d/2))
Space complexity of breadth first search
O(b^d)
Time complexity of IDS
O(b^d)
Time complexity of breadth first search
O(b^d)
Time complexity of depth first search
O(b^m)
Time and space complexity of greedy best first search
O(b^m), for tree search version. But a good heuristic can reduce it substantially
time complexity of minimax
O(b^m), where m=maximum depth of the tree, b=branching factor (like depth first)
Space complexity of IDS
O(bd)
Space complexity of depth first search
O(bm) (this is the main advantage, because it only has to store one path from root to leaf)
Space complexity of minimax
O(bm) like depth first
Is breadth first search optimal?
Only if path cost is a nondecreasing function of node depth, for example, if all actions have identical cost.
Arc consistency algorithm (constraint propagation)
Pop an arbitrary arc i —> j from the set. Make the domain of node j consistent with the domain of node i. If the domain of j changes, add to the set all arcs j —> k.
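The loop on this card can be sketched as follows; the `domains`/`neighbors`/`constraint` interface is illustrative, not a fixed API:

```python
from collections import deque

def ac3(domains, neighbors, constraint):
    """domains: var -> set of values (pruned in place);
    neighbors: var -> iterable of connected vars;
    constraint(x, vx, y, vy) -> True if the pair of values is allowed."""
    queue = deque((i, j) for i in domains for j in neighbors[i])
    while queue:
        i, j = queue.popleft()
        # Make j's domain consistent with i's (as on the card).
        revised = False
        for vj in set(domains[j]):
            if not any(constraint(i, vi, j, vj) for vi in domains[i]):
                domains[j].discard(vj)
                revised = True
        if revised:
            if not domains[j]:
                return False  # a domain emptied: inconsistency detected
            for k in neighbors[j]:
                if k != i:
                    queue.append((j, k))  # re-check arcs out of j
    return True

# Two-variable "not equal" example: B's value 1 loses its support from A.
doms = {'A': {1}, 'B': {1, 2}}
nbrs = {'A': ['B'], 'B': ['A']}
ok = ac3(doms, nbrs, lambda x, vx, y, vy: vx != vy)
print(ok, doms)  # True {'A': {1}, 'B': {2}}
```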
forward checking
Whenever a variable X is assigned, the forward-checking process establishes arc consistency for it: for each unassigned variable Y that is connected to X by a constraint, delete from Y's domain any value that is inconsistent with the value chosen for X.
Is A* complete?
Yes
Is breadth first search complete?
Yes if b and d are finite.
Is bidirectional search complete?
Yes if both directions use breadth-first search and b is finite.
Is IDS complete?
Yes if path cost is a nondecreasing function of node depth, for example, if all actions have identical cost.
Is depth first search complete?
Yes, if we use the graph-search version, which avoids repeated states and redundant paths, and the state space is finite; otherwise no.
Is minimax complete?
Yes, if game tree is finite
Is minimax optimal
Yes, but only against an optimal opponent.
UNIFORM-COST SEARCH Complete?
Yes, unless there is a path with an infinite sequence of zero-cost actions
Describe the basic structure all search algorithms share
The root is the initial state. You expand the root by drawing a branch for each possible action; each branch leads to the state that results from that action. The leaf nodes form the frontier. You keep expanding nodes in the frontier until a solution is found or there are no nodes left to expand. Which node to expand next, and when, depends on the search algorithm.
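The shared skeleton can be sketched in Python; a FIFO frontier gives breadth-first behaviour, and swapping the container (stack, priority queue) gives other strategies. The `successors` interface and the toy problem are my own, illustrative choices:

```python
from collections import deque

def graph_search(start, goal_test, successors):
    """Generic skeleton: expand frontier nodes until a goal is found.
    successors(state) -> iterable of (action, next_state)."""
    frontier = deque([(start, [])])  # (state, list of actions so far)
    explored = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for action, nxt in successors(state):
            if nxt not in explored:  # graph search: skip repeated states
                explored.add(nxt)
                frontier.append((nxt, path + [action]))
    return None  # frontier exhausted: no solution

# Toy problem: reach 5 from 1 using +1 or *2 moves.
succ = lambda s: [('+1', s + 1), ('*2', s * 2)]
print(graph_search(1, lambda s: s == 5, succ))  # ['+1', '*2', '+1']
```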
what is a model (in logic)
a 'possible world'—e.g. in arithmetic, each different assignment of values to the variables is a different model.
model in propositional logic
a model simply fixes the truth value—true or false—for every proposition symbol
Difference between a ply and a move
A move is when each player has had their go; each individual go is a ply.
knowledge base
a set of sentences that declare the facts about the world (declarative approach)
What does optimal mean for adversarial search
a solution is optimal if it leads to outcomes at least as good as any other strategy when playing against a perfect opponent.
game tree
a tree where the nodes are game states and the edges are moves
Genetic algorithms
A variant of stochastic beam search in which successor states are generated by combining two parent states rather than by modifying a single state. Begin with a set of k randomly generated states, called the population. Each state, or individual, is represented as a string over a finite alphabet. Each state is rated by the objective function, or (in GA terminology) the fitness function. Pairs are selected at random for reproduction, with probability proportional to fitness. For each pair to be mated, a crossover point is chosen randomly from the positions in the string, and the offspring are created by crossing over the parent strings at that point. Finally, each location is subject to random mutation with a small independent probability. Look at figure 4.6
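One possible rendering of that loop in Python (the function names, parameters, and toy "one-max" fitness are my own assumptions, not from the textbook):

```python
import random

def genetic_algorithm(population, fitness, mutate_p=0.1, generations=100):
    """Select parents with probability proportional to fitness, cross over
    at a random point, mutate each position with small probability."""
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        new_pop = []
        for _ in range(len(population)):
            p1, p2 = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, len(p1))  # crossover point
            child = [g if random.random() > mutate_p else random.randint(0, 1)
                     for g in p1[:cut] + p2[cut:]]
            new_pop.append(child)
        population = new_pop
    return max(population, key=fitness)

random.seed(1)
# Toy fitness: number of 1s in a bit string ("one-max").
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
best = genetic_algorithm(pop, sum)
print(best)
```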
every consistent heuristic is also...
admissible
Evaluation function (in adversarial search)
Alpha-beta still needs to search all the way to some terminal states. To combat this we introduce an evaluation function, which gives an estimate of the expected utility of the game from a given position. It is typically a weighted linear function: Eval(s) = w1·f1(s) + w2·f2(s) + ... + wn·fn(s), where the wi are weights and the fi are features.
Complete inference algorithm
an inference algorithm is complete if it can derive any sentence that is entailed.
Syntax of propositional logic
Atomic sentences consist of a single proposition symbol. Each such symbol stands for a proposition that can be true or false. True is the always-true proposition and False is the always-false proposition. Complex sentences are constructed from simpler sentences, using parentheses and logical connectives.
What factors affect the complexity of an algorithm?
b, the branching factor (maximum number of successors); d, the depth of the shallowest goal node; m, the maximum length of any path in the state space.
Motivation behind bidirectional search
b^(d/2) + b^(d/2) << b^d
Stochastic beam search
chooses the k successors randomly, biased towards good ones.
minimum remaining-values (MRV)heuristic (In backtracking search for the question of which variable should we expand next)
choosing the variable with the fewest "legal" values
How do you define a constraint satisfaction problem
consists of three components, X,D, and C: X is a set of variables, {X1,...,Xn}. D is a set of domains, {D1,...,Dn}, one for each variable. C is a set of constraints that specify allowable combinations of values.
Semantics of a logic
defines the truth of each sentence with respect to each possible world
Backtracking search
Depth-first search that chooses values for one variable at a time and backtracks when a variable has no legal values left to assign.
Advantages and disadvantages of iterative deepening search
Duplicates work but has modest space complexity O(bd) and is complete (combines the advantages of BFS and DFS). Iterative deepening search may seem wasteful because states are generated multiple times, but it turns out this is not too costly. The reason is that in a search tree with the same (or nearly the same) branching factor at each level, most of the nodes are in the bottom level, so it does not matter much that the upper levels are generated multiple times. The nodes on the bottom level (depth d) are generated once, those on the next-to-bottom level are generated twice, and so on, up to the children of the root, which are generated d times. So the total number of nodes generated in the worst case is N(IDS) = (d)b + (d − 1)b² + ... + (1)b^d, which gives a time complexity of O(b^d)—asymptotically the same as breadth-first search. In general, iterative deepening search is the preferred uninformed search method when the state space is large and the depth of the solution is unknown.
Effective Branching Factor
The effective branching factor b*: if the total number of nodes generated by A* for a particular problem is N and the solution depth is d, then b* is the branching factor that a uniform tree of depth d would have to have in order to contain N + 1 nodes.
Why does dominance translate into efficiency
Every node with f(n) < C* (the optimal solution cost) will surely be expanded. This is the same as saying that every node with h(n) < C* − g(n) will surely be expanded. But because h2 is at least as big as h1 for all nodes, every node that is surely expanded by A* search with h2 will also surely be expanded with h1, and h1 might cause other nodes to be expanded as well. Hence, it is generally better to use a heuristic function with higher values, provided it is consistent and the computation time for the heuristic is not too long.
What does it mean for a heuristic function to dominate another heuristic function
For any node n, h2(n) ≥ h1(n); we then say that h2 dominates h1. Domination translates directly into efficiency.
Explain the "formulate, search, execute" design
formulate the problem by specifying the situation and the goal Use a search algorithm to find a solution Execute the steps recommended by the solution
heuristic function
h(n)= estimated cost of the cheapest path from the state at node n to a goal state. if n is a goal node, then h(n)=0.
open-loop system
A system where, while the agent is executing the solution sequence, it ignores its percepts when choosing an action, because it knows in advance what they will be. An agent that carries out its plans with its eyes closed, so to speak, must be quite certain of what is going on. Control theorists call this an open-loop system, because ignoring the percepts breaks the loop between agent and environment.
Is uniform cost search optimal?
yes
Is bidirectional search optimal?
yes if both directions use breadth-first search and step costs are identical.
KB ⊢_i α means what? (sideways turnstile ⊢ with subscript i)
α is derived from KB by inference algorithm i
Entailment
α |= β if and only if, in every model in which α is true, β is also true. Equivalently, α |= β if and only if M(α) ⊆ M(β), where M(α) denotes the set of models of α (the possible worlds in which α is true).
Time and space complexity of A*?
For most problems, the number of states A* visits is still exponential in the length of the solution. It usually runs out of space long before it runs out of time.
Hill climb search
Generate successors of current solution. Pick the best successor. Repeat. Until no neighbour has better value than the current solution.
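The loop above can be sketched in a few lines of Python; the 1-D landscape and the `value`/`successors` interface are my own toy assumptions:

```python
def hill_climb(state, value, successors):
    """Move to the best neighbour until none improves on the current state."""
    while True:
        best = max(successors(state), key=value, default=state)
        if value(best) <= value(state):
            return state  # no neighbour is better: a (local) maximum
        state = best

# 1-D example: maximize -(x - 7)^2 over the integers, stepping by +/-1.
result = hill_climb(0, lambda x: -(x - 7) ** 2, lambda x: [x - 1, x + 1])
print(result)  # climbs from 0 up to the peak at 7
```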
Least constraining value (In backtracking search for the question of In what order should the values of the variable be tried?)
Given a variable, choose the value that constrains the other unassigned variables the least.
How to deal with repeated states in the tree search algorithm
When you expand a node, add it to the explored set, and only add nodes to the frontier if they are not in the explored set. This is called graph search.
What is a logic
A formal language for representing information such that valid conclusions can be drawn.
consistent heuristic
A heuristic h(n) is consistent if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n': h(n) ≤ c(n,a,n')+h(n') . This is a form of the general triangle inequality
Simulated annealing
A hill-climbing algorithm that never makes "downhill" moves toward states with lower value (or higher cost) is guaranteed to be incomplete; a pure random walk would be complete but very inefficient. Simulated annealing is the happy medium: instead of always picking the best move, it picks a random move. If the move improves the situation, it is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1. The probability decreases exponentially with the "badness" of the move—the amount ΔE by which the evaluation is worsened.
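A minimal sketch of that acceptance rule; the cooling schedule and the 1-D toy landscape are my own illustrative assumptions:

```python
import math, random

def simulated_annealing(state, value, random_successor, schedule, steps=5000):
    """Always accept uphill moves; accept a downhill move with
    probability exp(delta / T), which shrinks as T cools."""
    for t in range(1, steps + 1):
        T = schedule(t)
        if T <= 0:
            return state
        nxt = random_successor(state)
        delta = value(nxt) - value(state)  # the DeltaE on the card
        if delta > 0 or random.random() < math.exp(delta / T):
            state = nxt
    return state

random.seed(0)
peak = simulated_annealing(
    0,
    lambda x: -(x - 7) ** 2,               # objective with a single peak at x = 7
    lambda x: x + random.choice([-1, 1]),  # random neighbouring state
    lambda t: max(0.01, 0.99 ** t),        # simple geometric cooling schedule
)
print(peak)
```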
Random-restart hill climbing
A series of hill-climbing searches from randomly generated initial states, until a goal is found. Probability of finding the goal tends to 1 with the number of restarts.
We can evaluate an algorithm's performance in four ways:
Completeness: Is the algorithm guaranteed to find a solution when there is one? Optimality: Does the strategy find the optimal solution? Time complexity: How long does it take to find a solution? Space complexity: How much memory is needed to perform the search?
How should you choose a heuristic function if you have a few options but none dominate the others
Define h(n) = max{h1(n), ..., hm(n)}.
Describe depth first search
Depth-first search always expands the deepest node in the current frontier of the search tree The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors.
What is A* search
Expand the node that has the lowest f(n) = g(n) + h(n), where g(n) is the path cost so far and h(n) is the heuristic estimate; f(n) is the estimated cost of the cheapest solution through n.
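A sketch of A* with a priority queue ordered by f(n) = g(n) + h(n); the `successors` interface, the toy problem (reach 9 from 0 via +1 at cost 1 or +3 at cost 2.5), and the heuristic are my own assumptions:

```python
import heapq

def astar(start, goal_test, successors, h):
    """successors(state) -> iterable of (next_state, step_cost)."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):  # better path to nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

succ = lambda s: [(s + 1, 1), (s + 3, 2.5)]
h = lambda s: max(0, 9 - s) * 2.5 / 3  # admissible: best possible cost per unit
path, cost = astar(0, lambda s: s == 9, succ, h)
print(path, cost)  # [0, 3, 6, 9] 7.5
```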
What is breadth first search?
Expand the shallowest nodes first: the root node is expanded first, then all the successors of the root node, then their successors, and so on.
What is iterative deepening search?
Run DFS to a fixed depth limit z. Start at z = 1; if no solution is found, increment z and rerun.
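That loop can be sketched as a depth-limited DFS wrapped in an increasing limit; the toy problem (reach 5 from 1 via +1 or *2 moves) is my own illustration:

```python
def depth_limited(state, goal_test, successors, limit):
    """DFS that gives up below the depth limit."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return None
    for nxt in successors(state):
        path = depth_limited(nxt, goal_test, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None

def ids(start, goal_test, successors, max_depth=50):
    # Rerun DFS with depth limit z = 0, 1, 2, ... (as on the card).
    for z in range(max_depth + 1):
        path = depth_limited(start, goal_test, successors, z)
        if path is not None:
            return path
    return None

print(ids(1, lambda s: s == 5, lambda s: [s + 1, s * 2]))  # [1, 2, 4, 5]
```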
A problem can be defined formally by five components:
The initial state that the agent starts in. A description of the possible actions available to the agent: given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s; we say each of these actions is applicable in s. A description of what each action does—the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s; we also use the term successor for any state reachable from a given state by a single action. (Together, the initial state, actions, and transition model implicitly define the state space of the problem—the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions.) The goal test, which determines whether a given state is a goal state; sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. A path cost function that assigns a numeric cost to each path; the problem-solving agent chooses a cost function that reflects its own performance measure. In short: Initial state, Actions, Transition model, Goal test, Path cost. Mnemonic: I Am Tom, Go Poo! I always trust GP.
Local beam search
The local beam search algorithm keeps track of k states rather than just one. It begins with k randomly generated states. At each step, all the successors of all k states are generated. If any one is a goal, the algorithm halts. Otherwise, it selects the k best successors from the complete list and repeats.
constraint satisfaction problem (idea behind it)
The main idea is to eliminate large portions of the search space all at once by identifying variable/value combinations that violate the constraints. The problem "state" is not an indivisible unit: it is a set of variables (a "factored state representation"). The objective is to assign values to this set of variables subject to a set of constraints. Could be worth looking at the textbook more on this section because she doesn't explain a lot of it.
Abstraction
The process of removing unnecessary detail from a representation of a problem.
Why doesn't adversarial search give just a sequence of actions.
There is an unpredictable opponent, so the solution needs to specify a move for every possible opponent response; it's more like a policy.
Is A* optimal?
Tree search version is optimal with an admissible heuristic. Graph search version is optimal with a consistent heuristic. A* is optimally efficient for any given consistent heuristic. No other optimal algorithm is guaranteed to expand fewer nodes than A*.
Semantics of propositional logic
How to compute the truth value of any sentence given a model. True is true in every model and False is false in every model. The truth value of every other proposition symbol must be specified directly in the model; for example, in the model m1 given earlier, P1,2 is false. For complex sentences, we have five rules, which hold for any subsentences P and Q in any model m (here "iff" means "if and only if"): ¬P is true iff P is false in m; P ∧ Q is true iff both P and Q are true in m; P ∨ Q is true iff either P or Q is true in m; P ⇒ Q is true unless P is true and Q is false in m; P ⇔ Q is true iff P and Q are both true or both false in m.
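The five rules can be turned into a small recursive evaluator; the nested-tuple encoding of sentences and the connective names are my own convention:

```python
def pl_true(sentence, model):
    """Evaluate a propositional sentence in a model (dict: symbol -> bool).
    Sentences are symbols or tuples like ('and', P, Q)."""
    if isinstance(sentence, str):
        return model[sentence]
    op, *args = sentence
    vals = [pl_true(a, model) for a in args]
    if op == 'not':     return not vals[0]
    if op == 'and':     return vals[0] and vals[1]
    if op == 'or':      return vals[0] or vals[1]
    if op == 'implies': return (not vals[0]) or vals[1]
    if op == 'iff':     return vals[0] == vals[1]
    raise ValueError(op)

m = {'P': True, 'Q': False}
# P => (Q or not Q) is true in every model:
print(pl_true(('implies', 'P', ('or', 'Q', ('not', 'Q'))), m))  # True
```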
alpha-beta pruning
If you keep track of the value of the best choice found so far for Max (α) and the value of the best choice found so far for Min (β), there may be some nodes you can skip. It does not affect the solution found. Time complexity reduces to O(b^(m/2)) with perfect ordering. For more detail, look at the first recorded lecture at 28 mins.
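A minimal sketch of the pruning, using the same nested-list game-tree encoding as the minimax example (my own convention, leaves are utilities for MAX):

```python
def alphabeta(node, is_max, alpha=float('-inf'), beta=float('inf')):
    """Minimax with pruning: stop exploring a node's remaining children
    once its value can no longer affect the choice at an ancestor."""
    if isinstance(node, (int, float)):
        return node
    if is_max:
        v = float('-inf')
        for child in node:
            v = max(v, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, v)
            if alpha >= beta:
                break  # beta cutoff: MIN above will never allow this line
        return v
    v = float('inf')
    for child in node:
        v = min(v, alphabeta(child, True, alpha, beta))
        beta = min(beta, v)
        if alpha >= beta:
            break  # alpha cutoff: MAX above already has something better
    return v

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))  # same value as plain minimax: 3
```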
examples of constraint satisfaction problems
map colouring, sudoku, job-shop scheduling
Table look up
Memorise what happens at the end of the game: below a certain depth, compute everything exactly and store it in a table. For instance, chess with 5 or fewer pieces has been solved, so you could have a lookup table for that.
what is a 'sound' or 'truth preserving' algorithm
An inference algorithm that derives only entailed sentences is called sound or truth preserving.
What is bidirectional search
Run two simultaneous searches—one forward from the initial state and the other backward from the goal—and replace the goal test with a check to see whether the frontiers of the two searches intersect.
How do you solve constraint satisfaction problems
search and constraint propagation (e.g. backtracking search and arc consistency algorithm)
degree heuristic (In backtracking search for the question of which variable should we expand next)
selecting the variable that is involved in the largest number of constraints on other unassigned variables
Agent
something that acts
What 6 components would each node of a search tree have if it were a data structure
State, action, parent, children, depth, path cost.
What is the syntax of a logic
syntax defines how to form sentences
local search for constraint satisfaction problems
The initial state assigns a value to every variable; the search changes the value of one variable at a time; works well with the min-conflicts heuristic.
What is space measured in terms of in search algorithms?
the maximum number of nodes stored in memory.
What is time measured in terms of in search algorithms?
the number of nodes generated during the search
Zero-sum game
total payoff to all players is the same for every instance of the game.
What is greedy best first search
tries to expand the node that is closest to the goal. (using the heuristic function as the estimate of the cost)
Describe uniform cost search
Uniform-cost search expands the node n with the lowest path cost g(n). Also, the goal test is applied to a node when it is selected for expansion rather than when it is first generated (in case a more optimal path to it is still on the frontier).