CS 482 - Artificial Intelligence!
What is the Turing Test?
(1950) In response to the question "can machines think?", Alan Turing proposed that if a person could communicate with, but not see, both a machine and a person, and could guess which was which only 50% of the time (i.e., no better than chance), then we should regard that machine as a thinking thing, an AI. The experiment should be repeated to a statistically significant degree.
Define: Informed Search
A search that uses knowledge about the specific problem (contrast with blind search)
What is artificial intelligence?
The study of methods for accomplishing natural tasks that require human intelligence
Describe the Hill Climbing search. Is it admissible?
(Essentially a combination of depth-first search and heuristics; older than A*, which replaced it.)
1. Do DFS, but use an h-function to select the best successor at the deepest leaf.
2. If all the successors' values are worse than the parent's value, back up (propagate the values up the tree).
These are the two changes added to DFS. It is NOT admissible: hill climbing tries to improve on the current node by looking only at its successors, so it settles for a local optimum rather than the global optimum (the minimal path). See the sketch below.
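A minimal sketch of this greedy scheme, assuming hypothetical h (smaller is better, per the course convention) and successors callbacks; the toy usage at the end is illustrative:

def hill_climb(start, h, successors, max_steps=1000):
    """Greedy hill climbing: repeatedly move to the best-rated successor.

    Stops at a local optimum -- when no successor improves on the
    current node -- so the result is not guaranteed to be minimal.
    """
    current = start
    for _ in range(max_steps):
        succs = successors(current)
        if not succs:
            return current                  # dead end: stuck at a local optimum
        best = min(succs, key=h)            # h: smaller is better
        if h(best) >= h(current):
            return current                  # every successor is worse: stop
        current = best
    return current

# Toy usage: minimize |x - 7| over the integers, stepping by +/- 1.
print(hill_climb(0, h=lambda x: abs(x - 7),
                 successors=lambda x: [x - 1, x + 1]))   # -> 7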
Describe Iterative Deepening. Is it admissible?
(blind search) A combination of DFS and breadth-first search:
1. Set depth bound D = 1.
2. Do DFS using depth bound D; if a solution is found, stop.
3. D = D + 1; go to 2.
ID is admissible: the first solution found is at minimum depth. It is DFS-based because you back up and search again. It is a little inefficient because the shallower levels are recalculated on every iteration, but the analysis shows its Big-O exceeds DFS's only by a constant factor (which is dropped). A sketch follows.
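A minimal sketch, assuming hypothetical goal and successors callbacks and tree-structured search (no duplicate detection):

def depth_limited_dfs(state, goal, successors, limit):
    """DFS with a depth bound; returns a path to a goal or None."""
    if goal(state):
        return [state]
    if limit == 0:
        return None
    for s in successors(state):
        path = depth_limited_dfs(s, goal, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iterative_deepening(start, goal, successors, max_depth=50):
    """Run DFS with depth bound D = 1, 2, 3, ... until a solution appears.

    The first solution found is at the shallowest possible depth, which
    is why ID is admissible (given unit-cost operators).
    """
    for depth in range(1, max_depth + 1):
        path = depth_limited_dfs(start, goal, successors, depth)
        if path is not None:
            return path
    return None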
Give a state-space representation of the Missionaries and Cannibals problem.
(x, y, z) where
x = # of missionaries on the left bank
y = # of cannibals on the left bank
z = L or R, the position of the boat
IS = (3,3,L) → GS = (0,0,R)
10 operators, e.g.:
movem: (x,y,L) → (x-1,y,R) if x ≥ 1 and (x = 1 or x-1 = y)
movec:, movemm:, movecc:, movemc: similar
returnm: (x,y,R) → (x+1,y,L)
... ...
Solution path: 11 operators (minimal)
Give a state-space representation of the Tower of Hanoi problem with 3 disks.
(x, y, z) where
x = post # of the small disk
y = post # of the medium disk
z = post # of the large disk
Initial = (1,1,1); goal state = (3,3,3)
movesmall(w): (x,y,z) → (w,y,z)
movemedium(w): (x,y,z) → (x,w,z) if x ≠ y and x ≠ w
movelarge(w): (x,y,z) → (x,y,w) if x ≠ z and y ≠ z and w ≠ x and w ≠ y
Solution path = 7 operators long (verified in the sketch below)
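A minimal Python encoding of these three operators and their preconditions; the 7-step plan is one minimal solution, shown here as an illustration (the specific move order is not from the card):

# Each operator returns the new state, or None when its precondition fails.
def movesmall(state, w):
    x, y, z = state
    return (w, y, z)                         # the small disk is always free

def movemedium(state, w):
    x, y, z = state
    return (x, w, z) if x != y and x != w else None

def movelarge(state, w):
    x, y, z = state
    return (x, y, w) if x != z and y != z and w != x and w != y else None

# A 7-operator minimal solution: verify each step is legal.
state = (1, 1, 1)
plan = [(movesmall, 3), (movemedium, 2), (movesmall, 2), (movelarge, 3),
        (movesmall, 1), (movemedium, 3), (movesmall, 3)]
for op, w in plan:
    state = op(state, w)
    assert state is not None                 # every precondition holds
print(state)                                 # (3, 3, 3) -- the goal state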
Describe Parametric Learning for games.
Assume we are given the form of the h/e function plus all the features to be included: a linear form,
f(state) = w1·x1(state) + ... + wn·xn(state)
xi = the values of the different features // given
wi = the correct weights on each feature // learned
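A hypothetical sketch of such a linear form with a simple error-driven weight update; the learning rate and the target value (e.g. a backed-up search value) are assumptions, not from the card:

def evaluate(weights, features, state):
    """f(state) = sum of w_i * x_i(state); features are callables."""
    return sum(w * x(state) for w, x in zip(weights, features))

def adjust(weights, features, state, target, rate=0.01):
    """One update nudging each w_i so f(state) moves toward a target
    value (e.g. a backed-up search value)."""
    error = target - evaluate(weights, features, state)
    return [w + rate * error * x(state) for w, x in zip(weights, features)]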
Describe forward pruning.
When each node has many possible successors, limit the number of successors you keep on each expansion: use the h/e function to order the successors and keep only the best ones (throw the obviously bad nodes away). Keep the n best nodes found, as in the sketch below.
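A minimal sketch; the default n0 and linear taper are assumptions for illustration:

def n_best(successors, h, n):
    """n-best forward pruning: keep only the n best-rated successors
    (smaller h is better, per the course convention)."""
    return sorted(successors, key=h)[:n]

def tapered_n_best(successors, h, depth, n0=10):
    """Tapered n-best: n shrinks as depth grows (here n = n0 - depth,
    a hypothetical taper; any decreasing function of depth works)."""
    return sorted(successors, key=h)[:max(1, n0 - depth)]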
Describe Structural Learning for games.
- Assume we are given the features x1, ..., xn.
- We want the program to learn the form of e/h itself, via a signature table for e/h: the n features x1...xn are given, and each xi can take on only discrete values. The e/h values (the table's final column) are what is learned.
Use: take the input values for x1...xn, look up the row (a1, ..., an), and return the final column of that row. We don't need to know the form of e/h here; the program makes adjustments to the last column.
(A) We can calculate e/h from "book training", i.e., a set of good examples; use correlation coefficients on conflicts.
(B) 1. For each node, do an immediate evaluation and compare it with the backed-up value.
2. Compute the difference (immediate minus backed-up).
3. Adjust the e value in the table to make this difference smaller (using discrete regression).
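A hypothetical sketch of option (B)'s table adjustment; the default value and learning rate are assumptions:

# Signature table: e/h values stored per feature signature (a1, ..., an).
table = {}                                   # (a1, ..., an) -> learned e value

def e(signature):
    return table.get(signature, 0.0)         # unseen rows default to 0

def adjust(signature, backed_up, rate=0.1):
    """Pull the stored e value toward the backed-up search value."""
    table[signature] = e(signature) + rate * (backed_up - e(signature))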
Describe a breadth first search with problem-decomposition.
- Keep open = a queue (closed holds the expanded nodes)
- node = [problem description (IS, GS, and {operators} if the operators change), parent, label = solved | futile | ?, AND/OR bit for graphs]
- We need two procedures, solved() and futile(), that update the labels of a node's ancestors when its label changes from unknown
1) Put the root on open; label = ?; d(root) = 0
2) Pick the top node on open (call it n); put n on closed. Three cases: n is primitive, a deadend, or otherwise.
If n is primitive:
3) Label n 'solved' → call solved()
4) If the root = 'solved', then exit with the answer
5) Delete from open all nodes whose ancestors were labeled 'solved' in step 3
6) Go to 2
If n is a deadend:
3) Label n 'futile' → call futile()
4) If the root = 'futile', then exit with failure
5) Delete from open all nodes whose ancestors were labeled 'futile' in step 3
6) Go to 2
Otherwise:
3) Expand n; put the successors on open; set their back pointers to n; set label = ?; d(successor) = d(n) + 1
4) Go to 2
*Note: if branches are AND'd, all of them must be solved, since they will all execute. Step 5 is an optimization: it is safe because we have already found a minimal (or equally minimal) solution. How do we know when a path is a deadend? We choose a depth bound D, and n is a deadend when d(n) ≥ D.
Regarding performance describe experimentation. What is one difficulty and one possible problem?
- Test on a set of examples (one way of ranking/rating heuristics).
Difficulty: showing that one program is faster than others on a set is hard, because differences in machines, optimization, etc. confound the comparison.
Problem: your program may work really well on one set of examples but fail on others. Examples that the program has been tuned to are called "cooked" examples.
Describe a staged search. When can it be applied? Is it admissible?
Applied when space is exhausted:
1) Keep the best node(s) on open, and keep the best path to each of these node(s) on closed
2) Delete the rest of the nodes
3) Start the search up again
Each repetition of the above is a stage. One problem is that, because you delete nodes, you may redo a lot of nodes and searching (duplicates) without knowing it; each stage can produce duplicates. If your heuristic is bad, you may never find a solution.
Admissible: No, even if each stage is admissible. Once you delete/throw away nodes, the search becomes inadmissible.
What are the 8 search methods?
1) British Museum Search
2) Breadth First Search (BFS)
3) Depth First Search (DFS)
4) Best First (Heuristic) Search
5) A* Search
6) Hill Climbing
7) Iterative Deepening
8) Iterative Deepening + A* (IDA*)
What are the 8 areas of AI?
1) Certain games (Go, poker)
2) Common sense problem solving (Siri, Cortana)
3) Expert systems problem solving (medical diagnosis, mortgages)
4) Understanding natural language
5) Pattern recognition problems (visual and speech)
6) Deductive and inductive reasoning
7) Learning
8) Art
Define the sum cost f(x) of an AND subtree.
1. For a tip node n:
   f(n) = 0 if n is primitive
   f(n) = ∞ if n is a deadend
   f(n) = h(n) otherwise
2. For an interior node n with k AND successors:
   f(n) = Σ_{i=1}^{k} [f(n_i) + cost(n, n_i)]
   So a node's cost is the sum, over all its descendants n_i, of the descendant's cost f(n_i) plus the cost of the edge from n to n_i.
3. Sum cost = f(root)
Define the max cost f(x) of an AND subtree.
1. For a tip node n:
   f(n) = 0 if n is primitive
   f(n) = ∞ if n is a deadend
   f(n) = h(n) otherwise
2. For an interior node n with k AND successors:
   f(n) = max_{i=1..k} [f(n_i) + cost(n, n_i)]
   So the node takes the maximum cost over its descendants (descendant cost plus edge cost).
3. Max cost = f(root)
Both cost functions are sketched below.
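A minimal sketch of both cost functions over a hypothetical node type; the kind/h/children fields and the example tree are assumptions for illustration:

import math
from dataclasses import dataclass, field

@dataclass
class Node:                                      # hypothetical AND-tree node
    kind: str                                    # "primitive" | "deadend" | "tip" | "interior"
    h: float = 0.0                               # heuristic value (tips only)
    children: list = field(default_factory=list) # [(child, edge_cost), ...]

def sum_cost(n):
    """f(n) per the sum-cost definition above."""
    if n.kind == "primitive":
        return 0
    if n.kind == "deadend":
        return math.inf
    if n.kind == "tip":
        return n.h
    return sum(sum_cost(c) + cost for c, cost in n.children)

def max_cost(n):
    """f(n) per the max-cost definition: the dearest successor decides."""
    if n.kind == "primitive":
        return 0
    if n.kind == "deadend":
        return math.inf
    if n.kind == "tip":
        return n.h
    return max(max_cost(c) + cost for c, cost in n.children)

root = Node("interior", children=[(Node("tip", h=3), 1), (Node("primitive"), 2)])
print(sum_cost(root), max_cost(root))            # 6 and 4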
Describe four refinements you can make to the best-first search.
1. Forward pruning: use the heuristic/evaluation to order successors after expansion and chop off the hopeless cases (+ nodes: small values; - nodes: large values).
   - n-best forward pruning
   - tapered n-best forward pruning, where n = a function of depth
2. Waiting for a stable position (every search tries to do this): always stop the search in a stable position, i.e., one with no potential big changes in value at the next move. Definition: a tip node is one that is past a minimum depth AND in a stable position.
3. Book moves: at the beginning of the game, you can look up what to do for a set of standard patterns (= an opening book). There may also be an ending book.
4. Use phases of the game: the evaluation function changes based on the phase. Divide the game into phases and design an e (static evaluation) or a heuristic for the search for each phase.
Name and describe three search modifications for AND/OR graphs. (AO*)
1. Forward pruning of subproblems: throw away any subproblem that has a large cost f() associated with it.
2. Staged search: if out of memory, periodically keep the best potential subtree(s) from step 2, put them on open, and throw away the rest of the subtrees.
3. Expand a burst of nodes in AO*: instead of picking one tip node in step 3, pick a burst (5-10 tip nodes) on T before re-evaluating the best subtree.
Define when a node in an AND/OR subgraph is solved, assuming it's normalized.
1. It is a terminal node that represents a primitive problem, or
2. it is a nonterminal with OR successors and at least one successor is labeled solved, or
3. it is a nonterminal with AND successors and all successors are solved.
Describe two approaches for handling n adversaries, n > 2.
1. Paranoid approach: assume everyone is out to get +, so evaluate nodes with a single e; positive values are good for +, negative values are bad, and 0 is neutral. The main difference from 2-player games is that what is bad for + isn't always good for each opponent; we are only looking at +. To back up values: at our positions, take the max; at all of the opponents' positions, take the min. This models opponents who will sacrifice themselves to defeat you, which can be extreme. The extension for a 4-player game would be mini-mini-mini-max.
2. Everyone for themselves (a free-for-all): we need n evaluation functions e1, e2, ..., en (one for each player), where ei(node) is positive if the position is good for player i and negative if it is bad. Each node in the search is labeled with all n player values (k1, k2, ..., kn), i.e., the values of e1, e2, ..., en on the position. For backup: if it is the ith player's turn, back up the n-tuple that has the largest ith component (the best move from that position), as in the sketch below.
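A one-function sketch of this backup rule; the example tuples are illustrative:

def maxn_backup(tuples, i):
    """Back up n-player value tuples: the player to move (index i)
    picks the successor tuple with the largest ith component."""
    return max(tuples, key=lambda t: t[i])

print(maxn_backup([(3, 1, 2), (0, 5, 1), (2, 2, 2)], i=1))   # (0, 5, 1)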
Describe Rote Learning for games.
a. From examples:
 - store IS + solution path
 - store a position + its best move (book moves are an example)
b. From its own experience:
 - after solving a problem, store IS + solution
 - in a game, after working out the best next move, store the (position, move) pair, so when the position comes up again, skip the search and use the stored data
Define a "deadend" in a graph.
A *deadend* is a non-primitive node that has no successors (can't be solved or broken down)
Describe penetrance (P). Explain one benefit and one problem.
A performance measure.
- P = L/T = (length of the solution found) / (total number of nodes generated, i.e., opened and closed)
- People like it because it is easy to compute and can be tracked while searching
- P is always a fraction ≤ 1, and the larger the better; P = 1 would be perfect, meaning every node generated was on the solution path. For example, a 6-operator solution found after generating 30 nodes gives P = 6/30 = 0.2.
- P doesn't depend on the language or software that you're using
One problem with penetrance is that it doesn't allow comparisons across different problems, since exponential searches have significantly different node counts; it will always say the one with fewer nodes is better.
Define a solution in the state-space method.
A solution in the state-space representation is a sequence of operators op1, op2, ..., opk such that opk(...(op2(op1(initial state)))...) evaluates to a goal state.
Define a solution.
An AND subgraph that labels the root as solved.
Describe a bidirectional search mode.
Search from both the initial state and the goal state until the two frontiers connect; a solution is then found.
Describe effective branching factor. What's the biggest drawback?
B = the average number of successors generated at each node, assuming a full tree of depth L, where L is the length of the solution path and T is the number of nodes in the tree. Counting nodes level by level (not counting the root), there are B nodes at level one, B² at level two, ..., B^L at the level where the solution is found, so
T = B(B^L − 1)/(B − 1)
The biggest drawback is that there is no explicit formula for B in terms of T and L (see the numeric workaround sketched below). This is why we often use P: it is easy to calculate, whereas B is very useful (and reasonably independent of the problem) but hard to compute. With the branching factor, smaller is better: B = 2 gives a binary tree, B = 3 a ternary tree; ideally B = 1, meaning we make the right choice every time.
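Because B has no closed form in T and L, it can be found numerically; a minimal bisection sketch (the tolerance is an arbitrary choice):

def effective_branching_factor(T, L, tol=1e-9):
    """Numerically solve T = B(B^L - 1)/(B - 1) for B by bisection;
    the node total is monotone increasing in B."""
    def total(B):
        return L if abs(B - 1) < 1e-12 else B * (B**L - 1) / (B - 1)
    lo, hi = 1.0, float(T)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if total(mid) < T else (lo, mid)
    return (lo + hi) / 2

print(effective_branching_factor(T=30, L=4))   # ~2.0: 30 nodes, length 4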
Describe IDA*. Is it admissible?
A combination of ID (a blind-search method) + A* (an informed search):
1. Let threshold T = f(IS) = h(IS).
2. Do DFS using T as the bound on f; if the goal state is found, stop.
3. Set T = min f(n) over the nodes n that were cut off by the bound in step 2; go to 2.
Admissible? Yes, if f satisfies the H-N-R theorem, i.e., the heuristic is never greater than the true cost to the goal for any state. A sketch follows.
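A minimal sketch, assuming a hypothetical successors(state) that yields (next_state, step_cost) pairs:

import math

def ida_star(start, goal, successors, h):
    """IDA*: depth-first searches bounded by an f = g + h threshold."""
    def search(state, g, threshold, path):
        f = g + h(state)
        if f > threshold:
            return f, None                  # cut off; f seeds the next threshold
        if goal(state):
            return f, path
        minimum = math.inf
        for s, cost in successors(state):
            t, found = search(s, g + cost, threshold, path + [s])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    threshold = h(start)                    # step 1: T = f(IS) = h(IS)
    while threshold < math.inf:
        threshold, found = search(start, 0, threshold, [start])
        if found is not None:
            return found                    # admissible if h underestimates
    return None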
Describe Problem-Decomposition Searches. How do they differ from state-space representation?
Break the problem into various sub-problems and solve those. The lowest-level problems are called primitive, because they can be solved easily. This differs from state-space for 2 reasons:
- Here, each node is a complete problem, whereas in state-space, each node is simply a state.
- There are AND branches (which are also ordered), not just OR branches as before. AND branches are drawn with their branch lines connected to one another; OR branches are unconnected. A sub-problem graph may still have OR branches: OR branches represent choices, and we usually assign variables to the different choices.
  OR = choices of how to break up the problem
  AND = sub-problems that must all be completed
Definition: problem-decomposition representation
1) Problem description of the given problem (a full state-space description)
2) Set of problem-reduction operators: problem description → {problem descriptions} // the resulting descriptions are AND'd together
3) Set of primitive problems
See normalization.
Describe the British Museum Search. Is it admissible? Is it systematic?
British Museum search (a random walk): things are stored randomly, so you simply have to search until you find the target, expanding a random node at each choice. It is not systematic and not admissible.
Define: Admissible Search
Complete search + solution is always the minimal length *Complete Search = Given that a solution exists and enough time and space, the program will eventually halt with a solution
What are states in state-space representation?
Data structures (n-tuples)
Describe one approach to handling hidden information in a game.
Estimate, with a probability function, the chance of certain information being present, then model it like chance.
Describe the Breadth First Search. Is it admissible? Is it systematic?
Expand the oldest node on the queue at each point.
BFS algorithm (tree search) (AKA uniform-cost algorithm):
a. Put IS on open (a queue)
b. If open is empty, then fail; else select the top element of open (= n) and put n on closed
c. Expand n; put the successors at the bottom of open and set their back pointers to n
d. Check if any successor is a goal. If so, exit with the solution (= the sequence of back pointers in reverse order); else go to b
For graph search, in step c ignore any duplicate successors (already on open or closed).
Note: BFS is admissible. Even though it is a costly search (memory and time) when the solution path is long, it is popular for this reason. BFS is exponential, O(a^n), where a = the number of operators (choices) and n = the depth of the solution. It is systematic. A sketch of the graph version follows.
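A minimal sketch of the graph version, assuming hypothetical is_goal and successors callbacks:

from collections import deque

def bfs(initial, is_goal, successors):
    """Breadth-first (graph) search; duplicates already seen on open or
    closed are ignored, and back pointers let us read out the path."""
    open_q = deque([initial])
    parent = {initial: None}                # doubles as the open+closed set
    while open_q:
        n = open_q.popleft()                # b. take the oldest node
        for s in successors(n):             # c. expand n
            if s in parent:
                continue                    # ignore duplicate successors
            parent[s] = n
            if is_goal(s):                  # d. goal test on generation
                path = [s]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]           # back pointers in reverse order
            open_q.append(s)
    return None                             # open empty: fail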
True or false: α-β search is more efficient than minimax at the cost of approximating the minimax solution.
False. α-β has been proven to give the same moves as minimax, but more efficiently.
Compare and contrast BFS with DFS
DFS is not complete if the depth bound is too shallow (the solution may lie below the bound), and even when it finds a solution, it may not be the minimal one, so it is not admissible. BFS is always complete and admissible.
How do we incorporate chance in games?
At a chance node, multiply each edge's probability by the resulting value (from the evaluation function) at the node it leads to, and sum these products to back up the expected value.
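A one-function sketch of the expected-value backup; the outcome numbers are illustrative:

def chance_value(outcomes):
    """Back up a chance node: expected value over (probability, value) edges."""
    return sum(p * v for p, v in outcomes)

# e.g. a coin-flip node with two outcomes:
print(chance_value([(0.5, 10), (0.5, -4)]))   # 3.0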
Describe forward and backward search modes. What changes need to be made to an algorithm or what must it have?
Forward search works from the initial state toward the goal state; backward search works from the goal state back toward the initial state. For backward search:
1. the operators must be invertible
2. there must be a small number of goal states (or a unique one)
Regarding searching, is it better to work from a large set to a small set or from a small set to a large set?
From a small set to a large set: there are more ways to hit the mark, so we are more likely to find the best, minimal path.
Describe GPS. What does it attempt to do?
General Problem Solver (a general-purpose decomposition heuristic, AKA means-ends analysis), an algorithm designed by Newell, Shaw, and Simon (Carnegie Mellon). It attempts to solve all state-space problems using the same heuristics.
Input: a state-space description. Call GPS(initial state, goal state).
Func GPS(x, y) // a recursive algorithm
1. Find the differences between states x and y and select the main difference (call it D); this step is sometimes called the greatest-difference step.
2. For D, select the operator that reduces it the most (call it k, the key operator).
3. If k can be applied to x, then reduce to the subproblem GPS(k(x), y); else generate two subproblems:
   i. GPS(x, z)
   ii. GPS(k(z), y)
   where z is a state to which k can be applied, and the two subproblems are AND'd together. A sketch follows.
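A hypothetical recursive sketch; all five helper callbacks are assumptions standing in for the difference table and operator selection that the card leaves abstract (and, like the card's bare recursion, it has no loop protection):

def gps(x, y, differences, key_operator, apply_op, enabling_state):
    """Means-ends analysis sketch. Hypothetical helpers:
    differences(x, y) -> list of differences (empty when x satisfies y),
    key_operator(d)   -> the operator that reduces difference d the most,
    apply_op(k, x)    -> k(x), or None if k's preconditions fail at x,
    enabling_state(k, x) -> a state z where k can be applied.
    Returns the list of operators applied, or None on failure."""
    ds = differences(x, y)
    if not ds:
        return []                           # no difference left: solved
    k = key_operator(ds[0])                 # ds[0] = the main difference D
    kx = apply_op(k, x)
    if kx is not None:                      # k applies: one subproblem
        rest = gps(kx, y, differences, key_operator, apply_op, enabling_state)
        return None if rest is None else [k] + rest
    z = enabling_state(k, x)                # else: two AND'd subproblems
    first = gps(x, z, differences, key_operator, apply_op, enabling_state)
    if first is None:
        return None
    rest = gps(apply_op(k, z), y, differences, key_operator, apply_op, enabling_state)
    return None if rest is None else first + [k] + rest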
Define: Complete Search
Given that a solution exists and enough time and space, the program will eventually halt with a solution
Use the state space representation to model chess.
Gotcha! It can be difficult or impossible to model games with an opponent using state space representation. In these cases, we use "adversarial search" techniques.
State the Hart-Nilsson-Raphael theorem. What search does it relate to? Is it a necessary condition, sufficient condition, or neither?
Hart-Nilsson-Raphael (1968): let ĥ(s) = the real distance from s to a goal. If 0 ≤ h(s) ≤ ĥ(s) for all states s, and g > 0 for all operators, then A* is admissible. It relates to A*. The above is a sufficient condition, not a necessary one. In simpler terms, the heuristic must never exceed the actual distance to the solution; that way we don't overshoot the minimal solution before all cheaper possibilities have been searched.
Describe the optimal α/β search.
Maximal efficiency of α/β = optimal α/β: the best node at each choice is always expanded first, so the thresholds equal the final values at every node (the worst case is plain minimax). Theorem: assume minimax generates T = Σ_{i=1}^{D} B^i nodes, where B is the average branching factor. Then optimal α/β generates T′ = 2 Σ_{i=1}^{D} B^{i/2}, i.e., an average branching factor of √B. The actual savings usually fall somewhere between none and this optimal √B.
Describe the MiniMax search. Where does the intelligence come from?
Minimax search, starting with the current position X+:
1. Generate the game tree down to some pre-determined limit (using BFS, DFS, AO*). How far? Down to win+/loss+, or to a time, space, or depth limit on the deepest/shallowest node.
2. After the search terminates, apply a static evaluation function e():
e(n) > 0 → good for +
e(n) ~ 0 → an equal position
e(n) < 0 → good for -
+∞ or -∞ means a win for that player.
3. Back the values up the tree to the top level: at OR (+) levels, the parent node takes the max of its successors; at AND (-) levels, the parent node takes the min of its successors.
4. Select the move at the top that gives the max value.
The intelligence comes from 2 sources: the e-function e(n), and the number of levels searched. A sketch follows.
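A minimal sketch, assuming hypothetical successors and evaluate (the static e-function) callbacks:

def minimax(node, depth, maximizing, successors, evaluate):
    """Minimax backup: max at + (OR) levels, min at - (AND) levels."""
    succs = successors(node)
    if depth == 0 or not succs:
        return evaluate(node)               # static evaluation at the limit
    values = [minimax(s, depth - 1, not maximizing, successors, evaluate)
              for s in succs]
    return max(values) if maximizing else min(values)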
Is AI just modeling the brain/human behavior?
No, we're looking for what works to solve the problem. Airplanes : birds :: AI : brain
What are five ways AI's can learn in regards to games?
0. Representation reformulation (Einstein style): a program that sees a state-space representation and realizes that a better representation exists. (There has yet to be a really good program that does this, but there are attempts; they often involve "meta-operators", each of which typically packages a sequence of normal operators.)
Then there are 4 levels of improving e/h:
1. Feature learning (heuristic learning): discover new features or heuristics that should be included in e/h
2. Structural learning: discover the best form of the e/h functions
3. Parametric learning (statistical learning): the program makes adjustments to the weights on the features in e/h
4. Rote learning: the program recognizes what to do in specific cases
Describe the Depth First Search. Is it admissible?
Pick the newest node on the open stack to expand at each point; D = the depth bound.
DFS algorithm:
a. Put IS on open (a stack); d(IS) = 0
b. If open is empty, then fail; else pop the top node of open (call it n) and put it on closed
c. If d(n) ≥ D, go to b; else expand n and push its successors on open; set their back pointers to n; d(successor) = d(n) + 1
d. If any successor is a goal state, then stop with the solution; else go to b
Is DFS admissible? No. If D is too shallow, it is not even complete (the solution may lie below the bound), and when it does find a solution, it may not be the right minimum, because of the depth bound. A sketch follows.
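A minimal tree-search sketch with an explicit stack, assuming hypothetical is_goal and successors callbacks:

def dfs(initial, is_goal, successors, D):
    """Depth-first search with depth bound D."""
    stack = [(initial, [initial])]          # (node, path from IS)
    while stack:
        n, path = stack.pop()               # newest node first
        if len(path) - 1 >= D:
            continue                        # depth bound reached: back up
        for s in successors(n):
            if is_goal(s):
                return path + [s]           # stop with the solution
            stack.append((s, path + [s]))
    return None                             # open empty: fail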
Apply the state-space representation for: Water-Jug Problem Given: 4 and 3 gallon jug - both initially empty a water faucet Goal: Get exactly 2 gallons in a 4 gallon jug.
Representation:
1) State = (x, y) where x = amount in the 4-gal jug, y = amount in the 3-gal jug
2) IS = (0,0)
3) Goal = (2, n) where n is anything
4) Operators:
fill4: (x,y) → (4,y) if x < 4
fill3: (x,y) → (x,3) if y < 3
dump4: (x,y) → (0,y)
dump3: (x,y) → (x,0)
pour3-4: (x,y) → (x+y, 0) if x+y ≤ 4, or (4, x+y-4) if x+y > 4
pour4-3: (x,y) → (0, x+y) if x+y ≤ 3, or (x+y-3, 3) if x+y > 3
5) What is the solution path? There is more than one, but all the minimal ones have 6 operators:
(0,0) - fill3 → (0,3) - pour3-4 → (3,0) - fill3 → (3,3) - pour3-4 → (4,2) - dump4 → (0,2) - pour3-4 → (2,0)
The path length is 6, because we count the operators. We can find the solution by expanding the possible states and searching, as in the sketch below.
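A minimal executable version of this representation; the operators follow the card (with the pour arithmetic as corrected above), and BFS finds a minimal 6-operator sequence:

from collections import deque

# The six operators as (name, function) pairs; a function returns the
# new (x, y), or None when its precondition fails.
OPS = [
    ("fill4",   lambda x, y: (4, y) if x < 4 else None),
    ("fill3",   lambda x, y: (x, 3) if y < 3 else None),
    ("dump4",   lambda x, y: (0, y)),
    ("dump3",   lambda x, y: (x, 0)),
    ("pour3-4", lambda x, y: (x + y, 0) if x + y <= 4 else (4, x + y - 4)),
    ("pour4-3", lambda x, y: (0, x + y) if x + y <= 3 else (x + y - 3, 3)),
]

def solve():
    """BFS over (x, y) states; returns a minimal operator sequence."""
    start = (0, 0)
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s[0] == 2:                       # goal (2, n): 2 gal in the 4-gal jug
            ops = []
            while parent[s] is not None:
                s, name = parent[s]
                ops.append(name)
            return ops[::-1]
        for name, op in OPS:
            t = op(*s)
            if t is not None and t not in parent:
                parent[t] = (s, name)
                frontier.append(t)

print(solve())   # 6 operators, e.g. the path traced above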
Define solution in relation to adversarial search.
Solution = an AND subgraph that labels the root win+ (= a winning strategy for +). Conversely, if no such subgraph exists, there is no winning strategy that guarantees + a win.
Apply the state-space representation for: Picture a monkey in a room (a) with a box (b) and bananas (c) hanging from the ceiling. The monkey would need to climb on top of the box to get the bananas.
State-space representation:
1) (u, v, x, y) where
u = position of the monkey
v = position of the box
x = 1 if the monkey is on the box, 0 if on the floor
y = 1 if the monkey has the bananas, 0 if not
2) IS = (a, b, 0, 0)
3) GS = (_, _, _, 1) // _ is Prolog notation for "any value"
4) Operators:
move(z): move to a position z in the room: (u,v,0,y) → (z,v,0,y), i.e., only when the monkey is on the ground
push(z): push the box to a new position: (u,u,0,y) → (z,z,0,y)
climb: (u,u,0,y) → (u,u,1,y)
jump: (u,u,1,y) → (u,u,0,y)
grasp: (c,c,1,0) → (c,c,1,1)
letgo: (u,v,x,1) → (u,v,x,0)
5) Solution path: (a,b,0,0) - move(b) → (b,b,0,0) - push(c) → (c,c,0,0) - climb → (c,c,1,0) - grasp → (c,c,1,1) = GS
What two search modifications canNOT be applied to AND/OR graphs? (AO*)
The backward or bidirectional search modifications.
Define the state-space representation of problems.
The minimum requirements are:
1) A general description of the states
2) An initial state (IS)
3) A set of goal states (by description)
4) A set of legal operators, plus their effects on states and their constraints
Additionally, one of these is typically required:
5) A solution with the minimal number of operators used
6) A solution with minimal total cost (if operators have different costs)
Consider step 3 of the AO* search:
3. Select a tip node of T (call it n) and move n from open → closed. One of three things happens:
i. n is primitive
4. Label n solved and call Solved() on T
5. If root = solved, then exit with T as the solution
6. Remove from open all nodes whose ancestors were labeled solved in step 4
7. Go to 2
ii. n is a deadend
4. Label n futile and call Futile() on T
5. If root = futile, then exit with failure
6. Remove from open all nodes whose ancestors were labeled futile in step 4
7. Go to 2
iii. otherwise
4. Expand n; put the successors on open with back pointers to n; label = ?
5. Compute the f values of the successors
6. Go to 2
Assume that there are many tip nodes with the same values. Which do we expand first?
There's some game theory behind it. If T is a solution, then all of its tip nodes will have to be expanded eventually, so the order doesn't matter. If T is not a solution, we want to get off T as soon as possible, and one bad tip node suffices. Which node should we pick? We don't yet know which case holds; this is similar to the prisoner's dilemma. Assume the second case: if T is a solution we lose nothing (all of its nodes get expanded anyway), but if it isn't, we can abandon T after one bad node. So work the worst node first, the one with the highest h() value. (Think like a lazy person.)
Describe the adversarial search method. When is it used?
Used when modeling a 2-player game.
Game tree representation (2-person, perfect information, zero-sum, win/lose only):
1. States = (X+) or (X-) where X is the configuration of the game; + = our own turn to go next, - = the opponent's turn
2. Initial state
3. Operators = legal moves
4. A description of the winning positions for + (= losing for -) and the winning positions for - (= losing for +)
It is very similar to the state-space representation but significantly different: we can have AND/OR trees, but we need two kinds of nodes. + nodes generate OR successors and - nodes generate AND successors (note that - moves generate + nodes and vice versa).
Solution = a win for +; futile = a loss for +.
How can you normalize an AND/OR tree?
We can normalize an AND/OR tree by adding dummy nodes, so that every node has either all AND successors or all OR successors. We want the root to have only OR branches (the choices); if the root's branches were AND'd, it would have to carry out all of them. When AND branches are connected directly to the root, we simply add a dummy node between the AND branches and the root.
Define futile in regards to graphs.
A node in an AND/OR tree is futile if at least one of the following holds:
1) it is a deadend;
2) it has OR successors and all of them are labeled futile;
3) it has AND successors and at least one is futile.
What features does an artificial intelligence solution have for game playing?
a) Look at the position and determine the legal moves possible
b) Evaluate each legal move
c) Select the best move based on max (for you) or min (when predicting the opponent)
Define: Blind/Uninformed Search
The choice of which node to expand does not depend on the problem: these algorithms are given only the problem definition and no other information (no heuristics).
What are operators in state-space representation?
Functions that transform one state into another: state → state
Describe the AO* search for AND/OR trees. Is it admissible?
node = [problem description, parent, label = solved | futile | ?, f() = the informed part (evaluation/cost function), AND/OR bit]
Three functions are needed: solved(), futile(), and cost() = { sumCost or maxCost, using the heuristic }
1. Store the start node S on open; calculate f(S); label = ?
2. Call cost() and select the best (minimal-cost) AND subtree with S as the root (call it T)
3. Select a tip node of T (call it n) and move n from open → closed. One of three things happens:
i. n is primitive
4. Label n solved and call Solved() on T
5. If root = solved, then exit with T as the solution
6. Remove from open all nodes whose ancestors were labeled solved in step 4
7. Go to 2
ii. n is a deadend
4. Label n futile and call Futile() on T
5. If root = futile, then exit with failure
6. Remove from open all nodes whose ancestors were labeled futile in step 4
7. Go to 2
iii. otherwise
4. Expand n; put the successors on open with back pointers to n; label = ?
5. Compute the f values of the successors
6. Go to 2
Admissible? Sometimes yes, sometimes no; it depends on the h function (same theorem as for A*). The cost function f() must satisfy two conditions:
1. 0 ≤ h(node) ≤ ĥ(node) for all nodes (h must always underestimate the true cost of a solution)
2. The cost of every branch in the tree must be greater than 0
If these are satisfied, then AO* is admissible (sufficient conditions).
Describe the Best-First Search. Is it admissible?
node = [state, parent, heuristic value]
Algorithm (tree):
1. Put the start state on open; calculate h(IS)
2. If open is empty, then fail; else remove the node with the smallest h-value from open (call it n) and put it on closed
3. Expand n; set the back pointers of the successors to n; calculate their h-values; put them on open
4. If any successor is a goal, then succeed; else go to 2
Graph version: if a successor is a duplicate, don't store it.
It is NOT admissible: it halts as soon as a goal is found. A sketch follows.
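A minimal graph-version sketch using a priority queue keyed on h (is_goal and successors are hypothetical callbacks; this variant tests the goal when a node comes off open):

import heapq, itertools

def best_first(initial, is_goal, successors, h):
    """Best-first (graph) search: always expand the open node with the
    smallest h-value; duplicate successors are not stored."""
    tie = itertools.count()                 # tie-breaker for equal h-values
    open_heap = [(h(initial), next(tie), initial)]
    parent = {initial: None}
    while open_heap:
        _, _, n = heapq.heappop(open_heap)  # smallest h first
        if is_goal(n):
            path = [n]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for s in successors(n):
            if s not in parent:             # graph search: skip duplicates
                parent[s] = n
                heapq.heappush(open_heap, (h(s), next(tie), s))
    return None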
Describe the A* Search. Is it admissible?
node = [state, parent, heuristic h(), cost g()]
Very similar to the best-first search, but the evaluation function also takes cost into account: f(state) = h(state) + g(state), where h() is the heuristic function and g() is the cost of reaching the state from the initial state (if every operator costs 1, g is the depth).
The search is sometimes admissible; this depends on the heuristic function. The heuristic must never exceed the actual distance to the solution (see the Hart-Nilsson-Raphael theorem); that way we don't overshoot the minimal solution before all cheaper possibilities have been searched. A sketch follows.
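A minimal graph-search sketch, assuming a hypothetical successors(state) that yields (next_state, step_cost) pairs:

import heapq, itertools

def a_star(initial, is_goal, successors, h):
    """A*: expand the open node with the smallest f = g + h; admissible
    when h never overestimates and every step cost is positive (H-N-R)."""
    tie = itertools.count()                 # tie-breaker for equal f-values
    open_heap = [(h(initial), next(tie), initial, 0)]
    best_g = {initial: 0}
    parent = {initial: None}
    while open_heap:
        f, _, n, g = heapq.heappop(open_heap)
        if is_goal(n):
            path = [n]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for s, cost in successors(n):
            g2 = g + cost
            if g2 < best_g.get(s, float("inf")):
                best_g[s], parent[s] = g2, n
                heapq.heappush(open_heap, (g2 + h(s), next(tie), s, g2))
    return None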
Define: Heuristic
Specialized knowledge about a problem that improves the efficiency of the search (tricks, tips, "rules of thumb", common-sense shortcuts).
Heuristic function: h(state) = a number ≥ 0 (the smaller the value, the better), with h(goal state) = 0 and h(dead end) = ∞.
/* The number is a rating. Note: a blind search doesn't have any knowledge about the problem, whereas here we have goal states, ratings, etc. */
Describe the α-β search. What other search is it built on?
α-β: optimized minimax, developed in 1958 by Newell, Shaw, and Simon. It prunes bad branches. No one really uses plain minimax anymore; everyone uses α-β. Chess (Deep Blue) is done using α-β with backward pruning (forward pruning may throw away an answer).
Idea: evaluate tip nodes as soon as they are generated, and use those evaluations to give a preliminary threshold to the next level up.
α = threshold at a + node (a lower bound on its final value)
β = threshold at a - node (an upper bound on its final value)
2 rules for cutoffs:
1. Search can be discontinued below a - node if its β-threshold ≤ the α-threshold of any higher + node (an α-cutoff)
2. Search can be discontinued below a + node if its α-threshold ≥ the β-threshold of any higher - node (a β-cutoff)
α-β search algorithm (works with DFS, ID, AO*, IDA*, but not BFS!):
1. Search until you generate a pre-determined tip node
2. Evaluate the tip node and give it a final value (FV) from the e-function
3. Back FVs up the tree as far as possible using the minimax idea (min at AND subtrees, max at OR subtrees). If the root has an FV, stop.
4. Advance/update a threshold value to the next level up (only one level)
5. Check for an α/β cutoff using the 2 rules: if you find one, make that cutoff threshold value a final value and go to 3; else go to 1
α-β has been proven to give the same moves as minimax, but more efficiently. A sketch follows.
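A minimal recursive sketch (DFS order), assuming hypothetical successors and evaluate callbacks; it returns the same value minimax would:

import math

def alphabeta(node, depth, alpha, beta, maximizing, successors, evaluate):
    """Alpha-beta: minimax values with cutoffs; same answer, fewer nodes."""
    succs = successors(node)
    if depth == 0 or not succs:
        return evaluate(node)               # tip node: static e-function
    if maximizing:                          # + node: raise alpha
        value = -math.inf
        for s in succs:
            value = max(value, alphabeta(s, depth - 1, alpha, beta,
                                         False, successors, evaluate))
            alpha = max(alpha, value)
            if beta <= alpha:
                break                       # β-cutoff below this + node
        return value
    value = math.inf                        # - node: lower beta
    for s in succs:
        value = min(value, alphabeta(s, depth - 1, alpha, beta,
                                     True, successors, evaluate))
        beta = min(beta, value)
        if beta <= alpha:
            break                           # α-cutoff below this - node
    return value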