CS540 Midterm


[Uninformed Search] Closed world assumption

(Fully observable assumption) means that all necessary information about a problem domain is accessible so that each state is a complete description of the world; there is no missing information at any point in time

[Game Playing] Quiescence search

(To fix horizon effect) -When SBE value is frequently changing, looking deeper than the depth-limit -Look for point when game "quiets down" -ie always expand any forced sequences

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] Setting parameters

-Most learning algorithms require setting various parameters -They must be set without looking at the Test data -Common approach: use a Tuning Set

[Uninformed Search] Depth-first search

-Stack (LIFO) used for the Frontier -Remove from front, add to front -Expand the deepest node first: 1. Select a direction, go deep to the end 2. Slightly change the end 3. Slightly change the end some more... -May not terminate without a depth bound -Not complete, with or without cycle detection, and with or without a depth cutoff -Not optimal -Can find long solutions quickly if lucky -Time complexity: O(b^d) -Space complexity: O(bd)
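The frontier discipline above can be sketched in Python (a minimal sketch; the adjacency-dict graph, node names, and depth bound are illustrative assumptions):

```python
# Depth-first search: the Frontier is a stack (LIFO), so the deepest
# node is expanded first. A depth bound guards against non-termination.
def dfs(graph, start, goal, depth_bound=50):
    frontier = [[start]]           # each entry is a partial solution path
    while frontier:
        path = frontier.pop()      # LIFO: take the most recently added path
        node = path[-1]
        if node == goal:
            return path
        if len(path) <= depth_bound:
            for succ in graph.get(node, []):
                frontier.append(path + [succ])
    return None                    # no solution within the depth bound

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': []}
print(dfs(graph, 'A', 'G'))  # ['A', 'C', 'G']
```

Because the stack pops the most recently pushed path, the search commits to one branch before backtracking.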

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] k-fold cross validation

1. Divide all examples into K disjoint subsets E = E1,...,Ek 2. For each i = 1...K -let TEST set = Ei and TRAIN set = E - Ei -Build decision tree using TRAIN set -Determine accuracy Acci using TEST set 3. Compute K-Fold Cross Validation estimate of performance = (Acc1 +...+ Acck)/K
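The three steps can be sketched as follows (a hedged sketch: `train_fn` and `accuracy_fn` are hypothetical placeholders for "build decision tree using TRAIN" and "determine accuracy using TEST"):

```python
# K-fold cross-validation sketch following the steps above.
def k_fold_cv(examples, k, train_fn, accuracy_fn):
    folds = [examples[i::k] for i in range(k)]       # K disjoint subsets E1..Ek
    accs = []
    for i in range(k):
        test = folds[i]                              # TEST set = Ei
        train = [e for j in range(k) if j != i for e in folds[j]]  # TRAIN = E - Ei
        accs.append(accuracy_fn(train_fn(train), test))
    return sum(accs) / k                             # (Acc1 + ... + AccK) / K
```

Any learning routine can be plugged in for `train_fn`; the estimate is the mean of the K per-fold accuracies.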

[Local Search] Hill-climbing algorithm

1. Pick initial state s 2. Pick t in neighbors(s) with the largest f(t) 3. If f(t) <= f(s), stop and return s 4. s = t, go to step 2 Simple Greedy Stops at a LOCAL maximum HC exploits the neighborhood: like Greedy Best-First Search, it chooses what looks best locally, and doesn't allow backtracking or jumping to an alternative path since there is no Frontier list Space efficient: similar to beam search with a beam width of 1 (Frontier size of 1) HC is very fast and often effective in practice Like climbing in fog
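Steps 1-4 can be sketched directly (a minimal sketch; the objective `f` and the `neighbors` move generator are illustrative assumptions):

```python
# Hill-climbing: repeatedly move to the best neighbor; stop when no
# neighbor improves on the current state (a local maximum).
def hill_climb(s, f, neighbors):
    while True:
        t = max(neighbors(s), key=f)   # neighbor with the largest f(t)
        if f(t) <= f(s):               # no uphill move: stop, return s
            return s
        s = t

# Toy example: maximize f(x) = -(x - 3)^2 over integer states.
f = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, f, neighbors))  # 3
```

Note there is no frontier and no backtracking: each iteration keeps exactly one state.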

[Local Search] Stochastic hill-climbing (simulated annealing) algorithm

1. Pick initial state, s 2. Randomly pick state t from neighbors of s 3. If f(t) is better than f(s), then s = t; otherwise, with small probability, s = t 4. Go to step 2 until some stopping criterion is met Pseudocode: pick initial state s; k = 0; while k < kmax { T = temperature(k); randomly pick state t from neighbors of s; if f(t) > f(s) then s = t; else if e^((f(t) - f(s))/T) > random() then s = t; k = k + 1 } return s
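The loop above can be sketched as follows (a hedged sketch: the geometric cooling schedule temperature(k) = 100 * 0.99^k and the toy objective are illustrative assumptions, not part of the algorithm):

```python
import math
import random

# Simulated annealing: always accept uphill moves; accept downhill
# moves with probability e^(dE/T), which shrinks as T cools.
def simulated_annealing(s, f, neighbors, kmax=10000, seed=0):
    rng = random.Random(seed)
    for k in range(kmax):
        T = 100.0 * (0.99 ** k)        # temperature(k): assumed cooling schedule
        t = rng.choice(neighbors(s))   # randomly pick a neighbor
        dE = f(t) - f(s)
        if dE > 0 or rng.random() < math.exp(dE / T):
            s = t
    return s

f = lambda x: -(x - 3) ** 2
print(simulated_annealing(0, f, lambda x: [x - 1, x + 1]))
```

Early on (large T) the walk is nearly random; late (small T) it behaves like hill-climbing.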

[Game Playing] Static evaluation function

A Static Board Evaluation (SBE) function is used to estimate how good the current board configuration is for the computer -It reflects the computer's chances of winning from that node -It must be easy to calculate from a board configuration Typically, one subtracts how good it is for the opponent from how good it is for the computer If the SBE gives X for a player, then it gives -X for the opponent SBE should agree with the Utility function when calculated at terminal nodes

[Informed Search] Consistent heuristic

A heuristic is consistent (aka monotonic) if, for every node n and every successor n' of n, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n': c(n, n') ≥ h(n) - h(n'), or equivalently h(n) ≤ c(n, n') + h(n') Triangle inequality for heuristics Implies values of f along any path are nondecreasing When a node is expanded by A*, the optimal path to that node has been found Consistency is a STRONGER condition than admissibility

[Uninformed Search] Solution path

A sequence of actions associated with a path in the state space from a start to a goal node The cost of a solution path is the sum of the arc costs on the solution path

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Training set

A training set is a collection of independent and identically distributed (sampled independently from the same unknown distribution) examples x1,...,xn which is the input to the learning process A training set is the "experience" given to a learning algorithm What the algorithm can learn from it varies

[Game Playing] Cutoff

Alpha Cutoff: -At each MIN node, keep track of the minimum value returned so far from its visited children -Store this value as v -Each time v is updated (at a MIN node), check its value against the α value of all its MAX node ancestors -If v ≤ α for some MAX node ancestor, don't visit any more of the current MIN node's children Beta Cutoff: -At each MAX node, keep track of the maximum value returned so far from its visited children -Store this value as v -Each time v is updated (at a MAX node), check its value against the β value of all its MIN node ancestors -If v ≥ β for some MIN node ancestor, don't visit any more of the current MAX node's children

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Unsupervised learning problem

An example or instance, x, represents a specific object x often represented by a D-dimensional feature vector x = (x1,...,xD) ∈ R^D Continuous or discrete x is a point in the D-dimensional feature space Abstraction of the object: ignores all other aspects (e.g. two people having the same weight and height may be considered identical)

[Local Search] Operators

An operator/action is needed to transform one solution to another

[Game Playing] Minimax principle

Assume both players play optimally -Assuming there are two moves until the terminal states, -High utility values favor the computer, computer should choose maximizing moves -Low utility values favor the opponent, smart opponent chooses minimizing moves The computer assumes after it moves the opponent will choose the minimizing move The computer chooses the best move considering both its move and the opponent's optimal move

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Dendrogram

Binary tree that results from HAC

[Game Playing] Branching factor

Branching factor: b possible moves at each step

[Uninformed Search] Bidirectional search

Breadth-first search from both start and goal Stop when the frontiers meet Generates O(b^(d/2)) nodes Complete Optimal under the same conditions as BFS (e.g., all operators have the same constant cost) Time and space O(b^(d/2))

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Hierarchical agglomerative clustering algorithm

Build a binary tree over the dataset by repeatedly merging the two closest clusters Measure distance with Euclidean distance (Cheat sheet)

[Game Playing] Alpha-beta pruning algorithm

CHEAT SHEET

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] k-means clustering algorithm

CHEAT SHEET

[Game Playing] Monte Carlo tree search

Concentrate search on most promising moves Best-first search based on random sampling of search space Monte Carlo methods are a broad class of algorithms that rely on repeated random sampling to obtain numerical results. They can be used to solve problems having a probabilistic interpretation For each possible legal move of current player, simulate k random games by selecting moves at random for both players until game over (called playouts); count how many were wins out of each k playouts; move with most wins is selected Stochastic simulation of game Game must have finite number of possible moves, and game length is finite

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] bagging ensemble learning

Create classifiers using different training sets, where each training set is created by bootstrapping, ie, drawing examples (with replacement) from all possible training examples Given N training examples, generate separate training sets by choosing n examples with replacement from all N examples -Called taking a bootstrap sample or randomizing the training set -Construct a classifier using the n examples in the current training set -Repeat for multiple classifiers
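Taking a bootstrap sample can be sketched in a few lines (a minimal sketch; drawing n = N examples and the toy dataset are illustrative assumptions):

```python
import random

# Bootstrap sampling: draw n examples with replacement from the
# N available training examples.
def bootstrap_sample(examples, n, rng):
    return [rng.choice(examples) for _ in range(n)]

rng = random.Random(0)
data = list(range(10))
# One randomized training set per classifier in the ensemble.
training_sets = [bootstrap_sample(data, len(data), rng) for _ in range(3)]
print(len(training_sets), len(training_sets[0]))  # 3 10
```

Because sampling is with replacement, each training set typically contains duplicates and omits some of the original examples.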

[Uninformed Search] Chronological backtracking

DFS performs "chronological backtracking": when search hits a dead end, it backs up one level at a time. This is problematic if the mistake occurred because of a bad action choice near the top of the search tree

[Game Playing] Iterative-deepening with alpha-beta

Dealing with Limited Time -Run alpha-beta search with DFS and an increasing depth-limit -When time runs out, use the solution found for the last completed alpha-beta search (ie, the deepest search that was completed) -"anytime algorithm"

[Game Playing] Ply

One level of depth in the game tree (one move by one player)

[Game Playing] Deterministic vs. stochastic games

Deterministic: No coin flips, die rolls—no chance Stochastic: Chance is involved

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Class

Discrete labels: classes (e.g. M/F or adult/juvenile), often encoded as 0,1 or -1,1 -Multiple classes encoded as 1,2,3,...,C; no class order is implied Continuous label: e.g. blood pressure

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Feature space

Each example can be interpreted as a point in a D-dimensional feature space, where D is the number of features/attributes

[Uninformed Search] Partial solution path

Each node implicitly represents a partial solution path (and its cost) from the start node to the given node. From this node there may be many possible paths that have this partial path as a prefix, and many possible solutions

[Game Playing] Perfect information games

Each player can see the complete game state. No simultaneous decisions.

[Game Playing] Best case and worst case of alpha-beta vs. minimax

Effectiveness (ie amount of pruning) depends on the order in which successors are examined Worst Case: -Ordered so that no pruning takes place -No improvement over exhaustive search Best Case: -Each player's best move is visited first In practice, performance is much closer to best case In practice we often get O(b^(d/2)) rather than O(b^d), -ex. Deep Blue went from b ~ 35 to b ~ 6, visiting 1 billionth the number of nodes visited by the Minimax algorithm -Permits much deeper search with the same tree -Makes computer chess competitive with humans

[Local Search] Local search problem formulation

Every node is a solution -Operators/actions go from one solution to another -Can stop at any time and have a valid solution -Goal of search is to find a better/best solution

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Labels

A label y is the desired prediction for an instance x Examples: predict gender from weight and height, or predict adult/juvenile from weight and height

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Feature

Feature Vector Representation Preprocess raw data - extract a feature (attribute) vector x, that describes all attributes relevant for an object Each x is a list of (attribute, value) pairs -Number of attributes is fixed -Number of possible values for each attribute is fixed if discrete Types of Features Numerical feature has discrete or continuous values that are measurements Categorical feature is one that has two or more values, but there is no intrinsic ordering of the values Ordinal feature is similar to a categorical feature but there is a clear ordering of the values

[Game Playing] Minimax algorithm

For each move by the computer: 1. Perform depth-first search, stopping at terminal states 2. Evaluate each terminal state 3. Propagate upwards the minimax values: If opponent's move, propagate up minimum value of its children If computer's move, propagate up maximum value of its children 4. Choose move at root with the maximum of the minimax values of its children Space complexity: DFS, so O(bd) Time complexity: Branching factor b, so O(b^d) Time complexity is a major problem since computer typically only has a limited amount of time to make a move

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] random forests

For each tree, 1. Choose a training set by choosing n times with replacement from all N available training examples 2. At each node of decision tree during construction, choose a random subset of m attributes from the total number, M, of possible attributes (m << M) 3. Select best attribute at node using max-gain No tree pruning Doesn't overfit

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] leave-one-out cross validation

For i=1 to N (number of examples) do: 1. Let (xi,yi) be the ith example 2. Remove (xi,yi) from the dataset 3. Train on the remaining N-1 examples 4. Compute accuracy on ith example Accuracy = mean accuracy on all N runs Doesn't waste data but is expensive Use for small datasets < ~100
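The loop can be sketched as follows (a hedged sketch: `train_fn` and `accuracy_fn` are hypothetical placeholders, as in the steps above):

```python
# Leave-one-out cross-validation: N runs, each holding out one example.
def loocv(examples, train_fn, accuracy_fn):
    total = 0.0
    for i in range(len(examples)):
        held_out = examples[i]                    # (xi, yi)
        rest = examples[:i] + examples[i + 1:]    # train on N-1 examples
        total += accuracy_fn(train_fn(rest), held_out)
    return total / len(examples)                  # mean accuracy over N runs
```

This trains N models, which is why the method is expensive and reserved for small datasets.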

[Uninformed Search] State-space graph search formulation

G = (V,E) Each node is a data structure that contains: -A state description -Other information such as link to parent node, name of action that generated this node, other bookkeeping data Each arc corresponds to one of the finite number of actions: -When the action is applied to the state associated with the arc's source node -The resulting state is the state associated with the arc's destination node Each arc has a fixed, positive cost corresponding to the cost of the action Each node has a finite set of successor nodes that corresponds to all of the legal actions that can be applied at the source node's state Nodes can be start nodes, goal tests determine if they're goal nodes

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Inductive learning problem

Generalize from a given set of training examples so that accurate predictions can be made about future examples Learn unknown function f(x) = y x: an input example (instance) y: the desired output, a discrete or continuous scalar value h: hypothesis, a learned function that approximates f

[Uninformed Search] Expanding a node

Generate all of the successor nodes, add them and their associated arcs to the state-space search tree

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Classification problems

Given a training set of positive and negative examples of a concept, construct a description that accurately classifies whether future examples are positive or negative

[Uninformed Search] Operators

Given an action (aka operator) and a description of the current state of the world, the action completely specifies: -Whether the action can be applied -The exact state of the world after the action is performed in the current state Actions are atomic, discrete, and indivisible, therefore instantaneous

[Informed Search] Devising heuristics

Heuristics are often defined by relaxing the problem, ie computing the exact cost of a solution to a simplified version of the problem Ideally, we want an admissible heuristic that is as close to the actual cost as possible without going over; it must also be fast to compute Trade off: Use more time to compute a complex heuristic versus use more time to expand more nodes with a simpler heuristic A* often suffers because it cannot venture down a single path unless it is almost continuously having success (ie h is decreasing); any failure to decrease h will almost immediately cause the search to switch to another path

[Uninformed Search] Completeness

If a solution exists, will it be found? A complete algorithm will find a solution if one exists (though not necessarily all solutions)

[Uninformed Search] Optimality/Admissibility

If a solution is found, is it guaranteed to be optimal? An admissible algorithm will find a solution with minimum cost

[Uninformed Search] Detecting repeated states, explored/closed set

If state space is NOT a tree, we have to remember already-expanded states too (called Explored aka Closed set) When we pick a node from Frontier -Remove it from Frontier -Add it to Explored -Expand node, generating all successors -For each successor, child --If child is in Explored or Frontier, throw child away --Otherwise, add it to Frontier

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] Overfitting problem

In general, overfitting means finding "meaningless" regularity in training data Especially a problem with "noisy" data -Class associated with example is wrong -Attribute values are incorrect because of errors preprocessing the data Irrelevant attributes

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Inductive bias

Inductive learning is an inherently conjectural process Inductive inference is falsity preserving, not truth preserving Learning can be viewed as searching the hypothesis space H of possible h functions Inductive Bias -Used when one h is chosen over another -Is needed to generalize beyond the specific training examples Biases commonly used in ML: -Restricted Hypothesis Space Bias: Allow only certain types of h's, not arbitrary ones -Preference Bias: Define a metric for comparing h's so as to determine whether one is better than another

[Local Search] Boltzmann's equation

Let performance change ∆E = f(newNode) - f(currentNode) Always accept an ascending step (∆E ≥ 0) A descending step is accepted only if it passes the following test (Boltzmann's equation): p = e^(∆E/T) Idea: p decreases as the neighbor gets worse As ∆E → -∞, p → 0 ie as a move gets worse, the probability of taking it decreases exponentially As T → 0, p → 0 ie as "temperature" T decreases, the probability of taking a bad move decreases |∆E| << T If the badness of the move is small compared to T, the move is likely to be accepted |∆E| >> T If the badness of the move is large compared to T, the move is unlikely to be accepted
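The acceptance test can be written as a one-line function (a minimal sketch; the sample ∆E and T values are illustrative):

```python
import math

# Boltzmann acceptance test: uphill steps are always accepted;
# downhill steps are accepted with probability p = e^(dE/T).
def accept_prob(dE, T):
    return 1.0 if dE >= 0 else math.exp(dE / T)

print(accept_prob(-1.0, 10.0))   # ~0.905: |dE| << T, likely accepted
print(accept_prob(-10.0, 1.0))   # ~0.00005: |dE| >> T, unlikely accepted
```

The two printed cases illustrate the |∆E| << T and |∆E| >> T regimes described above.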

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] Max-gain

Max information gain corresponds to best attribute/threshold pair

[Game Playing] Zero-sum games

One player's gain is the other player's loss. Does not mean fair.

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Cluster center

In the k-means clustering algorithm, each cluster has a center that is repeatedly recalculated: centers are first chosen randomly, then each becomes the mean of all points assigned to its cluster

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] Pruning

Pruning with a Tuning Set 1. Randomly split the training data into TRAIN and TUNE, say 70% and 30% 2. Build a full tree using only the TRAIN set 3. Prune the tree using the TUNE set (cheat sheet hw problem)

[Uninformed Search] Breadth-first search

Queue (FIFO) used for the Frontier Remove from front, add to back Expand the shallowest node first: 1. Examine states one step away from the initial states 2. Examine states two steps away from the initial states 3. and so on Complete Optimal if all operators have same constant cost, or costs are positive, non-decreasing with depth. Otherwise not optimal but does guarantee finding solution of shortest length (fewest arcs) Time and space complexity O(b^d), d is the depth of the solution, b is the branching factor at each non-leaf node A complete search tree has (b^(d+1) - 1)/(b-1) nodes
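The queue discipline above can be sketched in Python (a minimal sketch; the adjacency-dict graph and node names are illustrative assumptions):

```python
from collections import deque

# Breadth-first search: the Frontier is a FIFO queue (remove from
# front, add to back), so the shallowest node is expanded first.
def bfs(graph, start, goal):
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()              # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in explored:
                explored.add(succ)
                frontier.append(path + [succ])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': []}
print(bfs(graph, 'A', 'G'))  # ['A', 'C', 'G']
```

Because nodes are expanded level by level, the returned solution has the fewest arcs.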

[Game Playing] Game playing as search

Representing board games as a search problem: -States: board configurations -Actions: legal moves -Initial State: Starting board configuration -Goal state: Terminal board configuration

[Uninformed Search] Iterative-deepening search

Requires modification to the DFS search algorithm: -Do DFS to depth 1 and treat all children of the start node as leaves -If no solution is found, do DFS to depth 2 -Repeat by increasing the "depth bound" until a solution is found Start node is at depth 0 Has advantages of BFS -Completeness -Optimality as stated for BFS Has advantages of DFS -Limited space -In practice, even with redundant effort it still finds long paths more quickly than BFS Space complexity O(bd) Time complexity a little worse than BFS or DFS because nodes near the top of the search tree are generated multiple times (redundant effort) Time complexity O(b^d) Trades a little time for a huge reduction in space -lets you do BFS with (more space efficient) DFS Anytime algorithm: good for response-time critical applications like games
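The increasing depth bound can be sketched as depth-limited DFS in a loop (a minimal sketch; the adjacency-dict graph and the max_depth cap are illustrative assumptions):

```python
# Depth-limited DFS: recurse only while the remaining limit is positive.
def depth_limited(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None
    for succ in graph.get(node, []):
        found = depth_limited(graph, succ, goal, limit - 1, path + [succ])
        if found is not None:
            return found
    return None

# Iterative deepening: DFS to depth 1, then 2, ... until a solution appears.
def ids(graph, start, goal, max_depth=50):
    for bound in range(max_depth + 1):
        found = depth_limited(graph, start, goal, bound, [start])
        if found is not None:
            return found
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G']}
print(ids(graph, 'A', 'G'))  # ['A', 'C', 'G']
```

Nodes near the root are regenerated at every bound, which is the redundant effort the card mentions.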

[Uninformed Search] Search tree

Search process constructs a tree: -Root is the start state -Leaf nodes are unexpanded nodes (in the Frontier list), "dead ends" (nodes that aren't goals and have no successors since no operators were applicable), or the goal node (the last leaf node found) Loops in the graph may cause the search tree to be infinite even if the state space is small

[Local Search] Hill-climbing with random restarts

Solution found by HC is totally determined by the starting point; it can get stuck at a local maximum, plateaus, ridges...the global max may not be found Very simple modification: 1. When stuck, pick a random new starting state and re-run hill-climbing from there 2. Repeat this k times 3. Return the best of the k local optima found Can be very effective Should be tried whenever hill-climbing is used Fast, easy to implement; works well for many applications where the solution space surface is not too "bumpy" (ie, not too many local maxima)

[Game Playing] Representing non-deterministic games

Some games involve chance How can we handle games with random elements? Modify the game search tree to include CHANCE NODES: 1. Computer moves 2. Chance nodes (representing random events) 3. Opponent moves

[Game Playing] Alpha-beta pruning

Some of the branches of the game tree won't be taken if playing against an intelligent opponent Pruning can be used to ignore some branches While doing DFS of game tree, keep track of: -At maximizing levels: Highest SBE value, v, seen so far in subtree below each node Lower bound on node's final minimax value -At minimizing levels: Lowest SBE value, v, seen so far in subtree below each node Upper bound on node's final minimax value Also keep track of: α = best already explored option along the path to the root for MAX, including the current node β = best already explored option along the path to the root for MIN, including the current node

[Game Playing] Horizon effect

Sometimes disaster lurks just beyond the search depth (ie computer captures queen, but a few moves later the opponent checkmates) The computer has a limited horizon, it cannot see that this significant event could happen

[Informed Search] Best-first search, evaluation function

Sort nodes in the frontier list by increasing values of an evaluation function f(n) that incorporates domain-specific information Generic way of referring to the class of informed-search methods

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Distortion cluster quality

Sum of squared distances from each point xi to the center Ck of its cluster: ∑_{i=1...n} (xi - Ck)^2 Smaller value corresponds to tighter clusters Other metrics can also be used
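The sum can be computed directly (a minimal sketch; the point/center/assignment representation is an illustrative assumption):

```python
# Distortion: sum of squared Euclidean distances from each point
# to the center of the cluster it is assigned to.
def distortion(points, centers, assignment):
    return sum(
        sum((p - c) ** 2 for p, c in zip(points[i], centers[assignment[i]]))
        for i in range(len(points))
    )

points = [(0, 0), (1, 0), (10, 0)]
centers = [(0.5, 0.0), (10.0, 0.0)]
assignment = [0, 0, 1]          # first two points in cluster 0, last in cluster 1
print(distortion(points, centers, assignment))  # 0.5
```

Tightening the clusters (moving centers toward their points) can only lower this value, which is why k-means monitors it.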

[Local Search] Cooling schedule

T, the annealing "temperature," is the parameter that controls the probability of taking bad steps We gradually reduce the temperature, T(k) At each temperature, the search is allowed to proceed for a certain number of steps, L(k) The choice of parameters {T(k),L(k)} is called the cooling schedule

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Average linkage

The average distance between all pairs of members, one from each cluster

[Uninformed Search] Frontier/open list

The generated, but not yet expanded states define the Frontier (aka Open) set. The difference between search strategies: Which state in the Frontier to expand next?

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Complete linkage

The largest distance from any member of one cluster to any member of the other cluster

[Game Playing] Search tree

The new aspect to the search problem is that there is an opponent we cannot control

[Unsupervised Learning: Hierarchical Agglomerative Clustering and K-Means Clustering] Single linkage

The shortest distance from any member of one cluster to any member of the other cluster

[Local Search] Neighborhood/Move Set

The solutions that can be reached with one application of an operator are in the current solution's neighborhood (aka move set) Local search considers next only those solutions in the neighborhood The neighborhood should be much smaller than the size of the search space (otherwise the search degenerates)

[Uninformed Search] Time and space complexity

Time Complexity: How long does it take to find a solution? Usually measured for worst case. Measured by counting number of nodes expanded, including goal node if found. Space Complexity: How much space is used by the algorithm? Measured in terms of the maximum size of the Frontier during the search

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] Ockham's Razor

Type of Preference Bias The simplest hypothesis that is consistent with all observations is most likely The smallest decision tree that correctly classifies all of the training examples is best

[Uninformed Search] Uniform-cost search

Use a "Priority Queue" to order nodes on the Frontier list, sorted by path cost Let g(n) = cost of path from start node s to current node n Sort nodes by increasing value of g Dijkstra's Algorithm Complete Optimal -Requires that the goal test is done when a node is REMOVED from the Frontier rather than when the node is generated by its parent node Time and space complexity O(b^d)
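The priority-queue loop can be sketched as follows (a minimal sketch; the graph mapping each node to (successor, step cost) pairs is an illustrative assumption):

```python
import heapq

# Uniform-cost search: Frontier is a priority queue ordered by g(n).
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]
    best_g = {}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                 # goal test on REMOVAL gives optimality
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                     # already expanded via a cheaper path
        best_g[node] = g
        for succ, cost in graph.get(node, []):
            heapq.heappush(frontier, (g + cost, succ, path + [succ]))
    return None

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 1)]}
print(ucs(graph, 'S', 'G'))  # (4, ['S', 'A', 'B', 'G'])
```

Note the direct edge A→G (cost 7 total) is on the queue when G is first removed at cost 4, which is why testing at removal matters.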

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] Tuning set

Use a Tuning Set for setting parameters: 1. Partition the given examples into Train, Tune, and Test sets 2. For each candidate parameter value, generate a decision tree using the TRAIN set 3. Use the TUNE set to evaluate error rates and determine which parameter value is best 4. Compute the final decision tree using the selected parameter values and both Train and Tune sets 5. Use Test to compute performance accuracy

[Informed Search] Algorithm A

Use as an evaluation function f(n) = g(n) + h(n), where g(n) is minimum cost path from start to current node n (as defined in UCS) The g term adds a "breadth-first-component" to the evaluation function Nodes in Frontier are ranked by the estimated cost of a solution, where g(n) is the cost from the start node to node n, and h(n) is the estimated cost from node n to a goal Not Optimal

[Informed Search] Greedy best-first search

Use as an evaluation function, f(n) = h(n), sorting nodes in the Frontier by increasing values of f Selects the node to expand that is believed to be closest (ie smallest f value) to a goal node Not complete Not optimal

[Informed Search] Algorithm A*

Use the same evaluation function used by Algorithm A, except add the constraint that for all nodes n in the search space, h(n) ≤ h*(n), where h*(n) is the actual cost of the minimum cost path from n to a goal The cost to the nearest goal is never over-estimated Complete Optimal A* should terminate only when a goal is REMOVED from the priority queue, same rule as for UCS; A* with h(n) = 0 is UCS One more complication: A* might revisit a state (in Frontier or Explored) and discover a better path. Solution: put the node back in the priority queue, using the smaller g value (and path) Can use lots of memory: O(number of states). For really big search spaces, A* will run out of memory. Solution: Iterative-Deepening A* (IDA*)
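A* differs from UCS only in sorting by f(n) = g(n) + h(n); a minimal sketch (the graph and the heuristic table, assumed admissible, are illustrative):

```python
import heapq

# A* search: priority queue ordered by f(n) = g(n) + h(n),
# goal test performed when a node is REMOVED from the queue.
def astar(graph, h, start, goal):
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                     # re-added entry with a worse g
        best_g[node] = g
        for succ, cost in graph.get(node, []):
            heapq.heappush(frontier,
                           (g + cost + h(succ), g + cost, succ, path + [succ]))
    return None

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 1)]}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}    # assumed admissible: h(n) <= h*(n)
print(astar(graph, lambda n: h[n], 'S', 'G'))  # (4, ['S', 'A', 'B', 'G'])
```

Setting every h value to 0 in this sketch reduces it to UCS, matching the card above.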

[Informed Search] Beam search

Uses an evaluation function f(n) = h(n), as in Greedy Best-First Search, but restricts the maximum size of the Frontier to a constant k: only the k best nodes are kept as candidates for expansion, and the rest are thrown away More space efficient than Greedy Best-First Search, but may throw away a node on a solution path Not complete Not optimal

[Game Playing] Chance nodes

Weight score by the probability that move occurs Use expected value for move: Instead of using max or min, compute the average, weighting by the probabilities of each child

[Informed Search] Admissible heuristic

When h(n) ≤ h*(n) holds for all n, h is called an admissible heuristic function An admissible heuristic guarantees that a node on the optimal path cannot look so bad that it is never considered Examples of admissible heuristics: h(n) = h*(n); h(n) = min(2, h*(n)) Examples of non-admissible heuristics: h(n) = max(2, h*(n)); h(n) = h*(n) - 2 (possibly negative); h(n) = √h*(n) (if h*(n) < 1)

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] Decision tree algorithm

cheat sheet

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] Information gain

cheat sheet

[Supervised Learning: K-Nearest-Neighbors and Decision Trees] K-Nearest-Neighbor algorithm

cheat sheet

[Game Playing] Expectiminimax value

expectiminimax(n) = -SBE(n), if n is a terminal state or a state at the cutoff depth -max over successors s of expectiminimax(s), if n is a MAX node -min over successors s of expectiminimax(s), if n is a MIN node -∑ over successors s of P(s) * expectiminimax(s), if n is a CHANCE node
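The four cases map directly onto a recursion (a minimal sketch; representing nodes as ('max'|'min'|'chance', children) tuples, with chance children as (probability, subtree) pairs and SBE values at the leaves, is an illustrative assumption):

```python
# Expectiminimax: max/min at player nodes, probability-weighted
# average at chance nodes, SBE value at the leaves.
def expectiminimax(node):
    if not isinstance(node, tuple):                       # leaf: SBE value
        return node
    kind, children = node
    if kind == 'max':
        return max(expectiminimax(c) for c in children)
    if kind == 'min':
        return min(expectiminimax(c) for c in children)
    return sum(p * expectiminimax(c) for p, c in children)  # chance node

tree = ('max', [('chance', [(0.5, 2), (0.5, 6)]),    # expected value 4.0
                ('chance', [(0.9, 1), (0.1, 11)])])  # expected value 2.0
print(expectiminimax(tree))  # 4.0
```

The second chance node has a larger best case (11) but a worse expected value, so MAX prefers the first move.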

[Informed Search] Heuristic function

h(n) -Uses domain-specific information in some way -Is easily computable from the current state description -Estimates the "goodness" of node n, how close node n is to a goal, the cost of minimal cost path from node n to a goal state ≥ 0 for all nodes n close to 0 means we think n is close to a goal state

[Informed Search] Better informed heuristic

h(n) = h*(n): ONLY nodes on an optimal solution path are expanded; no unnecessary work is performed h(n) = 0: the heuristic is admissible, and A* performs the same as UCS The closer h is to h*, the fewer extra nodes will be expanded If h1(n) ≤ h2(n) ≤ h*(n) for all n, then h2 dominates h1 and is the better heuristic, since A* with h1 expands at least as many nodes as A* with h2; A* with h2 is said to be better informed

