AI Exam 1

pros of behavior trees

- basic components reusable
- AI can be goal-directed and autonomous
- AI can respond to events
- knowledge base is easy to read, maintain, and debug

pros of genetic algorithms

- fast, low memory usage
- can explore very large search spaces
- easy to design

cons of behavior trees

- hard to produce a minimal set of tasks, conditions, decorators
- slow to react to changes in strategy
- not a full search of the search space

learning agent

- learning element: responsible for making improvements
- performance element: responsible for selecting actions
- critic: gives feedback to the learning element on how the agent is doing and determines how the performance element should be modified to do better in the future
- problem generator: responsible for suggesting actions that will lead to new and informative experiences

cons of genetic algorithms

- randomized--not optimal or complete
- can get stuck on local maxima
- can be hard to design a chromosome (a hypothesis)

constraints

- specify allowable combinations of values
- can be explicit (explicitly enumerated) or implicit (a formula)
- can be unary, binary, or global (e.g. Alldiff)

genetic algorithms steps

1. start with a random population
2. apply the fitness function to identify the fittest individuals, implementing a natural selection process
3. keep the n hypotheses at each step that have a high value of the fitness function
4. possibly cull the least fit individuals and remove them
5. apply successor operation(s) to generate a new population (mutation fringe operation: given a candidate, return a slightly different one; crossover fringe operation: given two candidates, produce one that has elements of each)
6. apply the solution test to the best candidate
7. repeat (see the sketch below)
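A minimal Python sketch of these steps, assuming a bit-string chromosome; the names POP_SIZE, CHROM_LEN, MUTATION_RATE, and the placeholder fitness function are illustrative, not from the notes:

```python
import random

POP_SIZE, CHROM_LEN, MUTATION_RATE = 50, 20, 0.05

def fitness(chrom):
    # Placeholder fitness: number of 1-bits (replace with the real objective).
    return sum(chrom)

def crossover(a, b):
    # Crossover fringe operation: child takes a prefix of a and a suffix of b.
    point = random.randint(1, CHROM_LEN - 1)
    return a[:point] + b[point:]

def mutate(chrom):
    # Mutation fringe operation: flip each bit with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in chrom]

def genetic_algorithm(generations=100):
    # 1. Start with a random population.
    population = [[random.randint(0, 1) for _ in range(CHROM_LEN)] for _ in range(POP_SIZE)]
    for _ in range(generations):
        # 2-4. Rank by fitness, keep the fittest half, cull the rest.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        # 5. Apply crossover and mutation to generate a new population.
        population = [mutate(crossover(*random.sample(parents, 2))) for _ in range(POP_SIZE)]
        # 6. Apply the solution test to the best candidate (here: all bits set).
        best = max(population, key=fitness)
        if fitness(best) == CHROM_LEN:
            return best
    return max(population, key=fitness)    # 7. Otherwise repeat until the budget runs out.
```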

arc consistency

a form of constraint propagation that prunes illegal assignments before they happen: a variable X is arc-consistent with a variable Y if every value in X's domain satisfies the binary constraint between X and Y; enforced (e.g. by the AC-3 algorithm) as preprocessing or inside backtracking search
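A rough sketch of this propagation in the style of AC-3, assuming domains maps variables to lists of values, neighbors maps each variable to the variables it shares a constraint with, and constraint is a hypothetical compatibility predicate:

```python
from collections import deque

def ac3(domains, neighbors, constraint):
    """constraint(x, vx, y, vy) is assumed to return True when vx for x
    is compatible with vy for y."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # Remove values of x that have no supporting value in y's domain.
        removed = [vx for vx in domains[x]
                   if not any(constraint(x, vx, y, vy) for vy in domains[y])]
        if removed:
            domains[x] = [v for v in domains[x] if v not in removed]
            if not domains[x]:
                return False              # a domain was wiped out: no solution
            for z in neighbors[x]:
                if z != y:
                    queue.append((z, x))  # re-check arcs pointing at x
    return True
```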

conflict sets

for each variable, the set of previously assigned variables (with their values) that conflict with some value of that variable; used by backjumping to decide how far to jump back

factored representation

a state consists of a vector of attribute values; allows uncertainty to be represented

structured representation

a state includes objects, each of which can have attributes of its own and relationships to other objects

atomic representation

a state is a black box with no internal structure

uniform-cost search

a type of uninformed search
- node expansion: expand the cheapest node first
- implementation: use a priority queue, where the priority is the cumulative cost of a node
- completeness: yes, if the cost of every step exceeds some small positive constant
- optimality: yes, finds the single best path between the start node and the goal
- complexity: see notes
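A sketch of this expansion order in Python; the goal_test and successors interfaces are assumed, with successors(state) yielding (next_state, step_cost) pairs:

```python
import heapq, itertools

def uniform_cost_search(start, goal_test, successors):
    counter = itertools.count()                  # tie-breaker so heapq never compares states
    frontier = [(0, next(counter), start, [start])]
    explored = set()
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)   # cheapest cumulative cost first
        if goal_test(state):
            return path, cost
        if state in explored:
            continue
        explored.add(state)
        for nxt, step in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step, next(counter), nxt, path + [nxt]))
    return None                                  # frontier exhausted: no solution
```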

minimum remaining values (MRV)

a variable-ordering heuristic that helps prune illegal assignments early; choose the variable with the fewest legal values remaining in its domain
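A one-function illustration, assuming domains maps each unassigned variable to its remaining legal values:

```python
def select_mrv_variable(unassigned, domains):
    # Most constrained variable: fewest legal values remaining in its domain.
    return min(unassigned, key=lambda var: len(domains[var]))
```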

degree heuristics

a variable-ordering heuristic, often used to break MRV ties; choose the variable involved in the largest number of constraints on the remaining unassigned variables

search tree

a what-if tree of plans and all their possible outcomes, where the root is the start state and children correspond to successors; nodes are labeled with states but correspond to plans for reaching those states (each node in the search tree is an entire path in the state space graph)

local search algorithms

aim to solve CSPs and optimization problems more efficiently by keeping only a current state and moving between neighboring states; can find solutions to problems whose search space is large or infinite

percept

an agent's perceptual inputs at any given moment

reactive planning

a class of algorithms for action selection; they differ from classical planning algorithms in that they are time-bound and compute only one action at each instant, based on the current context

hypergraph

constraints represented with a graph in which nodes are variables and each hyperedge connects all the variables participating in one constraint; needed when constraints involve more than two variables

unplanning agents

don't consider the future consequences of their actions; see the world as a single current percept and choose actions based only on that percept

conflict-directed backjumping

each variable has a conflict set; when a variable has no value left to try, the search jumps back to the most recent variable in its conflict set, merging conflict sets along the way, until it reaches a variable that still has a value available to try

heuristic function

estimates the cost of the cheapest path from the state at node n to the goal state; based on external knowledge; h(n) = 0 if n is the goal; examples: Manhattan distance, Euclidean distance, Chebyshev distance, unitary distance
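Illustrative implementations of these distance heuristics for grid states given as (x, y) pairs; "unitary distance" is interpreted here as 0 at the goal and 1 everywhere else:

```python
import math

def manhattan(a, b):
    # Sum of axis-aligned differences.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    # Straight-line distance.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def chebyshev(a, b):
    # Largest single-axis difference.
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def unitary(a, b):
    # h(n) = 0 at the goal, 1 everywhere else.
    return 0 if a == b else 1
```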

cutset conditioning

find a set of variables (a cycle cutset) whose removal transforms the constraint graph into a tree; instantiate the cutset variables in every consistent way and solve the remaining tree-structured CSP for each instantiation

search problem

five components:
- initial state
- possible actions
- successor function (transition model)
- goal test: determines whether a given state is a goal state, ending the search
- path cost function: assigns a numeric cost to paths

consistent

for every node n and every successor n' generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n'; i.e. h(n) ≤ c(n, a, n') + h(n')

uninformed (blind) search

has no additional information about states beyond the problem definition; in particular, no estimate of how close a state is to the goal

model-based goal-based agents

have a goal that describes situations that are desirable

model-based utility-based agents

have a utility function that describes how well a goal has been or can be achieved; choose the action that maximizes the expected utility of the action outcomes

greedy best-first search

informed search
- heuristic function: estimated distance from the state to the goal
- node expansion: expand the node that appears closest to the goal first, i.e. the node with the minimum f(n) = h(n)
- implementation: use a priority queue where the priority is f(n)
- completeness: no, can get stuck in infinite loops
- optimality: no
- complexity: O(b^m), where b is the branching factor and m is the maximum depth

A* search

informed search
- total cost function: cost of the path to reach the node (backward cost, g(n)) + estimated cost of getting from the node to the goal (forward cost, the heuristic function h(n))
- node expansion: expand the node with the lowest total cost first, i.e. the node with the minimum f(n) = g(n) + h(n), the estimated cost of the cheapest solution through node n
- implementation: priority queue, where the priority is f(n)
- completeness: yes, if the cost of each step is positive
- optimality: yes, if the heuristic function is admissible and consistent
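A sketch using the same assumed interface as the uniform-cost sketch above (successors(state) yields (next_state, step_cost) pairs) plus a heuristic h; greedy best-first search is the same loop with the priority replaced by h(n) alone:

```python
import heapq, itertools

def a_star_search(start, goal_test, successors, h):
    counter = itertools.count()
    frontier = [(h(start), 0, next(counter), start, [start])]   # (f, g, tie, state, path)
    explored = set()
    while frontier:
        _, g, _, state, path = heapq.heappop(frontier)   # lowest f(n) = g(n) + h(n) first
        if goal_test(state):
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for nxt, step in successors(state):
            if nxt not in explored:
                g2 = g + step
                heapq.heappush(frontier, (g2 + h(nxt), g2, next(counter), nxt, path + [nxt]))
    return None
```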

backjumping

jump back to the most recent conflicting assignment, identified using the idea of conflict sets, rather than simply to the most recently assigned variable

forward checking

a type of filtering for CSPs: keeps track of the domains of the unassigned variables and, whenever a variable is assigned, immediately removes conflicting values from the domains of its unassigned neighbors
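A sketch of this filtering step, reusing the hypothetical domains/neighbors/constraint interface from the arc-consistency sketch above:

```python
def forward_check(var, value, assignment, domains, neighbors, constraint):
    """After assigning value to var, prune inconsistent values from the domains
    of unassigned neighbors. Returns the pruned domains, or None if some
    neighbor's domain is wiped out (a dead end detected right away)."""
    new_domains = {v: list(d) for v, d in domains.items()}
    new_domains[var] = [value]
    for nbr in neighbors[var]:
        if nbr in assignment:
            continue
        new_domains[nbr] = [v for v in new_domains[nbr]
                            if constraint(var, value, nbr, v)]
        if not new_domains[nbr]:
            return None
    return new_domains
```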

Hill climbing algorithm

greedy local search technique; at each step, the current node is replaced by its best neighbor; easily gets stuck on local maxima, ridges, and plateaus
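A compact sketch, assuming neighbors(n) returns the neighboring states and value(n) is the objective to maximize:

```python
def hill_climbing(start, neighbors, value):
    current = start
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current          # no neighbor improves: local maximum, ridge, or plateau
        current = best              # greedy move to the best neighbor
```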

model-based reflex agents

maintain an internal state that depends on the percept history to keep track of the parts of the world they can't see now

state space graph

mathematical representation of the states of a search problem in which nodes are states and edges are actions, and each state occurs only once

planning agents

must have a model of how the world evolves in response to actions; choose actions based on hypothetical consequences of the actions

least constraining value (LCV)

once the variable is selected, choose the value that rules out the fewest choices for the neighboring variables

reflex agents

select actions based on current percepts, ignoring history

simulated annealing

local search that selects a random successor at each step; a "temperature" T decreases over time and controls how likely worse moves are to be accepted--if T decreases slowly enough, the algorithm converges; computes a gradient deltaE (the change in value): if deltaE > 0, the new state is accepted immediately as an improvement; if deltaE ≤ 0, the new state is accepted only with a probability that depends on deltaE and T (e.g. e^(deltaE/T))
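A sketch under the same assumptions as the hill-climbing example above (neighbors and value are problem-specific helpers), with a hypothetical schedule(t) giving the temperature at step t:

```python
import math, random

def simulated_annealing(start, neighbors, value, schedule, max_steps=100_000):
    current = start
    for t in range(1, max_steps):
        T = schedule(t)
        if T <= 0:
            break                                   # temperature reached zero: stop
        nxt = random.choice(neighbors(current))      # pick a random successor
        delta_e = value(nxt) - value(current)
        if delta_e > 0:
            current = nxt                            # improvement: accept immediately
        elif random.random() < math.exp(delta_e / T):
            current = nxt                            # worse move: accept with prob e^(deltaE/T)
    return current
```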

frontier

set of partial plans under consideration (all leaf nodes that are available for expansion)

state space

set of states reachable from the initial state by any sequence of actions

admissible

the heuristic function never overestimates the cost to reach the goal; i.e. 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to the nearest goal

chronological backtracking search

uninformed search based on DFS; chooses values for one variable at a time and backtracks to the most recently assigned variable when there are no legal values left to assign
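A sketch of the algorithm, assuming consistent(assignment) is a hypothetical predicate that checks no constraint is violated by a partial assignment:

```python
def backtracking_search(variables, domains, consistent, assignment=None):
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                            # every variable assigned: solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]                          # undo and try the next value
    return None                                      # no legal value left: backtrack
```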

constraint satisfaction problem

uses general-purpose rather than problem-specific heuristics to find solutions; the main idea is to eliminate large portions of the search space by identifying variable/value combinations that violate the constraints
- X: set of variables
- D: set of domains (one per variable)
- C: set of constraints
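A toy instance of the (X, D, C) formulation, here an assumed three-region map-coloring problem using the same implicit constraint interface as the CSP sketches above:

```python
# Variables, domains, and an implicit binary constraint for map coloring.
X = ["WA", "NT", "SA"]                                  # variables
D = {v: ["red", "green", "blue"] for v in X}            # domains
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}

def constraint(x, vx, y, vy):
    # Adjacent regions must receive different colors.
    return vx != vy
```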

