AI Quiz 1
Heuristic function
"h(n)" = estimated cost of the cheapest path from the state at node "n" to a goal state.
Optimally efficient
A property of A* search: no other optimal algorithm is guaranteed to expand fewer nodes than A*, because any algorithm that does not expand all nodes with f(n) < C* runs the risk of missing the optimal solution.
Performance measure
A function that maps a (percept, action) sequence to a real number.
Admissible heuristic
A heuristic that never overestimates the cost to reach the goal.
Goal test
A logical predicate that encompasses a set of goal states.
Local maximum
A peak that is higher than each of its neighboring states but lower than the global maximum.
Relaxed problem
A problem with fewer restrictions on the actions.
Search space
A sequence of states generated by the search operators applied successively to the initial state.
Population
A set of k randomly generated states that GAs (genetic algorithms) begin with.
Mutation
With some probability, a single position in the string describing the chosen state is selected and altered.
Fully observable
A task environment in which an agent's sensors give it access to the complete state of the environment at each point in time.
Search tree
A tree the agent generates while it searches.
Genetic algorithm
A variant of stochastic beam search in which successor states are generated by combining two parent states rather than by modifying a single state.
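A minimal sketch tying this together with the population and mutation entries above: parents are combined at a random crossover point and children are mutated. The bit-string encoding, the toy "count the 1s" fitness function, and all names and parameter values are illustrative assumptions, not from any particular library:

```python
import random

def crossover(a, b):
    """Combine two parent strings at a random crossover point."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(s, rate=0.1):
    """Flip each bit of the state string with a small probability."""
    return "".join(c if random.random() > rate else "10"[int(c)] for c in s)

def genetic_algorithm(fitness, length=8, k=20, generations=60):
    """Start from k random states; breed the fitter half each generation."""
    population = ["".join(random.choice("01") for _ in range(length))
                  for _ in range(k)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: k // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(k)]
        population = children
    return max(population, key=fitness)

random.seed(1)
best = genetic_algorithm(fitness=lambda s: s.count("1"))  # toy "one-max" fitness
```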
Gradient
A vector "∇f" (the gradient) that gives the magnitude and direction of the steepest slope.
Search operators
Actions available to the agent for changing the state of the world.
Simple reflex agent
Agent function maps current state directly to action.
Sequential
Agent's current action affects future actions.
Episodic
Agent's current action does not affect future actions
Completeness
Algorithm always finds a solution if it exists?
Optimality
Algorithm always finds minimum cost solution?
Machine learning
Allows a computer to adapt to new circumstances and to detect and extrapolate patterns.
Robotics
Allows a computer to manipulate objects and move about.
Computer vision
Allows the computer to perceive objects
Rational agent
An agent that always acts to maximize its performance measure, given its percept sequence up to the current moment.
Partially observable
An environment might be this because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data.
Best-first search
An instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function, "f(n)".
Gradient descent
Analog of hill climbing for minimization: from the current state, move in the direction of the negative gradient.
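As an illustrative sketch of the update rule x ← x − α∇f(x) (the function names and the toy quadratic objective are assumptions, not from the source):

```python
def gradient_descent(x, grad, alpha=0.1, steps=100):
    """Repeatedly step against the gradient: x <- x - alpha * grad(x)."""
    for _ in range(steps):
        x = x - alpha * grad(x)
    return x

# Minimize f(x) = (x - 2)^2; its gradient is 2(x - 2), so the minimum is x = 2.
x_min = gradient_descent(10.0, lambda x: 2 * (x - 2))
```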
Greedy local search
Another name for hill climbing search. Very fast in most cases, needs very little memory. Can easily get stuck in local maximum/minimum.
Random-Restart Hill Climbing
Conducts a series of hill-climbing searches from randomly generated initial states, until a goal is found.
Iterative deepening search
DFS where we can iteratively increase the depth parameter, starting with zero. If DFS with current depth fails, increment and restart.
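A sketch of the scheme, assuming a toy successor function (node n has children 2n and 2n + 1); all names are illustrative:

```python
def depth_limited(state, goal, successors, limit):
    """DFS that refuses to go deeper than `limit`."""
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for succ in successors(state):
        result = depth_limited(succ, goal, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(start, goal, successors, max_depth=20):
    """Run depth-limited DFS with limits 0, 1, 2, ...; restart on failure."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, successors, limit)
        if result is not None:
            return result
    return None

# Toy infinite binary tree: node n has children 2n and 2n + 1.
path = iterative_deepening(1, 5, lambda n: [2 * n, 2 * n + 1])
```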
Goal depth
Depth of the shallowest goal in the search tree (d)
Expanding
Generating the successors of a state; doing this to a node also generates that node's children.
Node
Each of these in a search tree represents some state in the search space, with extra bookkeeping information.
Atomic representation
Each state of the world is indivisible. - it has no internal structure
Natural language processing
Enables a computer to communicate successfully in English
Continuous
Environment in which states, percepts, and actions are continuous
Discrete
Environment in which states, percepts, and actions are discrete
Uniform-cost search
Expand the node on the open list with the lowest path cost; otherwise same as BFS. Can be implemented using a priority queue.
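A hedged sketch of that priority-queue implementation; the toy weighted graph and all names are illustrative assumptions:

```python
import heapq

def uniform_cost_search(start, goal, neighbors):
    """Always expand the open-list node with the lowest path cost g(n),
    keeping the open list as a priority queue keyed on g."""
    open_list = [(0, start, [start])]          # (path cost, state, path)
    closed = set()
    while open_list:
        g, state, path = heapq.heappop(open_list)
        if state == goal:
            return path, g
        if state in closed:
            continue
        closed.add(state)
        for succ, step_cost in neighbors(state):
            if succ not in closed:
                heapq.heappush(open_list, (g + step_cost, succ, path + [succ]))
    return None, float("inf")

# Toy weighted graph: the direct edge A->C costs more than going via B.
graph = {"A": [("B", 1), ("C", 10)], "B": [("C", 2)], "C": []}
path, cost = uniform_cost_search("A", "C", lambda s: graph[s])
```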
Depth-first search
Expands deepest nodes on open list first. Can be implemented using a stack
Performance element
Fixed rules (or functions)
Consistency
For every node and successor in the search tree "(n, n')", "h(n)" ≤ "c(n, n')" + "h(n')".
Path cost
Function that assigns a numerical cost to each path.
Learning element
Generates functions, based on its (partial) knowledge of the environment and any feedback it receives.
Critic
Gives feedback to the learning element on how the agent is doing, allowing the agent to determine how the element should be modified to do better in the future.
Artificial intelligence
How to understand and build intelligent entities.
Intelligence
How we think
Condition-action rule
IF "condition" THEN "action".
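Such rules can be sketched as a simple reflex agent for the classic two-square vacuum world; the percept format and action names here are illustrative assumptions:

```python
def simple_reflex_vacuum(percept):
    """Map the current percept straight to an action via
    IF condition THEN action rules."""
    location, status = percept
    if status == "Dirty":        # IF dirty THEN suck
        return "Suck"
    if location == "A":          # IF clean and at A THEN move right
        return "Right"
    return "Left"                # IF clean and at B THEN move left
```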
Pattern databases
If we solve a subproblem of the overall problem exactly, we can use its cost as the heuristic
Priority queue
Implements the open list for uniform-cost search.
Queue
Implements the open list
Utility-based agent
Instead of binary goal, an agent could have fine-grained notion of how useful certain states are.
Stochastic beam search
Instead of choosing the best k from the pool of candidate successors, chooses k successors at random, with the probability of choosing a given successor being an increasing function of its value.
Open list
List of unvisited states
Closed list
List of visited (expanded) states.
Model-based reflex agent
Maps current percept and "world model" to action
Goal-based agent
Maps current percept and knowledge of current goal to action.
Agent function
Mathematical description of an agent's behavior that maps any given percept sequence to an action.
Branching factor
Maximum number of successors of any node (b)
Space Complexity
Memory required to find a solution?
Plateau
Node and all neighbors have the same function value.
A* search
Nodes on open list with smallest "f(n)" = "g(n)" + "h(n)" are expanded first. Expand successors of initial state, choose one with lowest "f(n)" and expand it, etc.
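A sketch of that expansion order using a priority queue keyed on "f(n)" = "g(n)" + "h(n)"; the number-line toy problem and all names are illustrative assumptions:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Expand the open-list node with the smallest f(n) = g(n) + h(n).
    `neighbors(s)` yields (successor, step_cost) pairs."""
    open_list = [(h(start), 0, start, [start])]   # (f, g, state, path)
    closed = set()
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if state == goal:
            return path, g
        if state in closed:
            continue
        closed.add(state)
        for succ, cost in neighbors(state):
            if succ not in closed:
                heapq.heappush(
                    open_list,
                    (g + cost + h(succ), g + cost, succ, path + [succ]))
    return None, float("inf")

# Toy problem: walk along the integers (step cost 1) from 0 to 4,
# with the admissible heuristic h(s) = |4 - s|.
path, cost = a_star(0, 4, lambda s: [(s - 1, 1), (s + 1, 1)],
                    lambda s: abs(4 - s))
```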
Structured representation
Objects such as cows and trucks and their various and varying relationships can be described explicitly.
PEAS
Performance measure, Environment description, description of Actions, description of Sensors.
Optimization problem
Problem in which the aim is to find the best state according to an objective function.
Problem generator
Responsible for suggesting actions that will lead us to new and informative experiences.
Idea of bidirectional search
Run two simultaneous searches, one forward from the initial state and one backward from the goal, checking for intersection between the open lists. If the intersection is nonempty, a path has been found; if BFS is used at each end and the goal depth is d, time and space complexity are reduced to O(b^(d/2)).
Local search
Search algorithms that do not need to track the path. They start at the initial state and apply search operators. If all neighbors have a lower function value or are in the closed list, stop; else, select a neighbor and repeat.
Hill climbing
Search that always moves to the neighbor with the largest function value, provided that value is greater than the current state's.
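A minimal sketch of that loop; the integer number-line neighborhood and the toy objective with a single peak at x = 3 are illustrative assumptions:

```python
def hill_climb(start, neighbors, value):
    """Repeatedly move to the best neighbor; stop when no neighbor
    improves on the current state (a local maximum or plateau)."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current
        current = best

# Maximize f(x) = -(x - 3)^2 over the integers, moving +/-1 at a time.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```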
Breadth-first search
Shallowest (lowest depth) nodes on open list are expanded first. First expand successors of initial state, then their successors, and so on.
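A sketch with the open list as a FIFO queue (per the Queue entry above); the toy binary-tree successor function and all names are illustrative assumptions:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Expand the shallowest node first: the open list is a FIFO queue."""
    open_list = deque([[start]])           # queue of paths to the frontier
    closed = {start}
    while open_list:
        path = open_list.popleft()
        if path[-1] == goal:
            return path
        for succ in successors(path[-1]):
            if succ not in closed:
                closed.add(succ)
                open_list.append(path + [succ])
    return None

# Toy binary tree: node n has children 2n and 2n + 1.
path = breadth_first_search(1, 6, lambda n: [2 * n, 2 * n + 1])
```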
Simulated annealing
Simulating the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to reach a low-energy crystalline state.
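A sketch of the acceptance rule: downhill moves are accepted with probability e^(Δ/T), where the temperature T follows a cooling schedule, so the search gradually hardens into hill climbing. The toy objective, the exponential schedule, and all names and parameters are illustrative assumptions:

```python
import math
import random

def simulated_annealing(start, neighbor, value, schedule):
    """Accept a worse move with probability e^(delta/T); stop once
    the temperature T has cooled to (near) zero."""
    current = start
    for t in range(1, 10_000):
        T = schedule(t)
        if T <= 1e-9:
            return current
        nxt = neighbor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
    return current

random.seed(0)
best = simulated_annealing(
    start=0,
    neighbor=lambda x: x + random.choice([-1, 1]),
    value=lambda x: -(x - 7) ** 2,          # single peak at x = 7
    schedule=lambda t: 100 * 0.95 ** t,     # exponential cooling
)
```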
Step size
Small constant "alpha" that multiplies "∇f(x)" in the formula that updates the current state.
Agent
Software that interacts with the world.
Factored representation
Splits up each state into a fixed set of variables or attributes, each of which can have a value.
Initial state
State of the world an agent starts at, generally represented atomically.
Knowledge representation
Stores what a computer knows or hears.
Uninformed/blind search
Strategies that have no additional information about states beyond that provided in the problem definition.
Informed/heuristic search
Strategies that know whether one non-goal state is "more promising" than another.
Percept
The agent's perceptual input at any given instant.
Effective branching factor
The branching factor that a balanced search tree of depth "d" would need in order to contain "N" + "1" nodes. The better the heuristic, the closer to 1 this value will be.
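The defining equation N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d has no closed form, but b* can be found numerically; a sketch by bisection (the function name and the N = 52, d = 5 example values are illustrative):

```python
def effective_branching_factor(N, d):
    """Solve N = b* + (b*)^2 + ... + (b*)^d for b* by bisection,
    i.e. find the uniform tree of depth d with N + 1 total nodes."""
    def total(b):
        return sum(b ** i for i in range(1, d + 1))
    lo, hi = 1.0, float(N)
    for _ in range(100):
        mid = (lo + hi) / 2
        if total(mid) < N:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g. expanding 52 nodes to find a goal at depth 5 gives b* ~ 1.92
b_star = effective_branching_factor(52, 5)
```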
Percept sequence
The complete history of everything the agent has ever perceived.
Fitness function
The function to be optimized
Problems with bidirectional search
The goal may be defined only by a predicate rather than as an explicit state, and searching backward from the goal constrains the operators (they must be invertible).
Global maximum
The highest peak of the state-space landscape.
Global minimum
The lowest valley in the state-space landscape.
Deterministic
The next state of the world is completely determined by current state and agent's action.
Stochastic
The next state of the world isn't completely determined by current state and agent's action.
Parent node
The origin node that has child nodes branching off of it.
Goal state
The state of the world the agent wants to get to.
Dynamic
The world can change even when the agent is inactive.
Static
The world does not change until the agent takes an action
Multiagent
The world has more than one agent in it
Single Agent
The world has only one agent in it
Time Complexity
Time taken to find a solution?
Crossover
Two states are chosen and a "crossover point" is marked. The operator then produces two new states.
Sensor
Used by an agent to perceive its environment.
Actuator
What an agent uses to act upon its environment.
Child node
What branches off from the parent node
Current node
What local search algorithms operate with instead of multiple paths.
Environment
What the agent perceives and acts upon.
Step cost
c(s, a, s')