AI 2 - Intelligent Agents
percept sequence
A ________ ________ is the complete history of everything the agent has ever perceived
evolves independently, actions, world
A model-based reflex agent requires two types of knowledge to be modeled: information about how the world _________ _________, and information about how the agent's _______ affect the ________
competitive multiagent
A multi-agent environment in which agents work against each other; actions that improve one agent's performance tend to reduce the performance of the other agents
cooperative multiagent
A multi-agent environment in which agents work together for a common cause
Agent function
An abstract mathematical concept that describes an agent
actuators
An agent acts upon the environment using ________
behavior, maps, action
An agent function describes the agent's ________ and ________ any given percept sequence to a(n) _________
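As a minimal sketch in Python (assuming a toy two-square vacuum world; the function and percept names are illustrative, not from these notes), the agent function takes the whole percept sequence and returns an action:

```python
# Illustrative only: an agent function maps an entire percept sequence
# to an action. Percepts are (location, status) tuples from a toy
# two-square vacuum world.

def vacuum_agent_function(percept_sequence):
    """Return an action based on the complete percept history."""
    location, status = percept_sequence[-1]  # most recent percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# Defined for every possible (non-empty) percept sequence:
print(vacuum_agent_function([("A", "Dirty")]))                   # Suck
print(vacuum_agent_function([("A", "Clean"), ("B", "Clean")]))   # Left
```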
sensors
An agent perceives its environment using ________
goal-based agent
An agent that asks "what will happen if I do this?" and "will that action make me happy?"; it needs some kind of information that describes desirable situations (its goals)
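A rough Python sketch of the idea, assuming a toy one-dimensional world and an invented predict() transition model: the agent tries each action, predicts the resulting state, and returns an action whose prediction satisfies the goal.

```python
# Illustrative sketch: the agent predicts "what will happen if I do
# this" with a toy transition model and returns an action whose
# predicted outcome satisfies the goal. All names here are assumptions.

def predict(state, action):
    """Toy transition model: the state is a position on a number line."""
    return state + {"left": -1, "stay": 0, "right": 1}[action]

def goal_based_agent(state, goal, actions=("left", "stay", "right")):
    for action in actions:
        if predict(state, action) == goal:  # "will that make me happy?"
            return action
    return "stay"  # no single action reaches the goal

print(goal_based_agent(state=2, goal=3))  # right
```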
simple reflex agent
An agent that selects actions based on the current percept and ignores the percept history
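A minimal Python sketch, again assuming the toy vacuum world: the condition-action rules are keyed only by the current percept, so the percept history never enters the decision.

```python
# Toy vacuum-world sketch: condition-action rules keyed only by the
# current percept; the percept history is ignored entirely.

RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Select an action from the current percept alone."""
    return RULES[percept]

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("B", "Clean")))  # Left
```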
model-based reflex agent
An agent that tracks part of the world that it can't see. It maintains an internal state that depends on percept history and reflects some unobserved aspects of the current state
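A hedged Python sketch of this structure in the same toy vacuum world (the class, state, and action names are assumptions): the agent keeps an internal model of both squares so it can remember the status of the square it cannot currently see.

```python
# Illustrative model-based reflex agent for a two-square vacuum world.

class ModelBasedReflexAgent:
    def __init__(self):
        # Internal state: best guess of each square's status, including
        # the square the agent cannot currently see.
        self.state = {"A": "Unknown", "B": "Unknown"}

    def update_state(self, percept):
        location, status = percept
        # The current percept tells us about the square we are on; the
        # model assumes unseen squares keep their last known status.
        self.state[location] = status
        return location

    def act(self, percept):
        location = self.update_state(percept)
        if self.state[location] == "Dirty":
            # Knowledge of action effects: Suck makes this square Clean.
            self.state[location] = "Clean"
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.state[other] != "Clean":
            return "Right" if location == "A" else "Left"
        return "NoOp"  # everything believed clean

agent = ModelBasedReflexAgent()
print(agent.act(("A", "Dirty")))   # Suck
print(agent.act(("A", "Clean")))   # Right (B's status still unknown)
print(agent.act(("B", "Clean")))   # NoOp (remembers A is clean)
```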
utility-based agent
An agent that uses an internal utility function that agrees with an external performance measure to maximize its usefulness.
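An illustrative Python sketch with made-up numbers: the agent's internal utility function scores possible outcome states, and the agent picks the action with the highest expected utility under a toy stochastic transition model.

```python
# Illustrative utility-based choice: score outcomes with an internal
# utility function and pick the action with the highest expected
# utility. The numbers and model below are made up.

def utility(state):
    # Assumed utility: one point per clean square.
    return sum(1 for status in state.values() if status == "Clean")

def expected_utility(action, transition_model):
    """transition_model[action] is a list of (probability, state) pairs."""
    return sum(p * utility(state) for p, state in transition_model[action])

def utility_based_agent(transition_model, actions):
    return max(actions, key=lambda a: expected_utility(a, transition_model))

# Toy stochastic model: Suck cleans square A with probability 0.9.
transition_model = {
    "Suck":  [(0.9, {"A": "Clean", "B": "Dirty"}),
              (0.1, {"A": "Dirty", "B": "Dirty"})],
    "Right": [(1.0, {"A": "Dirty", "B": "Dirty"})],
}
print(utility_based_agent(transition_model, ["Suck", "Right"]))  # Suck
```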
partially observable environment
An environment in which an agent's sensors do not give it access to the complete state of the environment at each point in time
fully observable environment
An environment in which an agent's sensors give it access to the complete state of the environment at each point in time
Single Agent environment
An environment in which a single agent operates alone
Multiple Agent environment
An environment in which several agents operate
known environment
An environment in which the agent knows the outcomes (or their probabilities) for all actions; it requires no learning
unknown environment
An environment in which the agent must learn how the environment works in order to make good decisions
episodic environment
An environment divided into episodes in which the agent perceives the environment and then executes a single action; the choice of action in each episode depends only on that episode, not on previous ones
sequential environment
An environment in which the current decision could affect all future decisions
deterministic environment
An environment in which the next state is completely determined by the current state and the action executed by the agent
stochastic environment
An environment in which the next state is not completely determined by the current state and the agent's action
dynamic environment
An environment that can change while the agent is deliberating
semidynamic environment
An environment that does not change with the passage of time, but the agent's performance score does
static environment
An environment that does not change while the agent is deliberating
discrete environment
An environment that has a limited number of distinct, clearly defined states, percepts, and actions; the distinction also applies to the way time is handled
continuous environment
An environment in which states, percepts, and actions vary continuously, with no clear division into distinct steps; the distinction also applies to the way time is handled
Agent program
Concrete implementation of an agent
table, percept sequences, behaviors
In external characterization, one can create a ______ for a given agent that describes the agent based on its _________ _________ and __________
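A small Python sketch of this external, table-based view (the table entries are invented): the action is looked up using the entire percept sequence as the key, so the table by itself describes the agent's behavior.

```python
# Illustrative table-driven view of an agent: the key is the whole
# percept sequence observed so far, and the value is the action.

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []  # the growing percept sequence

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")

print(table_driven_agent(("A", "Clean")))  # Right
print(table_driven_agent(("B", "Dirty")))  # Suck
```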
agent program, agent function
In internal characterization, the ______ _______ is the internal implementation of the _______ _______
percept
The agent's perceptual inputs at any given instant
Environment
The problem space
Performance Measure
What the agent wants to maximize