chapter 2: intelligent agents


define a simple reflex agent

a simple reflex agent bases its action on the current percept only; by ignoring percept history, it cuts down the number of possibilities it must consider. it uses condition-action (if-then) rules: sensors detect what the world is like, the condition-action rules answer the query "what action should I do now?", then the actuators act. caveat: this only works if the correct decision can be made on the basis of ONLY the current percept
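below is a minimal sketch of the condition-action idea for the two-square vacuum world; the percept format (location, status) and the rule set are assumptions for illustration, not part of the definition:

```python
# simple reflex agent for the two-square vacuum world (illustrative)
def reflex_vacuum_agent(percept):
    """Select an action from the current percept only."""
    location, status = percept        # e.g. ("A", "Dirty")
    if status == "Dirty":             # condition-action rules
        return "Suck"
    elif location == "A":
        return "Right"
    else:                             # must be square B
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left
```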

how can one escape an infinite loop from a simple reflex agent?

the agent can randomize its actions instead; infinite loops are often unavoidable for simple reflex agents in partially observable environments, and randomization lets the agent break out of them
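a hedged sketch of the escape trick, reusing the vacuum action names from above (the rules-as-dict format is an assumption):

```python
import random

# reflex agent that randomizes when its rules give no unique answer,
# which is exactly the situation that produces deterministic loops
def randomized_reflex_agent(percept, rules):
    action = rules.get(percept)
    if action is None:
        action = random.choice(["Left", "Right", "Suck"])  # break the loop
    return action
```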

define a goal-based agent

the agent has goal information that describes situations that are desirable; the program combines this with a model to choose actions that achieve the goal. it considers the future in terms of "what will happen if I do XYZ" *and* "what will make me happy". flexible b/c the knowledge that supports its decisions is represented explicitly and can be modified (for example, a reflex agent's rules for when to turn only work for a single destination) http://imgur.com/4u9R37x
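a tiny sketch of goal-based selection, assuming a one-step model result(state, action) and a goal test (both hypothetical names); real goal-based agents use search and planning to find multi-step action sequences:

```python
# pick any action whose predicted outcome satisfies the goal
def goal_based_choice(state, actions, result, goal_test):
    for action in actions:
        if goal_test(result(state, action)):  # "what will happen if I do this?"
            return action
    return None  # no single action suffices; search/planning would be needed
```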

what is the difference between the agent function and the agent program?

the agent program takes in just the current percept as input b/c nothing more is available from the environment; the agent function takes in the entire percept history

define a percept

agent's perceptual inputs at any given instant

define an agent in terms of architecture+program

agent = architecture + program: the agent program implements the agent function, and the architecture is the computing device (with physical sensors and actuators) that runs it

discrete vs. continuous

applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. chess is discrete (a finite number of distinct states, with a discrete set of percepts and actions); taxi driving is continuous in state, time, percepts, and actions

define the differences between atomic, factored, and structured representations

atomic: each state of the world is indivisible, with no internal structure (used by search algorithms)
factored: each state has a fixed set of variables or attributes, each of which can have a value
structured: objects and their various/varying relationships to one another can be described explicitly
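one way to make the contrast concrete is with toy data for a driving domain (all names and values below are made up for illustration):

```python
# atomic: the state is an opaque, indivisible label
atomic_state = "state_42"

# factored: a fixed set of variables, each with a value
factored_state = {
    "fuel": 0.7,
    "gps": (37.77, -122.42),
    "oil_warning": False,
}

# structured: explicit objects plus relations among them
structured_state = {
    "objects": ["truck", "cow"],
    "relations": [("in_front_of", "cow", "truck")],
}
```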

why is an external performance standard required for utility based agents (in learning)?

b/c the agent needs a standard it cannot modify: the external performance standard tells it which percepts count as a reward/penalty for its actions; if the standard were internal, the agent could adjust it to rate its own behavior as perfect

why are agents in an environment that is completely known so susceptible to failure?

b/c they don't have to perceive or learn; they simply act correctly. they never learn how to adapt to changing conditions

competitive vs. cooperative multiagent environment

competitive: maximizing one's own gain while also minimizing the other's. cooperative: (possibly) maximizing everyone's gain. communication can emerge as rational behavior in multiagent environments

define a percept sequence

the complete history of everything the agent has ever perceived; an agent's choice of action @ any given instant can depend on the entire percept sequence observed to date, but NOT on anything it has NOT perceived yet

deterministic vs. stochastic vs. nondeterministic

deterministic: the next state is completely determined by the current state and the action executed by the agent
stochastic: there are probabilities associated with what happens in the environment
nondeterministic: actions are characterized by their possible outcomes, but no probabilities are attached to them--usually associated with performance measures requiring the agent to succeed in all possible outcomes
caveat: if the environment is partially observable, it can appear to be stochastic

what is information gathering?

doing actions in order to modify future percepts; provided by exploration (think of the vacuum cleaner exploring unknown squares). an agent gathers information by, e.g., looking both ways before crossing a road

static vs. dynamic

dynamic: the environment can change while the agent is deliberating (not having decided yet counts as deciding to do nothing)
static: the world does not change, so the agent does not need to keep looking at it while choosing an action
semidynamic: the environment itself does not change with the passage of time, but the agent's performance score does (like chess with a clock)

episodic vs. sequential

episodic: the agent's experience is divided into atomic episodes; each episode does not depend on the actions taken in previous episodes
sequential: the current decision can affect all future decisions (chess and taxi driving are sequential)

define a performance measure

evaluates any given sequence of environment states

define a fully observable/partially observable/unobservable environment

fully: the agent's sensors give it access to the complete state of the environment at each point in time; no need to maintain internal state to keep track of the world
partially: the view is obscured because parts of the state are simply missing from the sensor data, or b/c sensors are noisy and inaccurate
unobservable: no sensors at all

draw a model-based, utility-based agent

http://imgur.com/2Ic1WRI

draw a general learning agent and explain it

http://imgur.com/aLSfH1Z
-the learning element is responsible for making improvements; it changes the "knowledge" components of an agent, like "how the world evolves" or "what my actions do"
-the performance element is responsible for selecting external actions (it's what we previously considered to be the entire agent: it takes in percepts and decides on actions)
-the critic gives feedback to the learning element on how the agent is doing
-the problem generator suggests actions that will lead to new and informative experiences

define a model-based reflex agent

http://imgur.com/oT3z4ub THIS IS STILL A REFLEX AGENT (it just uses a model, i.e. internal state, to handle what it can't currently see)

what is an appropriate performance measure for a vacuum robot?

is the floor clean or not (note: how much dirt is picked up would be a bad measure, b/c the agent could maximize it by dumping dirt and cleaning it up again)
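a hedged sketch of the "clean floor" measure: one point per clean square per time step (the state format here is an assumption):

```python
# performance measure: reward clean squares over time, not dirt sucked up
def performance(history):
    """history: list of environment states, one per time step;
    each state maps square -> "Clean" or "Dirty" (assumed format)."""
    return sum(1 for state in history
                 for status in state.values()
                 if status == "Clean")

print(performance([{"A": "Clean", "B": "Dirty"},
                   {"A": "Clean", "B": "Clean"}]))  # -> 3
```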

why is a model important for an agent?

it can keep track of the part of the world the agent can't see now; the agent maintains some sort of internal state that depends on the percept history. to keep that state updated we need 1) info about how the world evolves independently of the agent, and 2) info about how the agent's own actions affect the world--this knowledge is the model
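a minimal sketch of the update cycle, with hypothetical names for the two kinds of knowledge (transition_model covering 1 and 2, sensor_model for how percepts reflect the world):

```python
# model-based reflex agent: internal state + condition-action rules
class ModelBasedReflexAgent:
    def __init__(self, transition_model, sensor_model, rules):
        self.state = {}              # internal picture of the world
        self.last_action = None
        self.transition_model = transition_model
        self.sensor_model = sensor_model
        self.rules = rules           # maps state -> action

    def __call__(self, percept):
        # predict how the world evolved, then fold in the new percept
        self.state = self.transition_model(self.state, self.last_action)
        self.state = self.sensor_model(self.state, percept)
        self.last_action = self.rules(self.state)
        return self.last_action
```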

known vs. unknown

known: the outcomes (or outcome probabilities, if the environment is stochastic) for all actions are given unknown: the agent will have to learn how the environment works in order to make good decisions

define an environment class

a set of many environments over which an agent is evaluated; rationality is judged by average performance across the class, not by performance in one hand-picked environment

agent function

maps any given percept sequence to an action
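the table-driven idea makes this literal: the whole percept sequence is the key into a table of actions. a sketch (the table contents are placeholders):

```python
# agent function as an explicit lookup table over percept sequences
def make_table_driven_agent(table):
    percepts = []                    # remembers the full percept sequence

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # percept sequence -> action

    return agent

agent = make_table_driven_agent({(("A", "Dirty"),): "Suck"})
print(agent(("A", "Dirty")))  # -> Suck
```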

what is an uncertain environment?

not fully observable/not deterministic

define a rational agent

an agent that does everything correctly--every entry in the table for the agent function is filled out correctly. for each possible percept sequence, it selects an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has

define PEAS

performance, environment, actuators, and sensors

define rationality (in comparison to omniscience)

rational maximizes *expected* performance, perfection maximizes actual performance

define an agent program

the agent function will be implemented by the program; the function is an abstract mathematical description, agent program is a concrete implementation

define omniscience

the agent knows the *actual* outcome of its actions and can act accordingly--impossible in reality

why do we use an environment state, NOT an agent state?

to avoid circular reasoning by the agent--an agent could believe it's achieving its goal when it really isn't ("I'm perfect in every way!"). general rule: design performance measures according to what one really wants in the environment, not according to how one thinks the agent should behave

define utility and the utility function

utility is (roughly) the happiness gained. the utility function is an internalization of the performance measure; the agent chooses actions to maximize its expected utility. utility is good for deciding among conflicting goals and for ranking goals (when there are several goals the agent can aim for, none of which can be achieved w/ certainty)
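a hedged sketch of "maximize expected utility"; outcomes(action) yielding (probability, state) pairs and the utility function are assumed inputs:

```python
# choose the action with the highest expected utility
def best_action(actions, outcomes, utility):
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(action))
    return max(actions, key=expected_utility)
```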

when does an agent lack autonomy?

when it relies on the prior knowledge of its designer rather than on its own percepts. it should learn what it can, to compensate for partial or incorrect prior knowledge

