AIMA 2nd Edition Chapter 2: Intelligent Agents

true

As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.

INFORMATION GATHERING

Doing actions in order to modify future percepts

software agents

(or software robots or softbots) exist in rich, unlimited domains.

randomize

Escape from infinite loops is possible if the agent can ____________ its actions.
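The chapter's two-square vacuum world makes this concrete: a reflex agent that always moves the same way can bounce between walls forever, while one that randomizes its movement eventually escapes. A minimal Python sketch (the agent name and percept format here are illustrative assumptions, not from the text):

```python
import random

def randomized_reflex_vacuum_agent(percept):
    """Illustrative reflex agent for the two-square vacuum world.

    Sucks when the current square is dirty; otherwise moves in a
    random direction, so it cannot loop between walls forever.
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return random.choice(["Left", "Right"])
```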

definition of a rational agent

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

FULLY OBSERVABLE

If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is ________

learning

All agents can improve their performance through _________

partially observable

An environment might be ________ because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data

autonomous

A rational agent should be _______: it should learn what it can to compensate for partial or incorrect prior knowledge

semidynamic

If the environment itself does not change with the passage of time but the agent's performance score does, then we say the environment is _________

deterministic

If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is __________

true

In general, an agent's choice of action at any given instant can depend on the entire percept sequence observed to date.

episodic

In this task environment, the agent's experience is divided into atomic episodes.

learn

Our definition requires a rational agent not only to gather information, but also to _________ as much as possible from what it perceives

simple reflex agent

The simplest kind of agent

simple reflex agent

These agents select actions on the basis of the current percept, ignoring the rest of the percept history

ARCHITECTURE

We assume this program will run on some sort of computing device with physical sensors and actuators called _________

Goal-based agents

act to achieve their goals

rational agent

acts so as to maximize the expected value of the performance measure, given the percept sequence it has seen so far.

architecture + program

agent = _____________ + ____________

rational agent

agent that does the right thing

agent function

agent's behavior is described by the ___________

Utility-based agents

allow a comparison of different world states according to exactly how happy they would make the agent if they could be achieved

Goal-based agents

as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable

true

communication often emerges as a rational behavior in multiagent environments

problem generator

component of a learning agent that is responsible for suggesting actions that will lead to new and informative experiences.

PEAS (performance measure, environment, actuators, sensors)

description of the task environment

TABLE-DRIVEN-AGENT

does do what we want: it implements the desired agent function
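A minimal Python sketch of a table-driven agent (the toy vacuum-world table fragment below is an illustrative assumption; a real table would need an entry for every possible percept sequence, which is what makes the approach infeasible in practice):

```python
def make_table_driven_agent(table):
    """Return an agent function backed by a lookup table.

    `table` maps complete percept sequences (as tuples) to actions;
    the closure retains the percept history between calls.
    """
    percepts = []

    def agent(percept):
        percepts.append(percept)           # append percept to the history
        return table.get(tuple(percepts))  # look up the full sequence

    return agent

# Fragment of a table for the two-square vacuum world (illustrative).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
```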

learning element

element that uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.

performance measure

embodies the criterion for success of an agent's behavior

discrete

environment that has a finite number of distinct states, as well as a discrete set of percepts and actions.

continuous

environment whose state variables, such as intensities and locations, vary continuously over a range of values.

task environments

essentially the "problems" to which rational agents are the "solutions."

exploration

example of information gathering

utility function

function that maps a state (or a sequence of states) onto a real number, which describes the associated degree of happiness.
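As an illustration (the state representation and scoring below are assumptions, not from the text), a utility function for the two-square vacuum world might award one point per clean square:

```python
def utility(state):
    """Map a vacuum-world state onto a real number: one point
    for each square known to be clean (illustrative scoring)."""
    return float(sum(1 for square in ("A", "B") if state.get(square) == "Clean"))

# A utility-based agent prefers the state with the higher value, e.g.
# a state with both squares clean over one with a dirty square.
```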

INTERPRET-INPUT function

generates an abstracted description of the current state from the percept

stochastic

an environment may best be treated as _________ if it is complex, making it hard to keep track of all the unobserved aspects

environment class

implementations of a number of environments, together with a general-purpose environment simulator that places one or more agents in a simulated environment, observes their behavior over time, and evaluates them according to a given performance measure. Such experiments are often carried out not for a single environment, but for many environments drawn from an ___________

agent program

implements the agent function mapping percepts to actions

sequential

in these environments, the current decision could affect all future decisions.

task environment specification

includes the performance measure, the external environment, the actuators, and the sensors. In designing an agent, the first step must always be to specify the task environment as fully as possible.

agent

is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators

omniscience/ omniscient agent

knows the actual outcome of its actions and can act accordingly; such an agent is impossible in reality

performance element

learning agent element that takes in percepts and decides on actions

model-based reflex agents

maintain internal state to track aspects of the world that are not evident in the current percept.

Learning agents

operate in initially unknown environments and become more competent than their initial knowledge alone might allow

Simple reflex agent

respond directly to percepts

RULE-MATCH function

returns the first rule in the set of rules that matches the given state description
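Together with INTERPRET-INPUT, this completes the simple reflex agent program. A Python sketch under assumed representations (rules as condition-predicate/action pairs; the vacuum-world rules are illustrative):

```python
def interpret_input(percept):
    """Generate an abstracted description of the current state.
    Here the percept already serves as the state (illustrative)."""
    return percept

def rule_match(state, rules):
    """Return the action of the first rule that matches the state."""
    for condition, action in rules:
        if condition(state):
            return action
    return None

def make_simple_reflex_agent(rules):
    """Simple reflex agent: selects an action on the current percept only."""
    def agent(percept):
        state = interpret_input(percept)
        return rule_match(state, rules)
    return agent

# Condition-action rules for the two-square vacuum world (illustrative).
rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]
agent = make_simple_reflex_agent(rules)
```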

environment generator

selects particular environments (with certain likelihoods) in which to run the agent.

true

a simple reflex agent will work only if the correct decision can be made on the basis of the current percept alone; that is, only if the environment is fully observable.

condition-action rule

an established connection in the agent program from a perceived condition to an action

agent function

specifies the action taken by the agent in response to any percept sequence

true

stochastic behavior is rational because it avoids the pitfalls of predictability.

agent program

the agent function for an artificial agent will be implemented by an _________

Model-based reflex agents

agents that maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
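A Python sketch of such an agent, with a hypothetical update_state that folds the old internal state, the last action, and the new percept into a new state description (the vacuum-world model and rules below are illustrative assumptions):

```python
def make_model_based_agent(rules, update_state, initial_state):
    """Model-based reflex agent: keeps internal state across percepts."""
    memory = {"state": initial_state, "action": None}

    def agent(percept):
        # Fold the new percept into the internal state description.
        memory["state"] = update_state(memory["state"], memory["action"], percept)
        for condition, action in rules:  # the first matching rule fires
            if condition(memory["state"]):
                memory["action"] = action
                return action
        memory["action"] = None
        return None

    return agent

# Illustrative model: remember the last known status of each square.
def update_state(state, action, percept):
    location, status = percept
    new_state = dict(state)
    new_state[location] = status
    new_state["loc"] = location
    return new_state

rules = [
    (lambda s: s.get(s["loc"]) == "Dirty", "Suck"),
    (lambda s: s.get("A") == "Dirty", "Left"),
    (lambda s: True, "Right"),
]
agent = make_model_based_agent(rules, update_state, {})
```

Unlike the simple reflex agent, this one can act on the remembered status of a square it is not currently sensing.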

percept

the agent's perceptual inputs at any given instant

percept sequence

the complete history of everything the agent has ever perceived

reward or penalty

the performance standard distinguishes part of the incoming percept as a ________ (or ________) that provides direct feedback on the quality of the agent's behavior

dynamic

these environments are continuously asking the agent what it wants to do; if it hasn't decided yet, that counts as deciding to do nothing.

static

these environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.

utility-based agents

try to maximize their own expected "happiness."

UPDATE-STATE function

which is responsible for creating the new internal state description

