Chapter 2 Intelligent Agents

What is the formal definition of a rational agent?

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

What does the basic agent program skeleton do, and what is a table-driven agent?

All agent programs have the same skeleton: they take the current percept as input and return an action to the actuators. A simple table-driven agent appends the current percept to its percept sequence and returns an action from a lookup table indexed by that percept sequence.
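A minimal Python sketch of the idea, assuming a pre-built lookup table; the table contents below are an illustrative toy for the two-square vacuum world, not something given in the text:

percepts = []  # percept sequence observed so far

def table_driven_agent(percept, table):
    # Append the latest percept and look up the action stored for
    # the whole percept sequence (None if the table has no entry).
    percepts.append(percept)
    return table.get(tuple(percepts))

toy_table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
}
print(table_driven_agent(("A", "Dirty"), toy_table))  # -> "Suck"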

What is an agent and what are a software agent's features?

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. A software agent receives file contents, network packets, and human input as sensory inputs and acts on the environment by writing files or displaying information.

What kinds of agents can learn, and what are the elements of a learning agent?

Any type of agent (model-based, goal-based, utility-based, reflex) can be built as a learning agent, or not. The two main elements of a learning agent are the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions.

What are the four things that a rational decision at any given time depends on?

1. The performance measure that defines the criterion of success.
2. The agent's prior knowledge of the environment.
3. The actions that the agent can perform.
4. The agent's percept sequence to date.

What is a deterministic, nondeterministic, and stochastic environment?

A deterministic environment is one in which the next state of the environment is completely determined by the current state and the action executed by the agent. The environment is stochastic if the uncertainty about outcomes is quantified with probabilities, and nondeterministic if the possible outcomes are merely listed without being quantified.

What is a goal-based agent, and what are the subfields of AI devoted to finding its actions?

A goal-based agent combines information about the desired goal with the world model of the model-based agent to choose actions that achieve the goal. Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.

What is a percept and what does the agent use to take an action?

A percept is the content an agent's sensors are perceiving. Percept sequence is the complete history of everything the agent has perceived. An agent's choice of action at any given instant can depend on its built-in knowledge and on the entire percept sequence observed to date.

What is an episodic and sequential task environment?

A task environment is episodic if each episode does not depend on the actions taken in previous episodes; the agent's choice depends only on the current episode. A task environment is sequential if the current decision can affect all future decisions, that is, if actions have long-term consequences.

What is a fully observable environment and why is it convenient?

If an agent's sensors give it access to the complete state of the environment at each point in time, then we say the task environment is fully observable. If the task environment is fully observable, an agent need not maintain any internal state to keep track of the world.

What are simple reflex agents and how do they work?

Simple reflex agents select an action based on the current percept only, ignoring the past.
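A minimal sketch in Python, assuming the two-square vacuum world; the condition-action rules below are illustrative, not a prescribed implementation:

def simple_reflex_vacuum_agent(percept):
    # percept is a (location, status) pair; the agent acts on it
    # alone, with no memory of earlier percepts.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> "Suck"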

When do simple reflex agents work and when do they fail, what kinds of problems do they encounter, and how can these problems be solved?

Simple reflex agents work only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. An agent can randomize its actions to escape an infinite loop, although randomized behavior of this kind is usually rational only in some competitive multi-agent environments.
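A sketch of the randomized escape, under the assumption that the location sensor is missing, so the agent cannot tell square A from square B and a fixed move could loop forever:

import random

def randomized_reflex_vacuum_agent(status):
    # Only the dirt status is observable; when the square is clean,
    # flip a coin instead of always moving the same way.
    if status == "Dirty":
        return "Suck"
    return random.choice(["Left", "Right"])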

What is a task environment and how do we describe it?

Task environments are essentially the problems to which rational agents are the solution. We specify a task environment using the PEAS description: Performance measure, Environment, Actuators, and Sensors.
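As a quick illustration, a PEAS description of the simple vacuum world could be written down like this (the dictionary encoding is just a convenient sketch, not a required format):

peas_vacuum_world = {
    "Performance measure": "one point per clean square per time step",
    "Environment": "two squares, A and B, each possibly dirty",
    "Actuators": ["Left", "Right", "Suck"],
    "Sensors": ["location", "dirt status"],
}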

What is the key challenge for AI and what are the four basic kinds of agents?

The key challenge for AI is to find out how to write programs that, to the extent possible, produce rational behavior from a smallish program rather than from a vast table. The four basic kinds of agents are: simple reflex, model-based reflex, goal-based, and utility-based.

What is the agent function?

The agent function maps any given percept sequence to an action.

What is the most effective way to handle partial observability and how is this done?

The most effective way to handle partial observability is for the agent to keep track of the part of the world it cannot see now. The internal state kept by a model-based reflex agent tracks both how the agent's own actions affect the world and how the world evolves independently of the agent.
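A minimal structural sketch in Python; the update_state placeholder, the rule lookup, and the example rule are assumptions for illustration rather than a specific implementation:

class ModelBasedReflexAgent:
    """Keeps an internal state updated from the latest percept,
    the last action, and a model of how the world evolves."""

    def __init__(self, model, rules):
        self.model = model        # how the world evolves / effects of actions
        self.rules = rules        # state -> action mapping
        self.state = None         # internal description of the world
        self.last_action = None

    def __call__(self, percept):
        # Combine old state, last action, current percept, and model.
        self.state = self.update_state(self.state, self.last_action,
                                       percept, self.model)
        action = self.rules.get(self.state, "NoOp")
        self.last_action = action
        return action

    def update_state(self, state, action, percept, model):
        # Placeholder: a real agent would use the model here.
        return percept

agent = ModelBasedReflexAgent(model=None, rules={("A", "Dirty"): "Suck"})
print(agent(("A", "Dirty")))  # -> "Suck"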

What is the performance element and learning element and what is the problem generator?

The performance element is what we have previously considered to be the entire agent (whether utility-based, goal-based, etc.): it takes in percepts and decides on actions. The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future. The problem generator is the component of a learning agent that suggests exploratory actions, which may be suboptimal in the short run but lead to new and informative experiences.
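A purely structural sketch of how these components might be wired together; the class and method names (evaluate, improve, choose_action, maybe_explore) are assumptions, not an interface from the text:

class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # picks external actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # scores behavior against the performance standard
        self.problem_generator = problem_generator      # proposes exploratory actions

    def step(self, percept):
        feedback = self.critic.evaluate(percept)
        self.learning_element.improve(self.performance_element, feedback)
        action = self.performance_element.choose_action(percept)
        # Occasionally try something new to gain informative experience.
        return self.problem_generator.maybe_explore(action)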

What is utility, and utility function?

The word utility is used to describe the desirability of a state. An agent's utility function is essentially an internalization of the performance measure.

What are two cases in which goals are inadequate, and how do we make a rational decision when they are?

There are two cases in which goals are inadequate but a utility function can be used to make rational choices: first, when goals conflict (such as speed and safety), the utility function specifies the appropriate trade-off; second, when there are several goals the agent can aim for, none of which can be achieved with certainty, utility provides a way to weigh the likelihood of success against the importance of the goals.
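A minimal sketch of choosing an action by expected utility; the outcome probabilities and utility values below are invented for illustration:

def expected_utility(action, outcomes):
    # outcomes: list of (probability, utility) pairs for this action.
    return sum(p * u for p, u in outcomes)

# Hypothetical trade-off between speed and safety:
outcome_model = {
    "drive_fast":   [(0.90, 10.0), (0.10, -100.0)],  # quicker, small crash risk
    "drive_slowly": [(0.99,  6.0), (0.01, -100.0)],  # slower, safer
}
best = max(outcome_model, key=lambda a: expected_utility(a, outcome_model[a]))
print(best)  # -> "drive_slowly"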

What does it mean to say an agent performs well and how do we create a system that performs well?

We say an agent has performed well if a sequence of actions causes the environment to go through a desirable sequence of states. We design a performance measure according to what we want to be achieved in the environment, rather than how we think the agent should behave.

What is the first step to designing an agent?

When designing an agent, the first step must always be to specify the task environment as fully as possible.

