AI: Chapter 2

learning agent

A learning agent can be divided into four conceptual components, as shown in Figure 2.15. The most important distinction is between the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions. The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future. The last component of the learning agent is the problem generator. It is responsible for suggesting actions that will lead to new and informative experiences.
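As a rough sketch, the four components might fit together like this in Python (all class and method names here are hypothetical, not from the book):

```python
class LearningAgent:
    """Hypothetical skeleton showing how the four components interact."""

    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # scores behavior against a fixed standard
        self.problem_generator = problem_generator      # proposes informative, exploratory actions

    def step(self, percept):
        # The critic tells the learning element how well the agent is doing...
        feedback = self.critic.evaluate(percept)
        # ...and the learning element modifies the performance element accordingly.
        self.learning_element.update(self.performance_element, feedback)
        # The problem generator may override with an exploratory action.
        exploratory = self.problem_generator.suggest(percept)
        return exploratory if exploratory is not None else \
            self.performance_element.choose_action(percept)
```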

Utility-based agents

A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent.
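A minimal sketch of the idea: the agent picks the action with the highest expected utility of the resulting state. The transition_model and utility interfaces below are assumptions for illustration:

```python
def choose_action(actions, transition_model, utility):
    """Pick the action whose expected resulting-state utility is highest.

    Assumed interfaces: transition_model(action) -> iterable of
    (probability, state) pairs; utility(state) -> float.
    """
    def expected_utility(action):
        return sum(p * utility(s) for p, s in transition_model(action))
    return max(actions, key=expected_utility)
```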

Fully observable vs. partially observable

If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data. For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking. If the agent has no sensors at all, then the environment is unobservable.

Static vs. dynamic

If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static.

Deterministic vs. stochastic

If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic.

Episodic vs. sequential

In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes. In sequential environments, on the other hand, the current decision could affect all future decisions.

goal-based agent

The agent program can combine goal information with the model (the same information as was used in the model-based reflex agent) to choose actions that achieve the goal.
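A minimal sketch, assuming hypothetical model and goal_test functions (a real goal-based agent would typically search over action sequences, not single actions):

```python
def goal_based_action(state, actions, model, goal_test):
    """Return an action the model predicts will achieve the goal, if any.

    Assumed interfaces: model(state, action) -> predicted next state;
    goal_test(state) -> bool.
    """
    for action in actions:
        if goal_test(model(state, action)):
            return action
    return None  # no single action reaches the goal from this state
```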

Discrete vs. continuous

The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, the chess environment has a finite number of distinct states (excluding the clock). Chess also has a discrete set of percepts and actions. Taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi and of the other vehicles sweep through a range of continuous values and do so smoothly over time.

Single agent vs. multiagent

The distinction between single-agent and multiagent environments may seem simple enough: an agent solving a crossword puzzle by itself is in a single-agent environment, whereas an agent playing chess is in a (competitive) two-agent environment.
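The six distinctions above can be collected into a per-task profile. A hypothetical classification of taxi driving (the field names are my own, not the book's):

```python
from dataclasses import dataclass

@dataclass
class TaskEnvironmentProfile:
    """Hypothetical record of the six environment distinctions."""
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool

# Taxi driving, classified along all six dimensions.
taxi_driving = TaskEnvironmentProfile(
    fully_observable=False,  # can't see what other drivers are thinking
    deterministic=False,     # traffic and passengers behave stochastically
    episodic=False,          # sequential: each decision affects later ones
    static=False,            # the world changes while the agent deliberates
    discrete=False,          # continuous speeds, positions, and time
    single_agent=False,      # multiagent: other drivers share the road
)
```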

simple reflex agent

These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
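A sketch modeled on the book's vacuum-world reflex agent, where the percept is a (location, status) pair; the exact rule set here is illustrative:

```python
def reflex_vacuum_agent(percept):
    """Condition-action rules applied to the current percept only."""
    location, status = percept          # e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'
```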

rational agent

a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

omniscient agent

knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality.

model-based agent

the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
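A minimal sketch of this idea: the agent folds each percept into a tracked internal state before choosing an action. The update_state and rules interfaces are assumptions, loosely paraphrasing the book's pseudocode:

```python
def make_model_based_agent(update_state, rules, initial_state):
    """Sketch of an agent that tracks internal state across percepts.

    Assumed interfaces: update_state(state, last_action, percept) ->
    new internal state, using a model of how the world evolves;
    rules(state) -> action.
    """
    state, last_action = initial_state, None

    def agent(percept):
        nonlocal state, last_action
        # Fold the new percept (and the effect of our last action)
        # into the tracked internal state.
        state = update_state(state, last_action, percept)
        last_action = rules(state)
        return last_action

    return agent
```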

task environments

which are essentially the "problems" to which rational agents are the "solutions." They can be characterized with a PEAS (Performance, Environment, Actuators, Sensors) description.
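For example, a PEAS description of an automated taxi, paraphrasing the book's well-known example, could be recorded as:

```python
# PEAS description of an automated taxi (paraphrasing the book's example).
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn", "display"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer",
                    "accelerometer", "engine sensors", "keyboard"],
}
```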

