AI CH2 - Agents and Environments


Architecture

A computing device with physical sensors and actuators. The architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated.

Goal Information

Describes situations that are desirable to the agent.

Atomic Representation

Each state of the world is indivisible: it has no internal structure.

Rational Agent Definition

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Fully observable

If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable.

Agent

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

Learning Agent

An agent that learns from its history and tries new experiments to improve itself. It has four components: a learning element, a performance element, a critic, and a problem generator.
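The four components can be sketched as a small class. This is a minimal illustration of how they fit together, not a standard API; the two-action "bandit" setup, the class name, and the 0.5 learning rate are all assumptions made for the example.

```python
class LearningAgent:
    """Illustrative wiring of the four learning-agent components."""

    def __init__(self):
        self.action_values = {"left": 0.0, "right": 0.0}  # learned estimates
        self.tries = {"left": 0, "right": 0}              # exploration bookkeeping

    def performance_element(self, percept):
        # Selects external actions using the current learned knowledge.
        return max(self.action_values, key=self.action_values.get)

    def critic(self, action, reward):
        # Feedback: how did the outcome compare with current expectations?
        return reward - self.action_values[action]

    def learning_element(self, action, feedback):
        # Makes improvements by updating the performance element's knowledge.
        self.tries[action] += 1
        self.action_values[action] += 0.5 * feedback

    def problem_generator(self):
        # Suggests exploratory actions that yield new, informative experiences.
        return min(self.tries, key=self.tries.get)

# Toy environment: "right" secretly pays off, "left" does not.
agent = LearningAgent()
rewards = {"left": 0.0, "right": 1.0}
for step in range(20):
    action = agent.problem_generator() if step % 4 == 0 else agent.performance_element(None)
    feedback = agent.critic(action, rewards[action])
    agent.learning_element(action, feedback)
print(agent.performance_element(None))  # prints "right"
```

After a few exploratory steps suggested by the problem generator, the critic's feedback lets the learning element steer the performance element toward the rewarding action.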

Model-Based Agent

An agent that uses a model of the world, meaning it keeps track of how the world around it evolves and how its own actions affect the world.

Percept Sequence

An agent's percept sequence is the complete history of everything the agent has ever perceived.

Cooperative multi-agent environment

An environment in which an agent's actions that maximize its own performance measure also (at least partially) maximize the performance measures of the other agents; in taxi driving, for example, avoiding collisions benefits all agents.

Competitive multi-agent environment

An environment in which one agent maximizing its performance measure tends to minimize the performance measure of another agent, as in chess.

Utility Function

An internalization of an agent's performance measure. If the internal utility function and the external performance measure are in agreement, then an agent that chooses actions to maximize its utility will be rational according to the external performance measure.
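A utility-maximizing agent can be sketched as choosing the action with maximum expected utility. The route names, probabilities, and utility numbers below are made up for illustration.

```python
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

def best_action(action_outcomes):
    # Pick the action whose outcome distribution has maximum expected utility.
    return max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))

action_outcomes = {
    "safe_route": [(1.0, 70)],                # certain, moderate utility
    "fast_route": [(0.9, 100), (0.1, -50)],   # EU = 0.9*100 + 0.1*(-50) = 85
}
print(best_action(action_outcomes))  # prints "fast_route"
```

If the utility numbers track the external performance measure, maximizing this internal quantity is rational by that external measure.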

Omniscient Agent

An omniscient agent knows the actual outcome of its actions and can act accordingly, but omniscience is impossible in reality.

What is rational depends on what?

- The performance measure that defines the criterion of success.
- The agent's prior knowledge of the environment.
- The actions that the agent can perform.
- The agent's percept sequence to date.

Factored Representation

A factored representation splits up each state into a fixed set of variables or attributes, each of which can have a value.
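A factored state can be sketched as a plain dict of variables; the variable names (`location`, `fuel`, `oil_warning`) are illustrative, not canonical.

```python
# A factored state: a fixed set of variables, each with a value.
state = {
    "location": (3, 7),    # grid cell
    "fuel": 0.42,          # tank fraction
    "oil_warning": False,  # boolean attribute
}

def differs(s1, s2):
    # Factored states can be compared variable by variable,
    # something an atomic (indivisible) state does not allow.
    return {k for k in s1 if s1[k] != s2[k]}

later = dict(state, fuel=0.35)
print(differs(state, later))  # prints {'fuel'}
```

The per-variable comparison is the practical payoff over an atomic representation: two states can be recognized as "almost the same".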

Rational Agent

A rational agent is one that does the right thing—conceptually speaking, every entry in the table for the agent function is filled out correctly.

Static vs. Dynamic

If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent.

Semidynamic

If the environment itself does not change with the passage of time but the agent's performance score does, then we say the environment is semidynamic.

Deterministic vs. Stochastic

If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic.

Performance Measure

Evaluates any given sequence of environment states; if the sequence is desirable, then the agent has performed well. This notion of desirability is what the performance measure captures.

Episodic Task Environment

In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes. Many classification tasks are episodic. For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions; moreover, the current decision doesn't affect whether the next part is defective.

Sequential Task Environments

In sequential environments, on the other hand, the current decision could affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences. Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.

Agent Program

Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct. The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system.

Agent Program

Takes the current percept as input and maps it to an action. (The agent function, by contrast, takes the entire percept history.)

Agent Function

Mathematically speaking, we say that an agent's behavior is described by the agent function that maps any given percept sequence to an action.
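The agent-function idea can be made concrete as a lookup table indexed by the whole percept sequence (the "table-driven agent"); the tiny vacuum-world table below is illustrative.

```python
def table_driven_agent(table):
    percepts = []                        # percept sequence so far

    def agent(percept):
        percepts.append(percept)
        return table[tuple(percepts)]    # the full history indexes the table

    return agent

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = table_driven_agent(table)
print(agent(("A", "Dirty")))   # prints "Suck"
print(agent(("A", "Clean")))   # prints "Right"
```

Such a table grows exponentially with the length of the percept sequence, which is why practical agent programs compute actions rather than look them up.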

Known vs Unknown

Refers to whether the agent (or its designer) knows the "laws" of the environment, that is, the outcomes (or outcome probabilities) of its actions.

Learning Element

Responsible for making improvements

Performance Element

Responsible for selecting external actions. Takes in percepts and decides on actions.

Problem Generator

Responsible for suggesting actions that will lead to new and informative experiences.

Critic

Provides feedback to the learning element on how the agent is doing with respect to a fixed performance standard; the learning element uses this feedback to determine how the performance element should be modified to do better in the future.

Structured Representations

States include objects, each of which may have attributes of its own as well as relationships to other objects.

Task Environment

Task environments are essentially the "problems" to which rational agents are the "solutions." Performance measure, the environment, and the agent's actuators and sensors are grouped under the task environment (PEAS description).

Internal State

Information the agent maintains internally that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.

Discrete vs. Continuous

The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.

Simple Reflex Agent

These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
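A simple reflex agent is just an ordered list of condition-action rules applied to the current percept; the vacuum-world rules below are illustrative.

```python
# Condition-action rules: (condition over the CURRENT percept, action).
RULES = [
    (lambda p: p["status"] == "Dirty", "Suck"),
    (lambda p: p["location"] == "A", "Right"),
    (lambda p: p["location"] == "B", "Left"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):     # first matching rule fires
            return action
    return "NoOp"

print(simple_reflex_agent({"location": "A", "status": "Dirty"}))  # prints "Suck"
print(simple_reflex_agent({"location": "B", "status": "Clean"}))  # prints "Left"
```

Note that nothing persists between calls: the agent ignores the percept history entirely, which is exactly what limits it in partially observable environments.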

Goal-based Agents

These agents think about what future actions should be taken to achieve a goal.

Number of agents

Determined by how many entities in the environment must be treated as agents, that is, entities whose behavior is best described as maximizing a performance measure whose value depends on other agents' behavior.

Utility

This term describes the "happiness" of an agent based on how well it achieves its goals.

Uncertain Environment

We say an environment is uncertain if it is not fully observable or not deterministic.

Percept

We use the term percept to refer to the agent's perceptual inputs at any given instant.

Condition-action Rule

When a condition occurs, take an action. For example, if car-in-front-is-braking then initiate-braking.
