Chapter 2: Intelligent Agents


Real-world environment

Partially observable, stochastic, sequential, dynamic, continuous, multiagent

Single agent (vs multiagent)

Agent operating by itself in an environment

Strategic

Deterministic except for the actions of other agents

Discrete (vs continuous)

A limited number of distinct, clearly defined percepts and actions. The chess environment has a finite number of distinct states and a discrete set of percepts and actions; taxi driving is a continuous-state and continuous-time problem

Simple reflex agent

Actions are selected on the basis of the current percept alone, ignoring the rest of the percept history
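
A minimal sketch of this idea in Python, using the two-square vacuum world as the environment; the percept format ("A", "Dirty") and the rules are illustrative assumptions, not code from the chapter.

```python
def simple_reflex_vacuum_agent(percept):
    """Pick an action from the current percept only; no percept history is kept."""
    location, status = percept          # assumed percept format, e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:                               # location == "B"
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
```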

Episodic (vs sequential)

The agent's experience is divided into episodes in which it perceives and then performs a single action; the choice of action depends only on the episode itself

Fully observable (vs partially observable)

An agent's sensors give it access to the complete state of the environment at each point in time

Goal-based agent

Works much like a model-based reflex agent, but it can also think ahead as well as keep track of the past: "What would happen if I took this action in the environment?" Its goals are what primarily determine the next action
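
A rough sketch of that "think ahead" step, assuming hypothetical transition_model and goal_test callables supplied by the caller; a real goal-based agent would search or plan over multi-step action sequences rather than single actions.

```python
def goal_based_agent(state, actions, transition_model, goal_test):
    """Return an action predicted to reach the goal, or None if no single action does."""
    for action in actions:
        predicted = transition_model(state, action)   # "what would happen if I did this?"
        if goal_test(predicted):
            return action
    return None

# Example: reach state 3 from state 0 by choosing among moves +1, +2, +3.
print(goal_based_agent(0, [1, 2, 3], lambda s, a: s + a, lambda s: s == 3))  # -> 3
```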

Static (vs dynamic)

The environment is unchanged while the agent is deliberating. Semi-dynamic: the environment itself does not change with the passing of time, but the agent's performance score does. Taxi driving is dynamic, chess with a clock is semi-dynamic, and crossword puzzles are static

Environment Types

Fully observable (vs partially observable), deterministic (vs stochastic), episodic (vs sequential), static (vs dynamic), discrete (vs continuous), single agent (vs multiagent)

Model-based reflex agent

Has memory: it keeps track of past actions and their effects, and uses this internal state together with the current percept to determine the next action
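
A sketch of that structure, assuming an update_state callable (the agent's model of how the world evolves) and a list of (condition, action) rules; both names are placeholders, not an established API.

```python
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = None                  # internal memory of the world
        self.last_action = None
        self.update_state = update_state   # model: (state, last_action, percept) -> new state
        self.rules = rules                 # list of (condition, action) pairs

    def __call__(self, percept):
        # Fold the new percept and the last action into the remembered state.
        self.state = self.update_state(self.state, self.last_action, percept)
        # Fire the first rule whose condition holds in the updated state.
        self.last_action = next(
            (action for condition, action in self.rules if condition(self.state)), None
        )
        return self.last_action
```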

Agent Types

Simple reflex agent, model-based reflex agent, goal-based agent, utility-based agent

Deterministic (vs stochastic)

The next state of the environment is completely determined by the current state and the action carried out by the agent
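
A small illustration of the contrast (not from the text): given the same state and action, a deterministic environment always yields the same next state, while a stochastic one does not.

```python
import random

def deterministic_step(state, action):
    return state + action                     # same inputs always give the same next state

def stochastic_step(state, action):
    noise = random.choice([-1, 0, 1])         # the environment adds an unpredictable effect
    return state + action + noise
```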

Performance Measure

An objective criterion for success of an agent's behavior

PEAS (Performance measure, Environment, Actuators, Sensors)

Characterizes the problem an agent must solve
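
One way to write a PEAS description down as data; the dataclass itself is just an illustration, and the entries follow the standard automated-taxi example.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
)
```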

Agent Function

Maps percept sequences to actions
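
The most literal (and impractical) way to realize an agent function is a lookup table indexed by the entire percept sequence; the table entries below are made-up vacuum-world placeholders.

```python
percepts = []                                  # the percept sequence seen so far
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")  # map the whole sequence to an action

print(table_driven_agent(("A", "Clean")))      # -> Right
```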

Agent Program

The job of AI is to design the agent program so that it implements the agent function

Utility-based agent

The utility function is an internalization of the performance measure. When thinking ahead, the agent evaluates each predicted outcome against the utility function and chooses the action it would be happiest with
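
A one-line sketch of that selection step, assuming hypothetical transition_model and utility callables; a real utility-based agent maximizes *expected* utility over uncertain outcomes, which this simplification ignores.

```python
def utility_based_agent(state, actions, transition_model, utility):
    """Choose the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(transition_model(state, a)))

# Example: with utility -|state|, the best single move from state 5 is -3 (ends nearest 0).
print(utility_based_agent(5, [-3, 0, 2], lambda s, a: s + a, lambda s: -abs(s)))  # -> -3
```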

