Chapter 2 - Intelligent Agents

Percept Sequence

Complete history of everything the agent has ever perceived

Deterministic vs Non-deterministic

Deterministic - the next state of the environment is completely determined by the current state and the action executed by the agent
Non-deterministic - subsequent states are not completely determined by the current state and the agent's actions (most real, complex situations - traffic is unpredictable)

Discrete vs Continuous

Discrete - a finite number of states and a discrete set of percepts and actions
Continuous - continuous state and continuous time problems; actions are continuous (taxi driving - the steering angle)

Static vs Dynamic

Dynamic - the environment can change while the agent is deliberating (taxi driving)
Static - the environment stays constant while the agent is deliberating (crossword)
Semidynamic - the environment itself is constant, but the agent's performance score changes over time (chess played with a clock)
Static environments are easier for agents (less time pressure); in a dynamic environment, indecision (dithering) counts as a decision to do nothing.

Circular flow of perceiving and acting:

Env -> Percepts -> Sensors -> Agent -> Actuators -> Actions -> Env
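
A minimal Python sketch of this loop, assuming hypothetical Agent and Environment classes (these names and methods are illustrative, not from any particular library):

```python
# Skeleton of the perceive-act cycle (hypothetical classes, for illustration only).

class Agent:
    def __init__(self):
        self.percept_sequence = []          # complete history of percepts seen so far

    def program(self, percept):
        """Map the percept sequence seen so far to an action."""
        self.percept_sequence.append(percept)
        return "NoOp"                       # placeholder policy

class Environment:
    def percept(self):
        """What the agent's sensors report about the current state."""
        return None

    def execute(self, action):
        """Apply the agent's action via its actuators, changing the environment state."""
        pass

def run(agent, env, steps=10):
    # Env -> Percepts -> Sensors -> Agent -> Actuators -> Actions -> Env
    for _ in range(steps):
        percept = env.percept()             # sensors
        action = agent.program(percept)     # agent deliberates
        env.execute(action)                 # actuators act on the environment
```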

Episodic vs Sequential

Episodic - the agent's experience is divided into atomic episodes; in each episode the agent receives a percept and performs a single action, and subsequent episodes do not depend on the actions taken in previous episodes (assembly line)
Sequential - current decisions can affect future decisions; short-term actions can have long-term consequences (chess)
Episodic is simpler than sequential because the agent does not need to think ahead.

Rationality Criterion in a sentence

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
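
The "expected to maximize" part can be read as an argmax over actions; here is a toy sketch in which the outcome model and performance scores are invented purely for illustration:

```python
# Toy sketch of "select the action expected to maximize the performance measure".
# The outcome probabilities and scores below are made-up numbers, not from the text.

def expected_performance(action, outcome_probs, performance):
    # Sum over possible outcomes of P(outcome | action) * performance(outcome)
    return sum(p * performance[outcome]
               for outcome, p in outcome_probs[action].items())

def rational_choice(actions, outcome_probs, performance):
    return max(actions, key=lambda a: expected_performance(a, outcome_probs, performance))

actions = ["Suck", "Right"]
outcome_probs = {                        # P(resulting state | action), assumed values
    "Suck":  {"clean": 0.9, "dirty": 0.1},
    "Right": {"clean": 0.2, "dirty": 0.8},
}
performance = {"clean": 1, "dirty": 0}   # performance measure of each resulting state

print(rational_choice(actions, outcome_probs, performance))  # -> "Suck"
```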

Fully Observable vs Partially Observable

Fully - the agent's sensors give it access to the complete state of the environment; the sensors detect all aspects relevant to the choice of action (relevance depends on the performance measure)
Partially - noisy and inaccurate sensors, or parts of the state are missing from the sensor data (short-range sensors)
Unobservable - no sensors at all

Percept

Content an agent's sensors are perceiving

Task Environment Example: Image Analysis

1) Fully Observable 2) Single-Agent 3) Deterministic 4) Episodic 5) Semidynamic 6) Continuous

Example of Vacuum Percept Sequence

(A, Clean) - Right; (A, Dirty) - Suck; (B, Clean) - Left; (B, Dirty) - Suck; (A, Clean), (A, Clean) - Right; ... etc.
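
A simple reflex agent that would produce this percept-action table might look like the following sketch (locations A/B and the percept tuple format are taken from the card above; the function name is mine):

```python
# Simple reflex vacuum agent reproducing the table above (a sketch, not the book's exact code).

def reflex_vacuum_agent(percept):
    location, status = percept               # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:                                     # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Clean")))    # Right
print(reflex_vacuum_agent(("A", "Dirty")))    # Suck
print(reflex_vacuum_agent(("B", "Clean")))    # Left
print(reflex_vacuum_agent(("B", "Dirty")))    # Suck
```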

Task Environment Example: Chess with a clock

1) Fully Observable 2) Multi-Agent 3) Deterministic 4) Sequential 5) Semidynamic 6) Discrete

Task Environment Example: Crossword Puzzle

1) Fully Observable 2) Single Agent 3) Deterministic 4) Sequential 5) Static 6) Discrete

Task Environment Example: Poker

1) Partially Observable 2) Multi-Agent 3) Stochastic 4) Sequential 5) Static 6) Discrete

Agent

Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

Performance Measure - Rationality criteria #1

Defines the criterion for success

Single-agent vs Multi-agent

An entity whose behaviour is best described as maximizing a performance measure whose value depends on another agent's behaviour is itself treated as an agent.
Single-agent - one agent (crossword puzzle)
Multi-agent - many agents (chess - the opponent is an agent)

Rationality is the same as Omniscience (T/F)

False

Rational agents must ___ information and ___ as much as possible from what they perceive

Gather, Learn

Known vs Unknown

Known - the agent's (or designer's) state of knowledge about the "laws of physics" of the environment; the outcomes of all actions are given
Unknown - the agent has to learn how the environment works in order to make good decisions

Omniscient Agent

Knows Actual Outcome of actions

Information gathering

Part of rationality and means doing actions in order to modify future percepts

Intelligent Agent

Receives percepts and performs actions

Sequence of Actions by the agent leads to ...

Sequence of States for the environment

Rest of Chapter 2

Wednesday Content

Rationality maximizes ___ performance, while perfection maximizes ___ performance

expected, actual

A rational agent is one that does

the right thing

Task Environment Example: Taxi Driving

1) Partially Observable 2) Multi-Agent 3) Stochastic 4) Sequential 5) Dynamic 6) Continuous

Task Environment Example: Medical Diagnosis

1) Partially Observable 2) Single-Agent 3) Stochastic 4) Sequential 5) Dynamic 6) Continuous

Four Criteria for Rationality

1) Performance Measure 2) Prior knowledge of environment 3) Actions an agent can perform 4) Percept sequence to date

Task Environment Dimensions and Properties

Observable - fully vs partially
Agents - single vs multi
Deterministic - deterministic vs non-deterministic (stochastic)
Episodic - episodic vs sequential
Static - static vs dynamic vs semidynamic
Discrete - discrete vs continuous
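
The six dimensions can be recorded in a small data structure; a hypothetical sketch (the field names are my own, not standard), using the chess-with-a-clock values from the card above:

```python
# Hypothetical record of the six task-environment dimensions (field names are assumptions).
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    observable: str   # "fully" or "partially"
    agents: str       # "single" or "multi"
    determinism: str  # "deterministic" or "stochastic"
    episodes: str     # "episodic" or "sequential"
    dynamics: str     # "static", "dynamic", or "semidynamic"
    states: str       # "discrete" or "continuous"

chess_with_clock = TaskEnvironment("fully", "multi", "deterministic",
                                   "sequential", "semidynamic", "discrete")
print(chess_with_clock)
```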

PEAS Description of Taxi Driver

Performance measure - safe, fast, legal
Environment - roads, traffic, police
Actuators - steering, accelerator, brake
Sensors - speedometer, GPS, radar
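
A tiny sketch of this PEAS description written out as a Python dict (the structure and variable name are my own illustration):

```python
# PEAS description of the taxi driver from the card above, as a plain dict (illustrative only).
taxi_peas = {
    "Performance measure": ["safe", "fast", "legal"],
    "Environment": ["roads", "traffic", "police"],
    "Actuators": ["steering", "accelerator", "brake"],
    "Sensors": ["speedometer", "GPS", "radar"],
}

for component, items in taxi_peas.items():
    print(f"{component}: {', '.join(items)}")
```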

Task Environment Specifications (PEAS - stands for?)

Performance measure, Environment, Actuators, Sensors

An agent Lacks Autonomy if and only if

the agent relies on the prior knowledge of its designer rather than on its own percepts

