Lecture 2 - Artificial Intelligence - Intelligent Agents

agent =

architecture + program
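
A minimal sketch of this split (all names, the stub percept, and the vacuum-style program below are illustrative, not from the lecture): the architecture gathers percepts and executes actions, while the program maps each percept to an action.

    def reflex_vacuum_program(percept):
        # An example agent program: maps the current percept to an action.
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    class Architecture:
        # The machinery the program runs on: sensors, actuators, compute.
        def __init__(self, program):
            self.program = program            # agent = architecture + program

        def sense(self):
            return ("A", "Dirty")             # stub; real sensors would go here

        def actuate(self, action):
            print("executing", action)

        def step(self):
            percept = self.sense()
            action = self.program(percept)    # the program chooses the action
            self.actuate(action)

    Architecture(reflex_vacuum_program).step()    # -> executing Suck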

Factored representations underlie

constraint satisfaction algorithms, propositional logic, planning, Bayesian networks, and machine learning algorithms

The most effective way to handle ___________ is for the agent to keep track of the part of the world it can't see now

partial observability - the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.

An agent's choice of action at any given instant can depend on the entire _________ observed to date, but not on anything it has not perceived

percept sequence

A ___________ might have cameras and infrared range finders for sensors and various motors for actuators

robotic agent

4 basic kinds of agent programs

1. Simple reflex agents 2. Model-based reflex agents 3. Goal-based agents 4. Utility-based agents

Discrete vs Continuous applies to

- the state of the environment
- the way time is handled
- the percepts and actions of the agent

Information gathering

- Performing actions in order to modify future percepts
- The agent must explore an unknown environment and learn as much as possible from what it perceives
- Exploration is required when the environment is initially unknown

Properties of task environment

1. Fully observable vs partially observable 2. Single agent vs multi agent 3. Dynamic vs static 4. Episodic vs sequential 5. Discrete vs continuous 6. Known vs unknown 7. Deterministic vs stochastic

What is rational at a given time depends on

1. The performance measure that defines the criterion of success 2. The agent's prior knowledge of the environment 3. The actions the agent can perform 4. The agent's percept sequence to date

Model-based reflex agents

1. We need some information about how the world evolves independently of the agent. 2. We need some information about how the agent's own actions affect the world. This knowledge is called a model of the world; an agent that uses such a model is a model-based reflex agent.
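
A sketch of how this might look in code, assuming a hypothetical dictionary state and (condition, action) rule pairs; the update logic is an illustrative stand-in for a real world model.

    class ModelBasedReflexAgent:
        def __init__(self, rules):
            self.state = {}            # internal model of the current world
            self.last_action = None
            self.rules = rules         # list of (condition, action) pairs

        def update_state(self, percept):
            # Combine the old state, the effect of the last action, and
            # the new percept; a real model would be domain-specific.
            self.state["last_action"] = self.last_action
            self.state.update(percept)

        def __call__(self, percept):
            self.update_state(percept)
            # After updating the model, act like a reflex agent on the state.
            for condition, action in self.rules:
                if condition(self.state):
                    self.last_action = action
                    return action
            self.last_action = "NoOp"
            return "NoOp"

    agent = ModelBasedReflexAgent([(lambda s: s.get("status") == "Dirty", "Suck")])
    print(agent({"status": "Dirty"}))    # -> Suck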

Difference between a reflex agent and a goal-based agent

A reflex agent brakes when it sees brake lights. A goal-based agent can reason that if the car in front has its brake lights on, it will slow down, so braking is the action that achieves the goal of not hitting it.

Representing States - Structured representation

A state includes objects, each of which may have attributes of its own as well as relationships to other objects

Non-deterministic environment

Actions are characterized by their possible outcomes, but no probabilities are attached to them. Usually associated with performance measures that require the agent to succeed for all possible outcomes of its actions.

Percept

Agent's perceptual inputs at any given instant

Fully observable

Agent's sensors give it access to the complete state of the environment at each point in time

Goal-based agent - definition

An agent that keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will eventually lead to the achievement of its goals

Sequential Environment

Current decision could affect all future decisions

Representing States - Atomic representation

Each state of the world is indivisible, it has no internal structure

Performance Measure

Evaluates actions for any given sequence of environment states

Definition of rational agent

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Dynamic Environment

If the environment can change when the agent is deliberating

Semi-Dynamic Environment

If the environment itself does not change with the passage of time but the agent's performance score does

Deterministic

If the next state of the environment is completely determined by the current state and action executed by the agent

An agent can be omniscient in a small, finite, and closed system

Impossible in the real world

Utility-based agents - Definition

It uses a model of the world along with a utility function that measures its preferences among states of the world. It then chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of each outcome.
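
A small sketch of the expected-utility computation just described; the transition model and utility function below are made-up examples, not from the lecture.

    def expected_utility(outcomes, utility):
        # outcomes: list of (probability, resulting_state) pairs for one action.
        return sum(p * utility(state) for p, state in outcomes)

    def best_action(actions, transition_model, utility):
        # Pick the action whose expected utility is highest.
        return max(actions, key=lambda a: expected_utility(transition_model[a], utility))

    transition_model = {
        "safe":  [(1.0, 50)],              # one certain, mediocre outcome
        "risky": [(0.6, 100), (0.4, 0)],   # great outcome 60% of the time
    }
    utility = lambda state: state          # here a state's utility is just its value
    print(best_action(["safe", "risky"], transition_model, utility))
    # -> risky (expected utility 60 vs. 50)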

Static Environment

Not dynamic environment

Model-based reflex agents - definition

Keeps track of the current state of the world using an internal model. It chooses an action in the same way as the reflex agent.

Partially Observable

The agent's sensors are noisy or inaccurate, or parts of the state are simply missing from the sensor data

Known environment

The outcomes (or outcome probabilities, if the environment is stochastic) for all actions are given

Task Environment - PEAS

Problems to which rational agents are the solution. Specified by PEAS:
- Performance measure
- Environment
- Actuators
- Sensors

Learning Element

Responsible for making improvements

Performance Element

Responsible for selecting external actions

_____ and ____ are subfields of AI devoted to finding action sequences that achieve the agent's goals

Search and Planning

Simple reflex agent

Selects actions on the basis of the current percept, ignoring the rest of the percept history; works correctly only when the environment is fully observable
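
A sketch of a simple reflex agent as a set of condition-action rules; the rule contents and percept format are illustrative.

    RULES = [
        # condition-action ("if-then") rules, checked against the current percept
        (lambda p: p["status"] == "Dirty", "Suck"),
        (lambda p: p["location"] == "A", "Right"),
        (lambda p: p["location"] == "B", "Left"),
    ]

    def simple_reflex_agent(percept):
        # Chooses an action from the current percept alone; no history is kept.
        for condition, action in RULES:
            if condition(percept):
                return action
        return "NoOp"

    print(simple_reflex_agent({"location": "A", "status": "Dirty"}))   # -> Suck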

Goal-based agents

Sometimes it is not enough to know the current state of the environment to decide what to do. The agent needs some sort of goal information that describes situations that are desirable.

Representing States - Factored representation

Splits up each state into a fixed set of variables or attributes, each of which can have a value
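
The three representation styles in this set can be contrasted on a single world state; the vacuum-world-flavored encodings below are illustrative, not from the lecture.

    # Atomic: the state is a black box with no internal structure.
    atomic = "state_42"

    # Factored: a fixed set of variables/attributes, each with a value.
    factored = {
        "location": "A",
        "dirt_at_A": True,
        "dirt_at_B": False,
    }

    # Structured: objects with attributes, plus relationships between them.
    structured = {
        "objects": {"robot": {"charge": 0.8}, "dirt1": {"size": "small"}},
        "relations": [("at", "robot", "A"), ("at", "dirt1", "A")],
    }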

Autonomy

The agent does not rely solely on the prior knowledge of its designer but rather on its own percepts

Unknown environment

The agent has to learn how it works in order to make good decisions

Percept Sequence

The complete history of everything the agent has ever perceived

What does it mean to do the right thing?

When an agent is plunked down into an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well.

Agent function

abstract mathematical description - maps any given percept sequence to an action

Agent program

concrete implementation of agent function, running within some physical system

Agent program input

the current percept only, because nothing more is available from the environment

Agent function input

entire percept history
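
One way to make this contrast concrete is a table-driven sketch (the table entries below are illustrative): the agent function is the full table indexed by percept sequences, while the agent program receives only the current percept and must remember the sequence itself.

    percepts = []   # persistent memory: everything perceived so far

    # The agent FUNCTION, as a table from whole percept sequences to actions.
    TABLE = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
    }

    def table_driven_agent(percept):
        # The agent PROGRAM: gets the current percept, extends the history,
        # then looks up the entire sequence in the table.
        percepts.append(percept)
        return TABLE.get(tuple(percepts), "NoOp")

    print(table_driven_agent(("A", "Clean")))   # -> Right
    print(table_driven_agent(("B", "Dirty")))   # -> Suck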

Episodic Environment

An episodic task environment is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes.

Rationality maximizes ________, perfection maximizes________

expected performance, actual performance

A rational utility based agent chooses the action that maximizes the __________ of the action outcomes

expected utility

Omniscient agent

knows actual outcomes of actions and can act accordingly but is impossible in reality

All agents can improve the performance through

learning

Stochastic

Randomly determined; uncertainty about action outcomes is quantified in terms of probabilities

Structured representations underlie

relational databases and first-order logic, first-order probability models, knowledge-based learning, and natural language understanding

If the agent's actions need to depend on some or all of the percept sequence, then the agent will have to _________ .

remember the percepts

An agent is anything that can be viewed as perceiving its environment through ________ and acting upon that environment through ________

sensors, actuators

A ___________ receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets.

software agent

Fully observable environments are convenient because

the agent need not maintain any internal state to keep track of the world

Utility based agents

Try to maximize their own expected "happiness."
1. When there are conflicting goals, only some of which can be achieved, the utility function specifies the appropriate tradeoff.
2. When there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed against the importance of the goals.

An environment is _________ if it is partially observable or non-deterministic

uncertain

If an agent has no sensors at all, the environment is

unobservable

An agent's ________ is an internalization of the performance measure

utility function

It is better to design performance measures according to

what one actually wants in the environment rather than how one thinks the agent should behave

