Intelligent Agents
True/False: Agent's choice of action can depend on entire percept sequence
True
Agent ___________ and _____ in an environment
perceives and acts
agent
Agent is anything that perceives its environment through sensors and acts upon that environment through effectors.
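The perceive/act loop above can be sketched in a few lines of Python. This is a minimal illustration, not a definition from the deck: the percept and action names ("clear", "move", etc.) and the trivial agent program are made-up examples.

```python
# Minimal sketch of the agent/environment loop: the architecture feeds
# sensor percepts to the agent program and passes action choices to the
# effectors. Percept and action names here are hypothetical.

def simple_agent(percept):
    """A trivial agent program: maps a single percept to an action."""
    return "move" if percept == "clear" else "wait"

def run(agent, percepts):
    """The architecture: feed each percept to the program, collect the
    actions sent to the effectors."""
    return [agent(p) for p in percepts]

actions = run(simple_agent, ["clear", "blocked", "clear"])
```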
Deterministic vs. Stochastic
Deterministic: the next state of the environment is completely determined by the current state and the agent's action (e.g., processing times of the jobs are known). Stochastic: the outcome cannot be fully predicted (e.g., processing times of the jobs can vary/are unknown). (1/6 properties of environments)
Discrete vs. Continuous
Discrete: the environment has a finite number of distinct percepts and actions that can be performed within it; otherwise it is called a continuous environment. (1/6 properties of environments)
Safety is an example of which PEAS descriptor for a taxi?
*PERFORMANCE MEASURE (ie safe)* Environment (ie road) Actuators (ie steering) Sensors (ie camera) Other Perf. Measures include: fast, obey laws, reach destination, comfortable trip, maximize profits
Properties of Environments...(6)
1. Fully observable vs. partially observable - If sensors give access to complete state of environment 2. Deterministic vs. stochastic - If next state of environment is completely determined by current state and the action executed by the agent (can't predict environment in stochastic) 3. Episodic vs. sequential - Experience divided into atomic episodes (perceiving and acting) - Next episode does not depend on previous episodes 4. Static vs. dynamic - Environment does not change while agent is "thinking" 5. Discrete vs. continuous - Distinct, clearly defined percepts and actions (chess) 6. Single agent vs. multi-agent - Solving a puzzle is single agent - Chess is a competitive multi-agent environment
Basic Types of Agent Programs (5)
1. Simple reflex agents: Condition-action rules on current percept; environment must be fully observable 2. Model-based reflex agents: Maintain internal state about how the world evolves and how actions affect the world 3. Goal-based agents: Use goals and planning to help make decisions 4. Utility-based agents: Choose what makes the agent "happiest" 5. Learning agents: Make improvements from experience
Reflex, model-based, goal-based, utility-based, learning
Agent types
Percept
Agent's perceptual inputs at any given instant
Autonomy
Autonomous behavior - Behavior is determined by the agent's own experience; a rational agent should be autonomous. Non-autonomous behavior - If no use of percepts (only built-in knowledge), then the system has no autonomy; all of its designer's assumptions must hold. Examples: a clock (but consider a clock that detects and sets to an atomic clock, or adjusts to different time zones), certain animal behaviors.
Percept sequence
Complete history of everything agent has perceived
True/False: Specifying which action to take in response to any given percept sequence is a percept
False: Agent Function = Specifying which action to take in response to any given percept sequence Percept = Agent's perceptual inputs at any given instant
True/False: Agent's choice of action can depend on entire action sequence
False: Agent's choice of action can depend on entire *percept* sequence
True/False: Non-autonomous Behavior is determined by its own experience
False: Autonomous behavior - Behavior is determined by its own experience - Rational Agent should be autonomous Non-autonomous behavior - If no use of percepts (use only built-in knowledge), then system has no autonomy
True/False: Rationality = omniscience
False: Omniscient agent *knows actual outcome* of its actions and can *act accordingly* - Impossible in reality (though available in simulation) Rationality is concerned with *expected success* given what has been perceived - Can "explore" to gather more information
True/False: Rationality just depends on what we know about the environment and the actions the agent can perform.
False: While these are 2 things, there are also 2 more... Rationality depends on... - The performance measure that defines degree of success - Percept Sequence (everything seen so far) - What the agent knows about the environment - The actions that the agent can perform
For each possible percept sequence, do whatever action is expected to maximize its performance measure, using evidence provided by the percept sequence and any built-in knowledge
Ideal Rational Agent - Do actions in correct order
Rationality and Performance Measures lead to...
Ideal Rational Agent - Do actions in correct order
Rational VS Omniscient Agent
Omniscient agent *knows actual outcome* of its actions and can *act accordingly* - Impossible in reality (though available in simulation) Rational Agent is concerned with *expected success* given what has been perceived - Can "explore" to gather more information
Omniscient agent
Omniscient agent knows actual outcome of its actions and can act accordingly - Impossible in reality (though available in simulation)
Agent Percepts
Percept: Agent's perceptual inputs at any given instant Percept "sequence": Complete history of everything agent has perceived Agent's choice of action can depend on entire percept sequence
A way to evaluate the agent's success • Embodies the criterion for success of an agent's behavior
Performance Measure - Specifies numerical value for any environment history toward the goals - When to evaluate is also important (Timespan)
Consider an "automated taxi driver" (Total Recall) PEAS Description?
Performance Measure? - Safe, fast, obey laws, reach destination, comfortable trip, maximize profits Environment? - Roads, other traffic, pedestrians, weather, customers Actuators? - Steering, accelerator, brake, signal, horn, speak, display Sensors? - Cameras, microphone, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
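One way to keep a PEAS description organized is as a small data structure; a possible sketch, using the taxi entries from the card above (the `PEAS` class itself is an illustrative construct, not part of the deck):

```python
# Sketch: a PEAS description as a dataclass. Field contents mirror the
# automated-taxi example above (abbreviated).
from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance_measure=["safe", "fast", "obey laws", "reach destination"],
    environment=["roads", "other traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS"],
)
```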
PEAS Example Agent type: Satellite image analysis system
Performance Measures: Correct image classification Environment: Downlink from orbiting satellite Actuators: Display classification of scene Sensors: Color pixel arrays (cameras)
PEAS Example Agent type: Medical diagnosis system
Performance Measures: Healthy patient, minimize lawsuits/costs Environment: Patient, staff, hospital Actuators: Display questions, tests, diagnoses, treatments, referrals Sensors: Keyboard entry of symptoms, findings, patient's answers
Specifies numerical value for any environment history toward the goals
Performance measure - A way to evaluate the agent's success • Embodies the criterion for success of an agent's behavior • When to evaluate is also important (Timespan)
The road is an example of which PEAS descriptor for a taxi?
Performance measure (ie safe) *ENVIRONMENT (ie road)* Actuators (ie steering) Sensors (ie camera) Other Environment factors include: other traffic, pedestrians, weather, customers
Steering is an example of which PEAS descriptor for a taxi?
Performance measure (ie safe) Environment (ie road) *ACTUATORS (ie steering)* Sensors (ie camera) Other Actuators include: accelerator, brake, signal, horn, speak, display
A camera is an example of which PEAS descriptor for a taxi?
Performance measure (ie safe) Environment (ie road) Actuators (ie steering) *SENSORS (ie camera)* Other sensors include: microphone, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard
Structure of Intelligent Agents (2 parts)
Structure of Intelligent Agent = Architecture + Program 1. Architecture is the computing device • Makes sensor percepts available to the program • Runs the program • Feeds action choices to effectors 2. Program • Implements agent function mapping of percepts to actions The job of AI is to design Agent Programs - Though much current emphasis on embodiment
Program
Structure of Intelligent Agent = Architecture + Program Program • Implements agent function mapping of percepts to actions
True/False: The most complex environment is one that is inaccessible, non-deterministic, non-episodic, dynamic and continuous.
TRUE: Environments that are partially observable, stochastic, sequential, dynamic, continuous, and multi-agent are hardest (the real world)
Ideal Mapping of Percepts to Actions
Table of actions in response to each possible percept sequence BUT - Simple table representation can be huge • For chess, the table would have 35^100 entries! - Takes too long to build the table => define the mapping with a program instead
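The table representation can be sketched directly: the lookup key is the entire percept sequence so far, which is exactly why the table explodes in size. The table entries below are a made-up vacuum-world toy, for illustration only.

```python
# Sketch of a table-driven agent: the table maps each possible percept
# SEQUENCE (not just the current percept) to an action. Entries are
# illustrative; for chess the table would need ~35^100 entries, which is
# why real agents define the mapping with a program instead.

TABLE = {
    ("dirty",): "suck",
    ("dirty", "clean"): "right",
    ("dirty", "clean", "dirty"): "suck",
}

percept_sequence = []  # complete history of everything perceived

def table_driven_agent(percept):
    percept_sequence.append(percept)
    return TABLE.get(tuple(percept_sequence), "no-op")

a1 = table_driven_agent("dirty")   # looks up ("dirty",)
a2 = table_driven_agent("clean")   # looks up ("dirty", "clean")
```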
The "problems" to which rational agents are the "solutions"
Task environment
Generic Agent Diagram
The agent uses its sensors (eye) to perceive its environment and its effectors (X) to act upon the environment. Environment ---> percepts ---> [eye = SENSOR] ---> AGENT ---> [X = effectors] ---> actions ---> Environment
Partially Observable Environment
The agent's sensors do not have complete access to the state of the task environment
True/False: Rationality ≠ omniscience
True! Omniscient agent *knows actual outcome* of its actions and can *act accordingly* - Impossible in reality (though available in simulation) Rationality is concerned with *expected success* given what has been perceived - Can "explore" to gather more information
True/False: A rational action is measurable
True: Need a way to measure success => performance measure embodies the criterion for success of an agent's behavior
True/False: Must specify the setting for intelligent agent design
True: Task environments: The "problems" to which rational agents are the "solutions" • Multiple flavors of task environments -> Directly affects the design of the agent • PEAS description: (P)erformance Measure (E)nvironment (A)ctuators (S)ensors
True/False: A rational agent is one that does the right thing (to be most successful)
True: ie) every entry in the function table is filled out correctly
effectors
anything an agent uses to act upon its environment
sensors
anything an agent uses to perceive its environment
Deterministic Environment
the next state is completely determined by the current state and the agent's action (e.g., processing times of the jobs are known) - opposite of stochastic, where outcomes vary or are unknown - property of environment
Agent = __________ + Program
Architecture
What does PEAS description stand for
(P)erformance Measure (E)nvironment (A)ctuators (S)ensors
Agent design: PEAS
(P)erformance measure (E)nvironment (A)ctuators (S)ensors
Sensors and Effectors for Robot Agents
- Sensors: cameras, infrared range finders - Effectors: various motors
Sensors and Effectors for humans
- Sensors: eyes, ears, etc. - Effectors: hands, legs, mouth, etc.
episodic vs sequential
- Sequential environments require memory of past actions to determine the next best action. - Episodic environments are a series of one-shot actions, and only the current (or recent) percept is relevant. (1/6 properties of environments)
Rationality Depends on...(4)
1. The performance measure that defines degree of success 2. Percept Sequence (everything seen so far) 3. What the agent knows about the environment 4. The actions that the agent can perform
rational agent
A rational agent is one that does the right thing (to be most successful) ie) every entry in the function table is filled out correctly
Performance measure
A way to evaluate the agent's success • Embodies the criterion for success of an agent's behavior Specifies numerical value for any environment history toward the goals
What type of Agent can "explore" to gather more information? A. Omniscient B. Rational C. Vacuum D. Biological
B. A rational agent is concerned with *expected success* given what has been perceived and can "explore" to gather more information. (An omniscient agent already knows the actual outcome of its actions, so it has nothing to explore - and omniscience is impossible in reality, though available in simulation.)
Implements the agent function for an agent
Agent Program
Environment Examples: Taxi Driver 6 Properties of Environment
Automated TaxiDriver 1. Fully observable vs. *partially observable* 2. Deterministic vs. *stochastic* 3. Episodic vs. *sequential* 4. Static vs. *dynamic* 5. Discrete vs. *continuous* 6. Single Agent vs. *multi-agent*
Simple reflex agents
Condition-action rules on current percept; Environment must be fully observable
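A condition-action rule set can be sketched as a list of (condition, action) pairs checked against the current percept only; the vacuum-world rules below are illustrative assumptions, not from the deck:

```python
# Sketch of a simple reflex agent: condition-action rules applied to the
# CURRENT percept only (so the environment must be fully observable).
# The rules model a toy vacuum world with locations A and B.

RULES = [
    (lambda p: p["status"] == "dirty", "suck"),
    (lambda p: p["location"] == "A", "right"),
    (lambda p: p["location"] == "B", "left"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no-op"

action = simple_reflex_agent({"location": "A", "status": "dirty"})
```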
Environment Examples: CrossWord Puzzle 6 Properties of Environment
Crossword puzzle 1. *Fully observable* vs. partially observable 2. *Deterministic* vs. stochastic 3. Episodic vs. *sequential* 4. *Static* vs. dynamic 5. *Discrete* vs. continuous 6. *Single Agent* vs. multi-agent
Agent environments
Environments that are partially observable, stochastic, sequential, dynamic, continuous, and multi-agent are hardest (real world)
Ideal Rational Agent
For each possible percept sequence, do whatever action is expected to maximize its performance measure, using evidence provided by the percept sequence and any built-in knowledge - Do actions in correct order
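"Do whatever action is expected to maximize its performance measure" can be written as an argmax over expected values. The outcome model and scores below are invented numbers purely to make the computation concrete:

```python
# Sketch of expected-performance maximization. Each action maps to a list
# of (probability, performance_score) outcomes; the rational choice is
# the action with the highest expected score. All numbers are made up.

OUTCOMES = {
    "safe-route": [(1.0, 8)],                # certain, decent outcome
    "fast-route": [(0.7, 10), (0.3, 0)],     # great, but risky
}

def expected_performance(action):
    return sum(p * score for p, score in OUTCOMES[action])

def rational_choice(actions):
    return max(actions, key=expected_performance)

best = rational_choice(["safe-route", "fast-route"])
```

Here the safe route wins (expected 8.0 vs. 7.0), even though the fast route has the better best case: rationality is about *expected* success, not actual outcomes.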
Fully observable vs. partially observable
Fully Observable: An agent's sensors give it access to the complete state of the environment at each point in time. Partially Observable: The agent's sensors do not have complete access to the state of the task environment (1/6 properties of environments)
Agent program
Implements the agent function for an agent - Runs on the agent architecture
Model-based reflex agents
Maintain internal state about how the world evolves and how actions affect the world
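The difference from a simple reflex agent is the internal state updated from each percept; a possible sketch, where the "model" is just a made-up memory of which squares were seen dirty:

```python
# Sketch of a model-based reflex agent: internal state tracks parts of
# the world not currently visible, updated from each percept. The toy
# "model" remembers which squares were last seen dirty.

class ModelBasedAgent:
    def __init__(self):
        self.believed_dirty = set()  # internal state

    def act(self, percept):
        loc, status = percept
        # 1. Update the world model from the current percept.
        if status == "dirty":
            self.believed_dirty.add(loc)
        else:
            self.believed_dirty.discard(loc)
        # 2. Decide using the current percept AND the internal state.
        if status == "dirty":
            return "suck"
        if self.believed_dirty:
            return "go-to-" + sorted(self.believed_dirty)[0]
        return "no-op"

agent = ModelBasedAgent()
a1 = agent.act(("A", "dirty"))   # current percept says dirty
a2 = agent.act(("B", "clean"))   # remembers A is believed dirty
```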
Agent's perceptual inputs at any given instant
Percept
Complete history of everything agent has perceived
Percept sequence
Agent = Architecture + _________
Program
The job of AI is to design Intelligent Agent __________
Programs
Agent types
Reflex, model-based, goal-based, utility-based, learning
- Sensors: eyes, ears, etc. - Effectors: hands, legs, mouth, etc.
Sensors and Effectors for Human Agents
- Sensors: cameras, infrared range finders - Effectors: various motors
Sensors and Effectors for Robot Agents
Agent function
Specifying which action to take in response to any given percept sequence - abstract mathematical description - Maps any given percept sequence to an action
Static vs. Dynamic
Static: the environment can change only through the agent's own actions; it does not change while the agent is deliberating. Dynamic: other processes are operating on it; the environment changes while the agent is "thinking". (1/6 properties of environments)
Architecture
Structure of Intelligent Agent = Architecture + Program Architecture is the computing device • Makes sensor percepts available to the program • Runs the program • Feeds action choices to effectors
Task environments
The "problems" to which rational agents are the "solutions" • Multiple flavors of task environments -> Directly affects the design of the agent • PEAS description
Goal-based agent
Use goals and planning to help make decision
Utility-based agent
What makes the agent "happiest"
Single vs. Multi-Agent
When there is only one agent in a defined environment, it is named the Single-Agent System (SAS). This agent acts and interacts only with its environment. - Solving a puzzle is single agent If there is more than one agent and they interact with each other and their environment, the system is called the Multi-Agent System - Chess is competitive multi-agent environment (1/6 properties of environments)
What is the rational action for a particular circumstance?
Whichever action that will cause the agent to be most successful - given what has been seen/know Need a way to measure success: performance measure - "Whichever action maximizes the expected value of the performance measure given the percept sequence to date"
anything that perceives its environment through sensors and acts upon that environment through effectors.
agent
Maps any given percept sequence to an action
agent function - abstract mathematical description specifying which action to take in response to any given percept sequence
anything an agent uses to act upon its environment
effectors
Ideal agent takes the action that is...
expected to maximize the performance measure, given its percepts
Learning agents
makes improvements
A _______ _______ is one that does the right thing (to be most successful)
rational agent
anything an agent uses to perceive its environment
sensors
Ideal Agent
takes the action that is expected to maximize the performance measure, given its percepts