AI Section 2: Intelligent Agents

Agent

Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
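
As a sketch, this sense-decide-act loop can be written as a minimal interface; the class, method, and `environment.apply` names below are illustrative assumptions, not from the text:

```python
class Agent:
    """Maps percepts to actions: sensing in, acting out."""

    def perceive(self, environment):
        # Sensors: read whatever the agent can observe of the environment.
        raise NotImplementedError

    def act(self, percept):
        # Decide on an action based on what was perceived.
        raise NotImplementedError


def run(agent, environment, steps):
    # The basic agent loop: sense, decide, act (via actuators), repeat.
    for _ in range(steps):
        percept = agent.perceive(environment)
        action = agent.act(percept)
        environment.apply(action)  # hypothetical actuator hook
```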

Taxi Driving (Static/Dynamic)

Dynamic

Performance Measure

Evaluates the behavior of an agent in an environment

Known vs Unknown

Known: the outcomes (or outcome probabilities) for all actions are understood. Unknown: the agent will have to learn how the environment works in order to make good decisions.

Goal-Based agents

In addition to the current state of the environment, a goal-based agent uses some sort of goal information that describes situations that are desirable. This type of decision making involves consideration of the future, both "What will happen if I do x?" and "Will that make me happy?" Reflex agents map directly from percepts to actions, while goal-based agents use knowledge that can be modified to change their decision making.
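
A minimal sketch of that one-step lookahead, assuming the caller supplies `actions`, `result`, and `goal_test` (all hypothetical names):

```python
def goal_based_action(state, actions, result, goal_test):
    """Choose an action by considering the future one step ahead."""
    for action in actions(state):
        # "What will happen if I do x?"
        predicted = result(state, action)
        # "Will that make me happy?"
        if goal_test(predicted):
            return action
    return None  # no single action reaches the goal; search would be needed
```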

Fully Observable vs Partially Observable

An environment is fully observable if the agent's sensors give it access to the complete state of the environment at each point in time, that is, the sensors detect all aspects that are relevant to the choice of action. In that case the agent need not maintain any internal state to keep track of the world. An environment is partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data. For instance, if the vacuum agent has only a local dirt sensor, it cannot tell whether there is dirt in the other square. If the agent has no sensors at all, the environment is unobservable.

Deterministic vs. Stochastic

An environment is deterministic if its next state is completely determined by the current state and the action executed by the agent. It is stochastic if the next state is affected by forces outside the control of the agent, with the uncertainty quantified in terms of probabilities.

Model-based reflex agents

These agents keep track of the part of the world they can't currently see. They model their environment by maintaining an internal state that depends on the percept history. Exactly how models and states are represented varies widely depending on the environment and technology. It is seldom possible for the agent to determine the current state of a partially observable environment exactly.
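
A sketch in the spirit of the textbook's model-based reflex agent pseudocode; `update_state` (the agent's model of how the world evolves) and `rules` (a state-to-action mapping) are assumptions supplied by the caller:

```python
def make_model_based_reflex_agent(update_state, rules, initial_state):
    state = initial_state
    last_action = None

    def agent(percept):
        nonlocal state, last_action
        # Fold the new percept and the last action into the internal
        # state, using the model of how the world evolves.
        state = update_state(state, last_action, percept)
        # Then act by rule-matching on the (estimated) current state.
        last_action = rules[state]
        return last_action

    return agent
```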

Uncertain Environment combines two things?

Not fully observable or not deterministic

Rational Agent

One that does the right thing: conceptually speaking, every entry in the table for its agent function is filled out correctly.
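
The "table" is the one a table-driven agent would use, indexed by the entire percept sequence. A minimal sketch, with a hypothetical two-square vacuum-world fragment as the table:

```python
def make_table_driven_agent(table):
    """table maps percept sequences (tuples) to actions; the agent is
    rational only if every entry is filled out correctly."""
    percepts = []

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))

    return agent


# Hypothetical fragment for the two-square vacuum world:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # -> Right
```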

Task Environment

PEAS: Performance measure, Environment, Actuators, Sensors. For an automated taxi, for example: a safe, fast, legal trip (performance measure); roads, traffic, and pedestrians (environment); steering, accelerator, and brake (actuators); cameras, GPS, and speedometer (sensors).

Learning Agents

Performance element: takes in percepts and decides on actions. Learning element: improves the performance element. Critic: provides feedback on how the agent is doing and determines how the performance element should be modified to do better in the future. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
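
One way to wire the four components together, as a sketch; the component interfaces (`act`, `improve`, `suggest`) are assumptions, not a fixed API:

```python
class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element
        self.learning_element = learning_element
        self.critic = critic
        self.problem_generator = problem_generator

    def step(self, percept):
        # The critic scores how well the agent is doing...
        feedback = self.critic(percept)
        # ...and the learning element uses the feedback to modify the
        # performance element so it does better in the future.
        self.learning_element.improve(self.performance_element, feedback)
        # The problem generator may suggest an exploratory action that
        # leads to a new and informative experience.
        exploratory = self.problem_generator.suggest(percept)
        return exploratory or self.performance_element.act(percept)
```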

Chess when played with a clock (Static/Dynamic)

Semidynamic

Percept

Something perceived or sensed by an agent

solving a crossword puzzle (Static/Dynamic)

Static

Discrete vs Continuous

The distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. In a discrete environment these take on a finite (or countable) set of distinct values; in a continuous one they vary smoothly.

semidynamic environment

The environment itself does not change with the passage of time but the agent's performance score does.

Autonomy

The extent to which an agent relies on its own percepts rather than on the prior knowledge of its designer.

Static vs Dynamic

An environment is dynamic if it can change while the agent is deliberating; a static environment is simply the opposite, one that cannot.

Episodic vs Sequential

In an episodic task environment, the next episode does not depend on the actions taken in previous episodes; an agent that spots defective parts on an assembly line operates episodically. In a sequential task environment, the current decision could affect all future decisions; chess is sequential.

Taxi Driving (Discrete/Continuous)

The state of the environment: continuous. The way that time is handled: continuous. The percepts and actions of the agent: continuous.

Chess (Discrete/Continuous)

The state of the environment: discrete. The way that time is handled: discrete (without a clock). The percepts and actions of the agent: discrete.

Simple reflex Agent

These agents select actions on the basis of the current percept, ignoring the rest of the percept history. They operate using a collection of condition-action rules, which is essentially a series of if statements. They are appropriate in environments where the correct decision can be made on the basis of the current percept alone (i.e., the environment is fully observable).
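
The two-square vacuum cleaner is the standard illustration; as a sketch, its condition-action rules reduce to a few if statements over the current percept alone:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-square vacuum world."""
    location, status = percept  # the current percept is all it uses
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"


print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
```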

Nondeterministic

This type of environment is one in which the actions are characterized by their possible outcomes, but no probabilities are attached to them.

Utility-based agents

While goals provide only a crude binary distinction between goal met and goal not met, utility-based agents allow a more general performance measure. An agent's utility function is an internalization of the performance measure that can indicate the degree to which a goal is met.
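
A sketch of utility-based action selection under uncertainty: pick the action with the highest expected utility. `actions`, `outcomes`, and `utility` are hypothetical callables supplied by the caller:

```python
def utility_based_action(state, actions, outcomes, utility):
    """Choose the action maximizing expected utility.

    outcomes(state, action) yields (probability, next_state) pairs;
    utility(state) measures the degree to which the performance
    measure is satisfied, not just goal met / goal not met.
    """
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(state, action))

    return max(actions(state), key=expected_utility)
```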

Percept Sequence

The complete history of everything the agent has ever perceived.

