CSC 411 - Introduction to Artificial Intelligence Lecture 2 - Agents


Actuators

Devices an agent uses to interact with its environment.

Sensors

Devices an agent uses to perceive its environment and build up a percept history.

Continuous

Percepts and actions can take on a large, potentially infinite, range of values.

Discrete

A limited number of distinct, clearly defined percepts and actions.

Performance Measure (Utility Function)

A) An objective criterion for success of an agent's behavior. B) The goal an agent's results are measured against. This is used to determine whether an agent is effective.

Agent

A) Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. B) = architecture + program.
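
As a rough sketch of "architecture + program" (names and structure are illustrative assumptions, not from the lecture), the architecture repeatedly reads a percept from the sensors, hands it to the agent program, and passes the chosen action to the actuators:

```python
# Illustrative sketch only: a bare-bones architecture loop wiring sensors,
# an agent program, and actuators together. All names are assumptions.

def run_agent(program, read_sensors, drive_actuators, steps=10):
    for _ in range(steps):
        percept = read_sensors()      # architecture: perceive the environment
        action = program(percept)     # agent program: decide what to do
        drive_actuators(action)       # architecture: act on the environment
```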

Sequential

A) The current choice will affect future actions. B) The agent has to plan ahead.

Fully Observable

A) Everything an agent requires to choose its actions is available to it via its sensors. B) The environment is fully accessible.

Rational Agent

A) For each possible percept sequence, selects an action that is expected to maximize its performance measure, given the information provided by the percept sequence and the agent's built-in knowledge. B) Takes actions it believes will achieve its goals.
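
A hedged sketch of "maximizes expected performance," assuming a hypothetical stochastic model that returns (probability, resulting state) pairs; all names are illustrative:

```python
# Sketch: a rational agent picks the action with the highest *expected*
# performance measure under its model of the world. Names are assumptions.

def rational_choice(state, actions, stochastic_model, performance):
    def expected_performance(action):
        return sum(p * performance(s) for p, s in stochastic_model(state, action))
    return max(actions, key=expected_performance)
```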

Model-based Reflex Agent

A) Maintains some internal state that keeps track of the part of the world it can't see now. B) Needs a model (encodes knowledge about how the world works, including how the agent's actions affect the world). C) Model-based agents update their internal state as new percepts arrive.
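
A minimal sketch of this structure, assuming a hypothetical `update_state` function (the model) and a list of condition-action `rules`; none of these names come from the lecture:

```python
# Sketch of a model-based reflex agent: keeps an internal state, updated via a
# (hypothetical) model, and then applies condition-action rules to that state.

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = {}                   # internal picture of the unseen world
        self.last_action = None
        self.update_state = update_state  # model: how the world evolves and reacts
        self.rules = rules                # list of (condition, action) pairs

    def act(self, percept):
        # Fold the new percept and the last action into the internal state.
        self.state = self.update_state(self.state, self.last_action, percept)
        # Pick the first rule whose condition matches the updated state.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"
```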

Partially Observable

A) Parts of the environment are inaccessible. B) The agent must make informed guesses about the world.

Simple Reflex Agent

A) Selects an action on the basis of the current percept only, ignoring all past percepts. B) Simple but very limited intelligence; works only if the environment is fully observable. C) Prone to infinite loops, which can be avoided by randomizing actions.
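
A minimal sketch, using the classic two-square vacuum world as an assumed example environment (percept format and action names are illustrative):

```python
# Simple reflex agent for a two-square vacuum world: the action depends only on
# the current percept, never on the percept history.

def simple_reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
```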

Utility-based Agent

A) The agent uses a utility function to evaluate the desirability of states that could result from each possible action. B) A utility function maps a state onto a real number which describes the associated degree of happiness.
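
A minimal sketch, assuming a hypothetical `model` that predicts the resulting state and a `utility` function mapping states to real numbers:

```python
# Sketch of utility-based choice: score each predicted resulting state with a
# utility function and pick the action with the highest value.

def utility_based_choice(state, actions, model, utility):
    return max(actions, key=lambda a: utility(model(state, a)))

# Toy example: the agent is "happiest" when the resulting state equals 10.
toy_model = lambda s, a: s + a
toy_utility = lambda s: -abs(10 - s)
print(utility_based_choice(3, [1, 5, 7], toy_model, toy_utility))  # -> 7
```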

Goal-based Agent

A) The agent uses goal information to select between possible actions in the current state. B) Uses knowledge about a goal to guide its actions. C) A very flexible agent whose behavior can be changed simply by changing the goals.
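
A minimal sketch, assuming a hypothetical `model` that predicts outcomes and a `goal_test` that checks whether a state satisfies the goal:

```python
# Sketch of goal-based choice: simulate each action with a model of the world
# and keep an action whose predicted result satisfies the goal test.

def goal_based_choice(state, actions, model, goal_test):
    for action in actions:
        predicted = model(state, action)   # what the world would look like
        if goal_test(predicted):           # does it achieve the goal?
            return action
    return None                            # no single action reaches the goal

# Flexibility: changing behavior only requires swapping in a new goal_test.
```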

Dynamic

A) The environment can change while the agent is deliberating, so the agent should keep consulting the world when choosing an action. Alternatively, the agent can anticipate the change during deliberation or make decisions very quickly.

Stochastic (Non-deterministic)

A) There are aspects of the environment beyond the agent's control. B) Utility functions (performance measures) have to guess at changes in the world.

Single Agent

An agent is operating by itself in an environment.

Environment Types

Fully observable vs. Partially observable, Deterministic vs Stochastic, Episodic vs Sequential, Static vs Dynamic, Discrete vs Continuous, Single agent vs Multi-agent.

Perfection

Maximizes the actual outcome. This would require the agent to be omniscient, which is not feasible.

Rationality

Maximizes expected outcome.

Learning Agents

In a learning agent, the performance element is what was previously the whole agent: input comes from the sensors, and the output is an action.

PEAS

Performance measure, Environment, Actuators, Sensors
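
A classic textbook illustration (the automated taxi from Russell and Norvig, not from this lecture): performance measure = safe, fast, legal, comfortable trip; environment = roads, other traffic, pedestrians, customers; actuators = steering, accelerator, brake, horn, display; sensors = cameras, sonar, speedometer, GPS, odometer.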

Agent Program

Produces the agent function by running on the physical architecture.

Learning Element

Responsible for improving the performance element with experience.
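
A minimal sketch of how the two pieces fit together; the dictionary-based rule store and feedback signature are illustrative assumptions:

```python
# Sketch of a learning agent: the performance element chooses actions (as the
# whole agent did before), and the learning element improves it with experience.

class LearningAgent:
    def __init__(self):
        self.rules = {}                           # performance element's knowledge

    def performance_element(self, percept):
        return self.rules.get(percept, "NoOp")    # sensors in, action out

    def learning_element(self, percept, better_action):
        self.rules[percept] = better_action       # improve with experience
```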

Unknown

The agent does not know the rules of the environment.

Known

The agent knows the rules of the environment.

Deterministic

The change in the world state depends only on the current state and the agent's actions.

Episodic

The choice of current action is not dependent on previous actions.

Static

The environment doesn't change while the agent is deliberating over what to do.

Strategic

The environment is deterministic except for the actions of other agents.

Semidynamic

The environment itself does not change with the passage of time but the agent's performance score does.

Environment

The region an agent interacts with.

Multi-agent

There are multiple agents operating in the same environment. The agents need not be of the same type.

Autonomous

To be defined as _________, an agent's behavior must be determined by its own experience (with the ability to learn and adapt).

Agent Function

Maps percept histories to actions.
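
A minimal sketch of the distinction from the agent program: a table-driven program makes the percept-history-to-action mapping explicit. The table contents are illustrative assumptions:

```python
# Sketch: the agent *function* maps the whole percept history to an action; a
# table-driven agent program implements that mapping with a lookup table.

def make_table_driven_agent(table):
    percepts = []                          # percept history accumulated so far
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")
    return program

agent = make_table_driven_agent({(("A", "Dirty"),): "Suck"})
print(agent(("A", "Dirty")))               # -> Suck
```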

