Intelligent Agents (2)


model-based, utility-based agent

- A model-based, utility-based agent uses a utility function, which is an internalization of the performance measure.
- A rational utility-based agent chooses the action which maximizes the expected utility of the action outcomes.
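
A minimal Python sketch of that selection rule; `actions`, `outcomes`, and `utility` are assumed placeholder interfaces for the agent's model and utility function, not part of the original definition:

```python
# Sketch of expected-utility action selection (illustrative names).
def expected_utility(action, outcomes, utility):
    # EU(a) = sum over outcome states s of P(s | a) * U(s)
    return sum(p * utility(s) for p, s in outcomes(action))

def choose_action(actions, outcomes, utility):
    # A rational utility-based agent picks the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))
```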

info gathering

Performing actions in order to modify future percepts, so as to learn from what is perceived.

rational agent

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

stochastic task environment

A task environment is stochastic if the next state is not completely determined by the current state and previous actions executed by the agent.

percept

Agent's perceptual input at any given instant.

omniscient agent

An agent that knows the actual outcome of its actions and can act accordingly.

agent

Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

sequential task environment

A task environment is sequential if previous decisions affect current and future decisions.

percept sequence

- Complete history of the agent's perception.
- An agent's choice of action at any given instant can depend on the entire percept sequence, but not on anything it hasn't perceived.

learning agent

- Learning element: responsible for making improvements; it uses the critic's feedback on how the agent is doing to determine how the performance element should be modified to do better.
- Performance element: responsible for selecting external actions.
- Problem generator: responsible for suggesting actions that lead to new, informative experiences.
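
A structural sketch of how these components might fit together; the interfaces below are assumptions for illustration, not the textbook's pseudocode:

```python
# Structural sketch of a learning agent; each callable mirrors a component above.
class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # judges behavior against a performance standard
        self.problem_generator = problem_generator      # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)
        self.learning_element(feedback, self.performance_element)
        # Occasionally explore (problem generator); otherwise exploit (performance element).
        exploratory_action = self.problem_generator(percept)
        if exploratory_action is not None:
            return exploratory_action
        return self.performance_element(percept)
```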

simple reflex agent

- Selects actions based on the current percept only.
- Reflex behavior occurs even in complex environments: condition-action rules.
- Limited intelligence: works only if the correct decision can be made on the basis of the current percept alone (i.e., only if the environment is fully observable).
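
A minimal sketch of the condition-action loop; the toy vacuum-world rule table and `interpret_input` are illustrative placeholders:

```python
# Simple reflex agent: act on the current percept only via condition-action rules.
RULES = {
    "dirty": "suck",
    "clean_at_left": "move_right",
    "clean_at_right": "move_left",
}

def interpret_input(percept):
    """Abstract the raw percept into the state description used by the rules."""
    location, status = percept
    return "dirty" if status == "dirty" else f"clean_at_{location}"

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return RULES.get(state, "no_op")  # rule match: condition -> action

# e.g. simple_reflex_agent(("left", "dirty")) -> "suck"
```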

model-based reflex agent

- Uses a model of the environment to keep track of the part of the environment that cannot be observed now (i.e., maintains internal state).
- Maps the current state estimate and knowledge of the current goal to an action.
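
A sketch of the same loop with internal state; `update_state` stands in for the agent's model of how the world evolves and how its actions affect it (an assumed interface):

```python
# Model-based reflex agent: an internal state summarizes the unobserved parts
# of the world and is refreshed by the model before a rule is matched.
class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = None                 # internal estimate of the world state
        self.last_action = None
        self.update_state = update_state  # model: (state, last_action, percept) -> new state
        self.rules = rules                # maps a state estimate to an action

    def __call__(self, percept):
        self.state = self.update_state(self.state, self.last_action, percept)
        self.last_action = self.rules(self.state)
        return self.last_action
```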

semi-dynamic task environment

A task environment is semi-dynamic if the environment itself does not change with the passage of time, but the agent's performance score does.

deterministic task environment

A task environment is deterministic if the next state is completely determined by the current state and previous actions executed by the agent.

dynamic task environment

A task environment is dynamic if it is possible for a change in the environment to occur while the agent is deliberating.

episodic task environment

A task environment is episodic if the agent's experience is divided into atomic episodes, and the agent's decision in each episode is independent of decisions made in other episodes.

fully observable task environment

A task environment is fully observable if an agent's sensors can access the complete state of the environment at each point in time. Effectively fully observable if the sensors detect all aspects relevant (w.r.t. performance) to the choice of action.

nondeterministic task environment

A task environment is nondeterministic if actions are characterized by their possible outcomes, with no probabilities attached to them; this is usually associated with a performance measure that requires the agent to succeed for all possible outcomes of its actions.

partially observable task environment

A task environment is partially observable if an agent's sensors can access only part of the state of the environment at each point in time.

model-based, goal-based agent

In addition to the current state description, a model-based, goal-based agent needs goal information that describes situations that are desirable; this goal information is combined with the model to choose actions that achieve the goal.
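
A sketch of one-step lookahead with a goal test; `actions`, `result`, and `goal_test` are assumed to come from the model and the goal description (real goal-based agents typically search or plan over longer action sequences):

```python
# Goal-based action selection via one-step lookahead (illustrative interfaces).
def goal_based_action(state, actions, result, goal_test):
    for action in actions(state):
        if goal_test(result(state, action)):  # "what will happen if I do this action?"
            return action
    return None  # no single action reaches the goal; a fuller agent would search or plan
```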

uncertain task environment

Not fully observable or not deterministic.

task environment

Problems to which the rational agents are solutions. Includes (PEAS): Performance measure, Environment, Actuators, Sensors.
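
As a concrete illustration, the classic automated-taxi example can be written out as a PEAS description (abridged):

```python
# Illustrative PEAS description for an automated taxi.
automated_taxi_peas = {
    "performance_measure": ["safe", "fast", "legal", "comfortable trip"],
    "environment": ["roads", "other traffic", "pedestrians", "customers"],
    "actuators": ["steering", "accelerator", "brake", "signal", "horn"],
    "sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer"],
}
```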

known task environment

Refers not to the environment itself but to the agent's state of knowledge about how the environment behaves. The environment is known (to the agent) if the outcomes (or the probabilities of the outcomes) of all actions are given.

unknown task environment

Refers not to the environment itself but to the agent's state of knowledge about how the environment behaves. The environment is unknown (to the agent) if the outcomes (or the probabilities of the outcomes) of all actions are not given.

sensor

That which is used by an agent to perceive the world.

actuator

That which is used by the agent to act on the world.

architecture

The computing device with physical sensors and actuators that runs the AI program.

continuous (state of) task environment

The states of a task environment are continuous if there are infinitely many of them; the same distinction also applies to the way time, percepts, and actions are handled.

discrete (state of) task environment

The states of a task environment are discrete if there are finitely many of them; the same distinction also applies to the way time, percepts, and actions are handled.

