AI, Ch 2

To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks ______________.

autonomy. A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge.

DFS

- Starts at the root node (selecting some arbitrary node as the root in the case of a graph)
- Explores as far as possible along each branch before backtracking
- Can be implemented with a stack (or recursion) rather than a queue
- Preorder, inorder, and postorder traversals are types of DFS for trees
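To make the idea concrete, here is a minimal iterative DFS sketch in Python (an illustration, not part of the original card); the adjacency-list `graph` dict and node names are illustrative assumptions.

```python
# Minimal iterative DFS sketch using an explicit stack (LIFO).
# Assumes `graph` is an adjacency list: node -> list of neighbors.
def dfs(graph, root):
    visited, stack, order = set(), [root], []
    while stack:
        node = stack.pop()  # LIFO: go deep along a branch before backtracking
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # reversed() keeps a left-to-right visit order among neighbors
        for neighbor in reversed(graph.get(node, [])):
            if neighbor not in visited:
                stack.append(neighbor)
    return order

# Illustrative graph: dfs(...) returns ['A', 'B', 'D', 'C']
print(dfs({"A": ["B", "C"], "B": ["D"], "C": [], "D": []}, "A"))
```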

What is rational at any given time depends on four things:

1. The performance measure that is the criterion for success
2. The agent's prior knowledge of the environment
3. The actions the agent can take
4. The agent's percept sequence to date

learning element

1. Critic: gives feedback on how the agent is doing with respect to a fixed performance standard; the learning element uses this feedback to determine how the performance element should be modified to do better in the future.
2. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

the learning element, which is responsible for _______________________, and the performance element, which is responsible for _______________________

1. making improvements
2. selecting external actions

information gathering

Doing actions in order to modify future percepts is an important part of rationality; for example, looking both ways before crossing the street.

definition of a rational agent

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

performance measure

It evaluates any given sequence of environment states; if the sequence is desirable, then the agent did well. Note that it evaluates environment states, NOT agent states! When an agent is plunked down in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states.

Uniform Cost Search

Similar to Dijkstra's algorithm. From the starting state we visit the adjacent states and expand the least costly one; we then expand the next least costly state among all unvisited states adjacent to already-visited states, and in this way we try to reach the goal state. We keep a priority queue that always yields the least costly next state among all states adjacent to the visited states.
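A minimal uniform-cost-search sketch along these lines (an illustration, not the card's text); it assumes `graph` maps each state to (neighbor, step cost) pairs, and all names are hypothetical.

```python
import heapq

# Uniform-cost search sketch: the frontier is a priority queue ordered
# by path cost g(n), so the cheapest known state is expanded first.
# Assumes `graph` maps state -> list of (neighbor, step_cost) pairs.
def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]  # (path cost so far, state, path)
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)  # least costly first
        if state == goal:
            return cost, path
        if state in explored:
            continue
        explored.add(state)
        for neighbor, step_cost in graph.get(state, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return None  # no path to the goal
```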

The four basic kinds of agent programs

1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents

Known vs. unknown task environment

Strictly speaking, this distinction refers not to the environment itself but to the agent's (or designer's) state of knowledge about the "laws of physics" of the environment. It is quite possible for a known environment to be partially observable; for example, in solitaire card games the rules are known, yet the cards that have not been turned over remain unseen.

task environments

The "problems" to which rational agents are the "solutions." This includes: the performance measure, the environment, and the agent's actuators and sensors or PEAS (Performance, Environment, Actuators, Sensors)

agent function vs agent program

The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system.

Does an agent A (the taxi driver for example) have to treat an object B (another vehicle) as an agent, or can it be treated merely as an object behaving according to the laws of physics, analogous to waves at the beach or leaves blowing in the wind?

The key distinction is whether B's behavior is best described as maximizing a performance measure whose value depends on agent A's behavior. Avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multiagent environment.

agent = __________ + program

architecture (the program will run on some sort of computing device with physical sensors and actuators)

Three ways to represent how AI components work

atomic, factored, and structured

experiments are often carried out not for a single environment but for many environments drawn from an _______________

environment class.

After sufficient experience of its environment, the behavior of a rational agent can become effectively _________________. Hence, the incorporation of learning allows one to design a single rational agent that will succeed in a vast variety of environments.

independent of its prior knowledge

Nondeterministic task environment

is one in which actions are characterized by their possible outcomes, but no probabilities are attached to them. In comparison, "stochastic" generally implies that uncertainty about outcomes is quantified in terms of probabilities.

percept sequence

is the complete history of everything the agent has ever perceived

the number of atoms in the observable universe is

less than 10^80

The hardest task environment is

partially observable, multiagent, stochastic, sequential, dynamic, continuous, and unknown

An agent is anything that can be viewed as perceiving its environment through ___________ and acting upon that environment through ___________.

sensors and actuators

percept

the agent's perceptual inputs at any given instant

an agent's utility function is an internalization of ...

the performance measure. If the internal utility function and the external performance measure are in agreement, then an agent that chooses actions to maximize its utility will be rational according to the external performance measure.

expected utility

the utility the agent expects on average, given the probability and utility of each outcome
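A small sketch of the computation (an illustration, not from the card), assuming each action maps to a list of (probability, utility) outcome pairs; the actions and numbers are hypothetical.

```python
# Expected utility: the probability-weighted average of outcome utilities.
def expected_utility(outcomes):  # outcomes: list of (probability, utility)
    return sum(p * u for p, u in outcomes)

# A utility-based agent then picks the action with the highest expected utility.
def best_action(actions):  # actions: dict mapping action -> outcomes
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical choice: "risky" wins with EU 6 over "safe" with EU 5.
actions = {"safe": [(1.0, 5)], "risky": [(0.5, 12), (0.5, 0)]}
print(best_action(actions))  # -> "risky"
```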

utility-based agents

They go one step beyond goal-based agents: they compare different world states to see which would make them the "happiest." A utility-based agent has to model and keep track of its environment. These agents are useful when there are conflicting goals, and when there are several goals the agent can aim for, none of which can be achieved with certainty.

goal-based agent

This agent needs goal information that describes desirable states, and it involves consideration of the future, but it is limited because it makes only a binary distinction between happy and unhappy states. Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.

We say an environment is _________________ if it is not fully observable or not deterministic.

uncertain

As a general rule, it is better to design performance measures according to _______________.

what one actually wants in the environment, rather than according to how one thinks the agent should behave.

A rational agent is one that does the right thing—conceptually speaking, every entry in the table for the agent function is ...

filled out correctly

BFS

- Starts at the tree root and explores all of the neighbor nodes at the present depth before moving on to nodes at the next depth
- Level-order traversal is an example of BFS
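A minimal BFS sketch for comparison (illustrative, not from the card), using the same assumed adjacency-list representation as the DFS sketch above.

```python
from collections import deque

# Minimal BFS sketch using a FIFO queue: visits nodes level by level.
# Assumes `graph` is an adjacency list: node -> list of neighbors.
def bfs(graph, root):
    visited, queue, order = {root}, deque([root]), []
    while queue:
        node = queue.popleft()  # FIFO: finish the current level before descending
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

# Illustrative graph: bfs(...) returns ['A', 'B', 'C', 'D']
print(bfs({"A": ["B", "C"], "B": ["D"], "C": [], "D": []}, "A"))
```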

Fully observable vs. partially observable task environment

If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure. An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data.

Static vs. dynamic task environment

If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static.

Deterministic vs. stochastic task environment

If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic.

Episodic vs. sequential task environment

In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes. In a sequential environment, by contrast, the current decision could affect all future decisions.

Discrete vs. continuous task environment

The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, the chess environment has a finite number of distinct states (excluding the clock), and chess also has a discrete set of percepts and actions.

__________________ are usually associated with performance measures that require the agent to succeed for all possible outcomes of its actions

Nondeterministic environments

simple reflex agents

Only react to the current percept and ignore the rest of the percept history. These agents use condition-action rules ("if" this happens, "do" that). If the environment is not fully observable, they can get stuck in infinite loops.
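A minimal sketch of condition-action rules, using the chapter's two-square vacuum world; the percept format (a location/status pair) is an assumption for illustration.

```python
# Reflex vacuum agent for the two-square vacuum world: the decision uses
# only the current percept, a (location, status) pair, never the history.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":   # condition-action rule: if dirty, then suck
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> "Suck"
```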

model-based agent

Uses a model of the world to make decisions when the environment is not fully observable. The agent maintains an internal state that depends on the percept history and reflects some of the unobserved aspects of the current state.
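A hedged sketch of that control loop; `update_state` and `rules` are hypothetical callables standing in for the designer's world model and rule set.

```python
# Model-based reflex agent sketch: internal state is updated from the last
# action and the new percept, then condition-action rules fire on the
# updated state. `update_state` and `rules` are placeholders here.
def make_model_based_agent(update_state, rules, initial_state):
    state, last_action = initial_state, None
    def agent(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)  # fold percept into model
        last_action = rules(state)  # choose an action for the modeled state
        return last_action
    return agent
```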

Atomic representation

each state of the world is indivisible—it has no internal structure.

An omniscient agent

knows the actual outcome of its actions and can act accordingly

An agent's behavior is described by the agent function that ...

maps any given percept sequence to an action
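One way to make this concrete (an illustration, not the card's text) is a table-driven agent, in which the agent function is literally a lookup table keyed by the whole percept sequence; the single table entry below is hypothetical.

```python
# Table-driven agent sketch: the agent function as an explicit table from
# percept sequences to actions (which is why such tables are astronomically
# large for real problems).
def make_table_driven_agent(table):
    percepts = []  # the percept sequence to date
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # look up the entire history
    return agent

agent = make_table_driven_agent({(("A", "Dirty"),): "Suck"})  # hypothetical entry
print(agent(("A", "Dirty")))  # -> "Suck"
```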

