Chapter 2 Test 1 Vocabulary and Concepts


Agent Structure Equation

Agent = architecture + program

PEAS: Single-agent Environment

"One-player" game agent, like for crossword. Does not interact with other agents. Only interacts with objects.

Agent Program

- A concrete implementation of the agent function running within some physical system.
- This textbook is mainly concerned with teaching you to develop these.

Expressivity

- A more expressive representation can capture, at least as concisely, everything a less expressive one can capture, plus some more. Often, the more expressive language is much more concise.
- On the other hand, reasoning and learning become more complex as the expressive power of the representation increases.
- To gain the benefits of expressive representations while avoiding their drawbacks, intelligent systems for the real world may need to operate at all points along the axis [of increasing complexity and expressivity] simultaneously.

Pros of Utility-Based Agents

- A utility-based agent has many advantages in terms of flexibility and learning.
- Furthermore, in two kinds of cases, goals are inadequate but a utility-based agent can still make rational decisions:
  1) When there are conflicting goals, only some of which can be achieved (for example, speed and safety), the utility function specifies the appropriate tradeoff.
  2) When there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed against the importance of the goals.

PEAS: Nondeterministic Environment

- The agent's actions are characterized by their possible outcomes, but NO probabilities are attached to them (contrast with a stochastic environment).
- Nondeterministic environment descriptions are usually associated with performance measures that require the agent to succeed for all possible outcomes of its actions.

Summary of the Structure of Agents

- Agents are composed of a variety of components.
- Those components can be represented in many ways within the agent program, so there appears to be great variety among learning methods.
- The unifying theme among all learning methods is: learning in intelligent agents can be summarized as a process of modification of each component of the agent to bring the components into closer agreement with the available feedback information, thereby improving the overall performance of the agent.

Examples That Use Atomic Representation

- Algorithms underlying search and game-playing
- Hidden Markov models
- Markov decision processes

PEAS: Sequential Task Environment

- The agent's current decision could affect all future decisions.
- Short-term actions can have long-term consequences.
- Examples include the self-driving taxi and chess.
- Sequential task environments are complex because the agent must think ahead to solve the problem at hand.

PEAS: Episodic Task Environment

- The agent's experience is divided into atomic episodes.
- In each episode the agent receives a percept and then performs a single action.
- The next episode does not depend on the actions taken in previous episodes.
- Examples of agents operating in an episodic environment include a "quality assurance" robot that scans for defective parts as they pass through an assembly line.
- Episodic task environments are simpler because the agent does not need to think ahead to solve the problem at hand.

PEAS: Partially Observable Environment

- An agent's sensors do NOT give it access to the complete state of the environment at each point in time.
- May be due to noisy or inaccurate sensors, OR because parts of the environment state are missing from the sensor data.

PEAS: Fully Observable Environment

- An agent's sensors give it access to the complete state of the environment at each point in time.
- Sensors detect all aspects relevant to the choice of action (relevancy depends on the performance measure).
- The agent does not need to maintain any internal state to keep track of the world.
- An environment that is NOT fully observable is said to be "uncertain".

Utility Function

- An internalization of the performance measure that allows a comparison of different world states according to exactly how useful they would be for the agent (ie, according to their utility).
- If the internal utility function and the external performance measure are in agreement, then an agent that chooses actions to maximize its utility will be rational according to the external performance measure.
- An agent that possesses an explicit utility function can make rational decisions with a general-purpose [search] algorithm that does not depend on the specific utility function being maximized. In this way, the "global" definition of rationality (designating as rational those agent functions that have the highest performance) is turned into a "local" constraint on rational-agent designs that can be expressed in a simple program.

Advantages of Learning

- Building an agent capable of learning and then letting it teach itself (ie, letting it program itself) is less work than actually programming one yourself.
- Learning allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow.

PEAS: Agent Design for Multi-agent Environments

- Communication emerges as a rational behavior for cooperative agents.
- Randomized behavior is rational for competitive agents because it avoids the pitfalls of predictability.

Cons of Simple Reflex Agents

- Creates an enormous lookup table (space, memory).
- Only works in fully observable environments (ie, only works if the correct decision can be made on the basis of the current percept alone).
- Partially observable environments cause such agents to enter into infinite loops, especially if the environment is deterministic.
- Infinite loops can be escaped by randomizing the agent's actions.
- A stochastic simple reflex agent may outperform a deterministic one, but in single-agent applications randomization is usually NOT rational, so you are more likely to get better performance from a more sophisticated type of deterministic agent (ie, one that is not a simple reflex agent).

Model-Based Reflex Agents

- Handles partially observable environments better than simple reflex agents do, by keeping track of the part of the world it can't see now.
- To keep track, the agent maintains an internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
- To update the internal state over time, two things must be encoded in the agent program to tell it how the world works: 1) info on how the world evolves independently of the agent, and 2) info on how the agent's own actions affect the world.
- The knowledge used to update the internal state is called the "model of the world" and describes how the world works. This set of "rules" is called the "model".
- Regardless of the kind of model used, it is seldom possible for the agent to determine the current state of a partially observable environment exactly (rather, the current state is a "best guess").
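
A minimal Python sketch of this idea, with hypothetical names (`transition_model`, `sensor_model`, and `rules` are illustrative stand-ins, not from the text): the internal state is updated from the previous state, the last action, and the new percept, and then a condition-action rule is matched against the updated (best-guess) state.

```python
# Hypothetical sketch of a model-based reflex agent; all names are illustrative.

class ModelBasedReflexAgent:
    def __init__(self, transition_model, sensor_model, rules):
        self.state = {}                 # the agent's best guess at the current world state
        self.last_action = None
        self.transition_model = transition_model  # how the world evolves / what my actions do
        self.sensor_model = sensor_model          # how percepts reflect the world state
        self.rules = rules                        # list of (condition, action) pairs

    def __call__(self, percept):
        # 1) Predict how the world has changed since the last step.
        self.state = self.transition_model(self.state, self.last_action)
        # 2) Fold in what the new percept says about the world.
        self.state.update(self.sensor_model(percept))
        # 3) Pick the first rule whose condition matches the estimated state.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"
```

The transition model plays the role of the two encoded kinds of knowledge above ("how the world evolves" and "what my actions do"); the sensor model covers how percepts relate to the state.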

PEAS: Semidynamic Environment

- The environment itself does not change with the passage of time, but the agent's performance score does.
- Compare with a dynamic environment.
- Examples include chess, when played with a clock.

Utility-Based Agents

- Keeping track of the world state and providing an explicit goal may enable the agent to solve the problem, but it may not be the optimal solution.
- Example: the self-driving taxi. There are many routes that will take you to your destination, but some are faster, safer, etc.
- Adding a utility function to the world-state record-keeping and goal information allows a comparison of different world states according to exactly how useful they are to the agent.
- Utility-based agents are good for handling the uncertainty inherent in stochastic or partially observable environments.

How the Learning Element Makes Changes to the Knowledge Components Shown in the Agent Schematic Diagrams

- In the simplest cases, the learning element uses info from the percept sequence itself:
  1) Observation of pairs of successive states of the environment can allow the agent to learn "how the world evolves."
  2) Observation of the results of its actions can allow the agent to learn "what my actions do."
- Note that the above learning tasks are much more difficult if the environment is only partially observable.

PEAS: Known Environment vs. Unknown Environment

- Refers to the agent's or designer's state of knowledge about the "laws of physics" of the environment, rather than the environment's state per se.
- Known: the outcomes (or outcome probabilities, if the environment is stochastic) for all actions are given.
- Unknown: the agent will have to learn how the environment works in order to make good decisions.

Simple Reflex Agent

- Selects actions on the basis of the current percept, ignoring the rest of the percept history.
- Example: the vacuum agent whose agent function is tabulated in Figure 2.3 is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.
- Decision-making does not involve consideration of the future.
- Info about future actions and/or states is not explicitly represented in simple reflex agents because built-in condition-action rules map directly from percepts to actions. (Compare with goal-based, model-based agents.)
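
The vacuum agent mentioned above can be written as a few condition-action rules. A minimal sketch (the locations "A"/"B" and action names follow the book's vacuum-world example; the function name is illustrative):

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: acts on the current percept only, no history."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left
```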

Architecture

- The computing device that the agent program will run on (could be anything from a robot with legs to a plain PC that runs software).
- This device contains the physical sensors and actuators.
- The architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated.
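
A minimal sketch of the Agent = architecture + program split (the sensor and actuator stubs are hypothetical): the architecture's job is just the sense-decide-act loop around whatever program it is given.

```python
def run(agent_program, read_sensors, drive_actuators, steps=3):
    """The 'architecture': gather percepts, run the program, execute its choices."""
    for _ in range(steps):
        percept = read_sensors()          # architecture exposes sensor data as a percept
        action = agent_program(percept)   # the agent program chooses an action
        drive_actuators(action)           # architecture passes the choice to the actuators

# Stub "hardware" for illustration only.
run(agent_program=lambda p: "Suck" if p[1] == "Dirty" else "Right",
    read_sensors=lambda: ("A", "Dirty"),
    drive_actuators=lambda a: print("executing:", a))
```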

PEAS: Known / Unknown Environments vs Observable / Partially Observable Environments

- The distinction between known and unknown environments is NOT the same as the one between fully and partially observable environments.
- It is quite possible for a known environment to be partially observable: in solitaire card games, for example, I know the rules but am still unable to see the cards that have not yet been turned over.
- Conversely, an unknown environment can be fully observable: in a new video game, the screen may show the entire game state, but I still don't know what the buttons do until I try them.

Expected Utility

- The expected utility of the action outcomes is the utility the agent expects to derive, on average, given the probabilities and utilities of each outcome.
- A rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes.
- Any rational agent must behave as if it possesses a utility function whose expected value it tries to maximize, even if it doesn't explicitly have one.
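
A toy worked example (the outcome model, utilities, and route names are made up for illustration): expected utility is the probability-weighted sum of outcome utilities, and the agent picks the action with the largest value.

```python
def expected_utility(action, outcomes, utility):
    """EU(action) = sum over outcomes of P(outcome | action) * U(outcome)."""
    return sum(p * utility[o] for o, p in outcomes[action].items())

# Hypothetical taxi routes: the highway is usually fast but sometimes jammed.
outcomes = {"highway":  {"fast": 0.7, "jam": 0.3},
            "backroad": {"fast": 0.4, "jam": 0.6}}
utility = {"fast": 10, "jam": 2}

best = max(outcomes, key=lambda a: expected_utility(a, outcomes, utility))
print(best)  # -> highway  (EU 7.6 vs 5.2)
```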

How Agent Program Components Work

- There are various ways that the components can represent the environment that the agent inhabits.
- These representations (atomic, factored, and structured) vary in their amount of complexity and expressivity.
- The environment described in these representations consists of states and the transitions between states.

PEAS: Discrete vs Continuous Environment States

- This distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.
- Discrete example: chess, which has a finite number of distinct states (excluding the clock), and a discrete set of percepts and actions.
- Continuous example: the self-driving taxi, which is a continuous-state and continuous-time problem; the speed and location of the taxi and of the other vehicles sweep through a range of continuous values and do so smoothly over time. Taxi-driving actions are also continuous (steering angles, etc.).

Pros of Simple Reflex Agents

- Very simple design.
- Short block of code to implement the agent.

Goal-Based Agents

- When knowing about the current world state alone isn't enough to determine what the "right thing" is for an agent to do, you can add in goal information.
- The agent uses the same model, along with the current internal-state info and the goal information, to choose actions that will achieve the goal.
- The agent DOES take the future into consideration during decision-making, since the knowledge that supports its decision-making (ie, its model, which is its decision-making "rule set") is explicitly represented and can be modified. This makes goal-based, model-based agents very flexible.
- Changing the goal modifies the decision-making rule set, which in turn allows the agent to change its behavior to achieve its goals on the fly.
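
A minimal sketch (the function names and the toy number-line world are made up): the agent uses its model to predict the result of each available action and returns one whose predicted state passes the goal test.

```python
def goal_based_agent(state, actions, result, goal_test):
    """result(state, action) is the model; goal_test(state) is the goal information."""
    for action in actions:
        if goal_test(result(state, action)):
            return action
    return None  # no single action reaches the goal; search/planning would be needed

# Toy world: move along a number line to reach position 3.
result = lambda s, a: s + (1 if a == "Right" else -1)
print(goal_based_agent(2, ["Left", "Right"], result, lambda s: s == 3))  # -> Right
```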

PEAS: Dynamic Environment

- When the environment changes state while the agent is deliberating its next move.
- Dynamic environments are continuously asking the agent what it wants to do; if it hasn't decided yet, that counts as deciding to do nothing.
- Not easy to deal with, because the agent must keep looking at the world while it is deciding on an action and must worry about the passage of time.
- Examples include the self-driving taxi.

PEAS: Static Environment

- When the environment does not change state while the agent is deliberating its next move.
- Easy to deal with, since the agent need not keep re-assessing the environment and doesn't need to worry about the passing of time.
- Examples include crossword puzzles.

PEAS: Stochastic Environment

- When the next state of the environment is NOT completely determined by the current state and the action executed by the agent.
- Uncertainties about outcomes are quantified in terms of probabilities.
- Partially observable environments appear to be stochastic, and are treated as such, since you can't realistically keep track of all the possible unobserved aspects.

PEAS: Deterministic Environment

- When the next state of the environment is completely determined by the current state and the action executed by the agent.
- An environment that is NOT deterministic is said to be "uncertain".

Examples That Use Factored Representation

- Constraint satisfaction algorithms
- Propositional logic
- Planning
- Bayesian networks
- Machine learning algorithms

Examples That Use Structured Representation

- Relational databases
- First-order logic
- First-order probability models
- Knowledge-based learning
- Much of natural language understanding

How a Utility-Based Agent Learns Utility Info From the External Performance Standard

- The performance standard distinguishes part of the incoming percept as a reward (or penalty) that provides direct feedback on the quality of the agent's behavior.

PEAS: Double-agent Environment

-"Two-player" game agent, like for chess. - Interacts with other agents. - Examples are multiplayer games, self-driving taxis (interacting with other taxis), etc. - Can be competitive (one agent maximizes its own performance at the expense of the other), cooperative (one agent maximizes its performance while also maximizing the other agent's performance — or not getting in the way of its performance maximization), or a combination of the two (self-driving taxi is cooperative with other taxis to maximize performance of safety, but only one car can share a given space on the road at any given time, so in that regard is competitive).

Four Conceptual Components of a Learning Agent

1) The learning element, which is responsible for making improvements; it formulates (and modifies) the behavioral rules based on info from the critic.
2) The performance element, which is responsible for selecting external actions. (The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions.)
3) The critic, which provides the learning element with feedback on how the agent is doing and determines how the performance element should be modified to do better in the future.
4) The problem generator, which is responsible for suggesting exploratory actions that will lead to new and informative experiences; the idea being that if the performance element takes an action that is suboptimal in the short run, it may lead to the discovery of better actions in the long run.
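
A structural sketch only (the class, the reward logic, and every method name are hypothetical), showing one way the four components might hang together in code:

```python
import random

class LearningAgent:
    """Skeleton of the four components; the details are placeholders."""

    def __init__(self, rules, performance_standard):
        self.rules = rules                            # knowledge used by the performance element
        self.performance_standard = performance_standard  # fixed external standard

    def performance_element(self, percept):
        # Selects external actions, as the "whole agent" did before learning was added.
        return self.rules.get(percept, self.problem_generator(percept))

    def critic(self, percept, action):
        # Judges behavior against the external performance standard; returns feedback.
        return self.performance_standard(percept, action)

    def learning_element(self, percept, action, feedback):
        # Uses the critic's feedback to modify the rules the performance element uses.
        if feedback < 0:
            self.rules[percept] = self.problem_generator(percept)

    def problem_generator(self, percept):
        # Suggests an exploratory action that may pay off in the long run.
        return random.choice(["Left", "Right", "Suck"])

    def __call__(self, percept):
        action = self.performance_element(percept)
        self.learning_element(percept, action, self.critic(percept, action))
        return action
```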

Four Factors of Rationality: What is rational at any given time depends on:

1) The performance measure that defines the criterion for success.
2) The agent's prior knowledge of the environment.
3) The actions that the agent can perform.
4) The agent's percept sequence to date.

Environment Class

A set of simulated environments appropriate to the purpose of an agent, in which the designers can observe agent behavior over time and then evaluate agent success according to a given performance measure.

Agent Function

An abstract mathematical way to describe an agent's behavior: it maps any given percept sequence to an action.
- In table form, this is a condition-action table.
- In equation (ie, rule) form, or as if-then clauses in code, this is a set of condition-action rules.
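
In table form this is the table-driven-agent idea: a lookup from the percept sequence seen so far to an action. A minimal sketch (the table entries reuse the vacuum-world example; everything else is illustrative):

```python
percepts = []  # the percept sequence logged so far

# Partial condition-action table, keyed by the whole percept sequence.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts))  # None if the sequence isn't in the table

print(table_driven_agent(("A", "Clean")))  # -> Right
print(table_driven_agent(("B", "Dirty")))  # -> Suck
```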

Autonomy

An agent has this when it learns from its percepts instead of relying on prior knowledge from its designer.
- Autonomous agents can compensate for incorrect or partial prior knowledge.
- Autonomous agents can, with enough experience, become independent of their prior knowledge.
- Autonomy results in an agent that will be able to generalize to many different environments.

Percept

An agent's perceptual input at any given instant. This is an ELEMENT in a set.

Agent (AI)

An entity that perceives its environment through sensors and acts on its environment via actuators.

Information gathering

Doing actions in order to modify future percepts.

Definition of a Rational Agent

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Goal

Information that describes situations that are desirable in a "binary" fashion (ie, "good situation" or a "bad situation").

Performance Measure

Rationality is doing the "right thing", but "right" is defined by performance measures.
- Performance measures capture the notion of the degree of desirability of a sequence of actions by evaluating any given sequence of environment states and assigning a score to it, so an agent can more easily distinguish between more and less desirable ways of solving the problem.
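
For instance, the book's vacuum-world performance measure awards one point per clean square per time step. A minimal sketch of scoring a sequence of environment states this way (the state representation is made up for illustration):

```python
def performance(state_sequence):
    """One point per clean square per time step; scores states, not agent actions."""
    return sum(state.count("Clean") for state in state_sequence)

# Environment history: the states of squares A and B at each time step.
history = [("Dirty", "Dirty"), ("Clean", "Dirty"), ("Clean", "Clean")]
print(performance(history))  # -> 3
```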

Task Environments

The "problems" to which agents are the "solutions". - Also called the "PEAS description" (Performance, Environment, Actuators, Sensors). - Must be specified as completely as possible for appropriate agent program design.

By specifying _____________________, we can describe all there is to say about an agent.

The agent's choice of action for every possible percept sequence.

Percept Sequence

The complete history of everything the agent has ever perceived. This is an ordered SEQUENCE of logged percepts. This is what the agent has ALREADY seen; what it has not yet seen is NOT in this sequence.

An agent's choice of action at any given instant can depend on ____________________, but not on ____________________.

The entire percept sequence observed to date. / Anything it hasn't seen yet.

PEAS: Environment and Its Effect on the Design Problem

The more restricted the environment, the easier the design problem.

Utility

The quality of being useful. In this context, how useful a given state or situation is to the agent.

Performance measures should be designed according to _______________ and not according to ________________.

What we want in the environment. / How we want the agent to behave. In other words: design your penalties/rewards based on what you want the end result in the environment to be (ie, a clean floor), NOT on what you want to see the agent do (ie, suck up dirt).

PEAS: Unobservable Environment

When an agent has no sensors at all.

Search and Planning

When more than one action is required for an agent to reach a goal, it must engage in search and planning to find action sequences that achieve the agent's goals.

Four Basic Kinds of Agent Programs

- Simple reflex agents
- Model-based reflex agents
- Goal-based agents
- Utility-based agents

