AI mid-term I


Agent

architecture + program

What is the aim of AI?

find a way to implement the rational agent function concisely

Search strategies that know whether one non-goal state is expected to be better than another are called

informed search or heuristic search

General uninformed search strategies

1. Breadth-first search 2. Uniform-cost search 3. Depth-first search 4. Depth-limited search 5. Iterative deepening search

Properties of Task Environments

1. Fully observable (vs. partially observable) 2. Deterministic (vs. stochastic) 3. Episodic (vs. sequential) 4. Static (vs. dynamic) 5. Discrete (vs. continuous) 6. Single agent (vs. multiagent)

Discrete (vs. continuous):

A limited number of distinct, clearly defined percepts and actions

What is a well-defined problem?

A problem can be defined formally by five components: 1. Initial state 2. Actions 3. Transition model: description of what each action does (successor function) 4. Goal test 5. Path cost
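As a hedged illustration, the five components can be sketched as a Python class for a hypothetical two-cell vacuum world (all names here are illustrative, not from any real library):

```python
# Sketch of the five-component problem formulation, using a
# hypothetical two-cell vacuum world.  States are
# (location, left_cell_dirty, right_cell_dirty) tuples.
class VacuumProblem:
    def initial_state(self):
        return ("L", True, True)  # start at left cell, both cells dirty

    def actions(self, state):
        return ["Left", "Right", "Suck"]

    def result(self, state, action):
        """Transition model: the state reached by doing `action` in `state`."""
        loc, dirty_l, dirty_r = state
        if action == "Left":
            return ("L", dirty_l, dirty_r)
        if action == "Right":
            return ("R", dirty_l, dirty_r)
        # "Suck" cleans whichever cell the agent currently occupies
        return (loc,
                False if loc == "L" else dirty_l,
                False if loc == "R" else dirty_r)

    def goal_test(self, state):
        return not state[1] and not state[2]  # goal: no dirt anywhere

    def step_cost(self, state, action):
        return 1  # uniform path cost: every action costs 1
```

Here `result` is the transition model, `goal_test` the goal test, and the path cost accumulates via `step_cost`.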

Agent Program vs. Agent Function

Agent program: takes the current percept as input (nothing more is available from the environment). Agent function: takes the entire percept history -- to implement it, the agent program must remember the percepts.

Episodic (vs. sequential)

The agent's experience is divided into atomic "episodes"; the choice of action in each episode depends only on the episode itself.

what is an Agent?

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

what is autonomy in the AI sense?

An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt)

Single agent (vs. multiagent)

An agent operating by itself in an environment. Multiagent environments may be competitive or cooperative.

what is Rationality?

An agent should "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.

Fully observable (vs. partially observable)

An agent's sensors give it access to the complete state of the environment at each point in time.

Table-Driven Agent

Designer needs to construct a table that contains the appropriate action for every possible percept sequence

what is a Rational Agent?

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Explored set

Goal of explored set: contains already-explored states, to prevent repeated exploration

Frontier

Goal of frontier: contains the set of discovered nodes yet to be expanded (explored)

Simple Reflex Agent's issues

Infinite loops are often unavoidable for a simple reflex agent operating in a partially observable environment (e.g., no location sensor). Randomization can help: a randomized simple reflex agent might outperform a deterministic one. A better way: keep track of the part of the world it can't see now, i.e., maintain internal state.

what is PEAS

Performance measure, Environment, Actuators, Sensors. Specifying the task environment is always the first step in designing an agent.

Problem Solving Agent

Problem-solving agent: a type of goal-based agent. It decides what to do by finding sequences of actions that lead to desirable states. Goal formulation: based on the current situation and the agent's performance measure. Problem formulation: deciding what actions and states to consider, given a goal. The process of looking for such a sequence of actions is called search.

Problem Abstraction

The real world is complex and full of detail. Irrelevant details should be removed from the state space and actions; this process is called abstraction.

Five Basic Agent Types

Simple reflex agents Model-based reflex agents Goal-based agents Utility-based agents Learning agents

CHILD NODE Function

The CHILD-NODE function takes a parent node and an action and returns the resulting child node.

function CHILD-NODE(problem, parent, action) returns a node
  return a new node with
    STATE = problem.RESULT(parent.STATE, action)
    PARENT = parent
    ACTION = action
    PATH-COST = parent.PATH-COST + problem.STEP-COST(parent.STATE, action)
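The pseudocode translates almost line for line into Python; the problem interface (result, step_cost) follows the book's, sketched here with illustrative lowercase names:

```python
from collections import namedtuple

# A node bundles a state with the bookkeeping needed to reconstruct paths.
Node = namedtuple("Node", ["state", "parent", "action", "path_cost"])

def child_node(problem, parent, action):
    """Return the child node reached from `parent` by applying `action`."""
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action)
    return Node(state=state, parent=parent, action=action, path_cost=cost)
```

Keeping the parent link in each node is what lets SOLUTION later walk back from a goal node to the root to recover the action sequence.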

agent program

The agent program runs on the physical architecture to produce the agent function f

Static (vs. dynamic):

The environment is unchanged while an agent is deliberating Semi-dynamic if the environment itself doesn't change with time but the agent's performance score does

The job of AI?

The job of AI is to design the agent program that implements the agent function mapping percepts to actions

A perfectly rational poker-playing agent never loses.

False. Rationality maximizes expected performance given the percepts so far; with unlucky cards, even a perfectly rational poker player can lose.

It is possible for a given agent to be perfectly rational in two distinct task environments.

True

Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a deterministic task environment in which this agent is rational.

True

Uninformed Search Strategies

Uninformed search (blind search) strategies use only the information available in the problem definition

Can there be more than one agent program that implements a given agent function?

YES

Given a fixed machine architecture, does each agent program implement exactly one agent function?

YES, it implements exactly one agent function.

The environment type largely determines the ___

agent design

Time and space complexity are measured in terms of b, d, m

b: maximum branching factor of the search tree d: depth of the least-cost solution m: maximum depth of the state space (may be ∞)
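To see why these parameters matter: a complete search tree with branching factor b expanded down to depth d contains 1 + b + b^2 + ... + b^d nodes, which grows exponentially in d (a quick sanity-check calculation, not tied to any particular algorithm):

```python
# Number of nodes in a complete search tree of branching factor b
# expanded down to depth d: 1 + b + b^2 + ... + b^d.
def tree_nodes(b, d):
    return sum(b ** i for i in range(d + 1))

print(tree_nodes(10, 3))  # 1111
print(tree_nodes(10, 6))  # 1111111 -- exponential blow-up as d grows
```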

how do we decide whether an agent is successful?

By a performance measure: an objective criterion for the success of an agent's behavior

Search Strategies terms: optimality

does it always find a least-cost solution?

Search Strategies terms: Completeness

does it always find a solution if one exists (i.e., always reach a goal state)?

An agent that senses only partial information about the state cannot be perfectly rational.

False

Breadth-First Search (BFS) on a graph pseudo code

function BREADTH-FIRST-SEARCH(problem) returns a solution, or failure
  node <- a node with STATE = problem.INITIAL-STATE, PATH-COST = 0
  if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
  frontier <- a FIFO queue with node as the only element
  explored <- an empty set
  loop do
    if EMPTY?(frontier) then return failure
    node <- POP(frontier)  /* chooses the shallowest node in frontier */
    add node.STATE to explored
    for each action in problem.ACTIONS(node.STATE) do
      child <- CHILD-NODE(problem, node, action)
      if child.STATE is not in explored or frontier then
        if problem.GOAL-TEST(child.STATE) then return SOLUTION(child)
        frontier <- INSERT(child, frontier)
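A runnable Python sketch of this pseudocode, assuming the illustrative problem interface (initial_state, actions, result, goal_test) used elsewhere in these notes; nodes are (state, parent) pairs:

```python
from collections import deque

def breadth_first_search(problem):
    """BFS on a graph: FIFO frontier plus an explored set."""
    node = (problem.initial_state(), None)      # (state, parent) pairs
    if problem.goal_test(node[0]):
        return solution(node)
    frontier = deque([node])                    # FIFO queue
    frontier_states = {node[0]}                 # fast membership test
    explored = set()
    while frontier:                             # loop until frontier empty
        state, parent = frontier.popleft()      # shallowest node first
        frontier_states.discard(state)
        explored.add(state)
        for action in problem.actions(state):
            child = (problem.result(state, action), (state, parent))
            if child[0] not in explored and child[0] not in frontier_states:
                if problem.goal_test(child[0]):  # test at generation time
                    return solution(child)
                frontier.append(child)
                frontier_states.add(child[0])
    return None  # failure

def solution(node):
    """Walk parent links back to the root, returning the state path."""
    path = []
    while node is not None:
        path.append(node[0])
        node = node[1]
    return list(reversed(path))
```

Note that, as in the pseudocode, BFS applies the goal test when a node is generated rather than when it is expanded, which saves one full frontier layer of work.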

General Graph Search Algorithm (with memory); drawback: high memory complexity

function GRAPH-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  initialize the explored set to be empty
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    add the node to the explored set
    expand the chosen node, adding the resulting nodes to the frontier
      only if not in the frontier or explored set

Simple Problem-Solving Agents

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  persistent: seq, an action sequence, initially empty
              state, some description of the current world state
              goal, a goal, initially null
              problem, a problem formulation
  state <- UPDATE-STATE(state, percept)
  if seq is empty then
    goal <- FORMULATE-GOAL(state)
    problem <- FORMULATE-PROBLEM(state, goal)
    seq <- SEARCH(problem)
  action <- FIRST(seq)
  seq <- REST(seq)
  return action

Simple Reflex Agent pseudocode

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state <- INTERPRET-INPUT(percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action
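A minimal Python sketch of this loop for a two-cell vacuum world (the rule table and helper below are illustrative assumptions, not from the course material):

```python
# Condition-action rules for a hypothetical vacuum world: the percept
# is a (location, current_cell_is_dirty) pair.
RULES = {
    ("L", True): "Suck",
    ("R", True): "Suck",
    ("L", False): "Right",
    ("R", False): "Left",
}

def interpret_input(percept):
    # Here the percept already describes "what the world is like now".
    return percept

def simple_reflex_agent(percept):
    state = interpret_input(percept)   # INTERPRET-INPUT
    return RULES[state]                # RULE-MATCH + RULE-ACTION
```

Because the agent consults only the current percept, adding a wall bump or removing the location sensor immediately exposes the infinite-loop problem discussed above.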

General Tree Search Algorithm with no memory

function TREE-SEARCH(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node, adding the resulting nodes to the frontier

General Tree Search Algorithm uses what type of data structure?

A queue or stack for the frontier (FIFO queue for breadth-first, LIFO stack for depth-first). Graph search additionally uses a hash set for the explored set.
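A hedged sketch of how the frontier's data structure selects the strategy: popping from the front (FIFO queue) gives breadth-first order, popping from the back (LIFO stack) gives depth-first order. The problem interface is the same illustrative one used in the other examples:

```python
def tree_search(problem, use_stack=False):
    """Generic tree search; the frontier discipline picks the strategy
    (FIFO queue -> breadth-first, LIFO stack -> depth-first)."""
    frontier = [(problem.initial_state(), [])]  # (state, path-so-far) pairs
    while frontier:
        state, path = frontier.pop() if use_stack else frontier.pop(0)
        if problem.goal_test(state):
            return path + [state]
        for action in problem.actions(state):
            frontier.append((problem.result(state, action), path + [state]))
    return None  # failure: frontier exhausted
```

Two caveats: `list.pop(0)` is O(n), so a real implementation would use `collections.deque`; and because there is no explored set, tree search can loop forever on state spaces with cycles, which is exactly why graph search keeps one.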

Drawbacks of the Table-Driven Agent

The table is huge; it takes a long time to construct; no autonomy; even with learning, the agent needs a long time to learn the table entries.
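The "huge table" point is easy to quantify: with |P| distinct percepts and a lifetime of T steps, the table needs one entry per possible percept sequence, i.e. |P| + |P|^2 + ... + |P|^T rows (illustrative arithmetic, numbers chosen for the example):

```python
# Entries a table-driven agent needs: one per possible percept sequence
# of length 1..T, when there are num_percepts distinct percepts.
def table_entries(num_percepts, lifetime):
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

print(table_entries(10, 5))  # 111110 rows for only 10 percepts and 5 steps
```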

what is the distinction between rationality and omniscience?

Rationality maximizes expected performance; omniscience (knowing the actual outcome of every action) would maximize actual performance.

what is an Agent function?

it maps any given percept sequence to an action [f: P* → A]; a mathematical treatment of the mapping from percepts to actions.

what is a Simple Reflex Agent?

it uses sensors to perceive what the world is like now, then uses condition-action rules to choose what action to do now, and finally uses actuators to act on the environment.

Search Strategies terms: space complexity:

maximum number of nodes in memory

Deterministic (vs. stochastic)

The next state of the environment is determined by the current state and the agent's action. If the environment is deterministic except for the actions of other agents, then the environment is strategic (strategic is a subset of deterministic).

Search Strategies terms: time complexity:

number of nodes generated

what is Percept

the agent's perceptual inputs

what is Percept sequence

the complete history of everything the agent has perceived

Where does the Simple Problem Solver fit into the agents-and-environments discussion?

■ Static environment ■ Observable ■ Discrete ■ Deterministic
Control system view:
■ Open-loop system: percepts are ignored, thus breaking the loop between agent and environment
■ Closed-loop system: with each action a percept must be taken, because the world may have changed

