CSE 545 - Agents
A (Actuators) - Playing Soccer
Legs, Head, Upper Body
Agent Function
A mapping from percept sequences to actions that defines the behaviour of an agent.
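A minimal sketch, taking this definition literally, of an agent function as an explicit table from percept sequences to actions (Python; the percepts, actions, and "NoOp" default are hypothetical vacuum-world placeholders, not course code):

# Agent function as a lookup table: each key is a full percept sequence,
# each value the action the function prescribes for that sequence.
agent_function = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... one entry per possible percept sequence (impractical in general)
}

def table_driven_agent(percept_sequence):
    return agent_function.get(tuple(percept_sequence), "NoOp")

print(table_driven_agent([("A", "Dirty")]))  # -> "Suck"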
Reflex Agent
An agent capable of considering only its current perception of the world.
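As an illustration, a rough sketch of a simple reflex agent for the two-square vacuum world; the percept format and action names are assumptions for the example, not course code:

# The agent reacts only to its current percept (location, dirt status)
# and keeps no memory of past percepts.
def reflex_vacuum_agent(percept):
    location, status = percept            # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> "Suck"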
Model-based Agent
An agent that attempts to internalize aspects of the world through an approximating model.
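A possible sketch of a model-based agent's structure: it keeps an internal state that approximates unobserved aspects of the world and updates it from the last action and the newest percept. The update and action-selection functions are placeholders to be supplied:

class ModelBasedAgent:
    def __init__(self, update_state, choose_action, initial_state):
        self.state = initial_state          # internal model of the world
        self.last_action = None
        self.update_state = update_state    # (state, last_action, percept) -> new state
        self.choose_action = choose_action  # state -> action

    def __call__(self, percept):
        # Revise the internal model, then act on the revised model.
        self.state = self.update_state(self.state, self.last_action, percept)
        self.last_action = self.choose_action(self.state)
        return self.last_action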
Goal-based Agent
An agent whose performance measure does not depend directly on local actions but on some (potentially) distant goal.
Utility-based Agent
An agent whose performance measure is given by a utility function, which determines which states are preferable and which are not on a continuous or many-valued scale.
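One way to contrast the goal-based and utility-based agents above in code, using an assumed one-step lookahead model predict(state, action) that returns a predicted next state; all names here are illustrative only:

def goal_based_choice(state, actions, predict, is_goal):
    # Goal-based: any action whose predicted outcome satisfies the goal will do.
    for a in actions:
        if is_goal(predict(state, a)):
            return a
    return None

def utility_based_choice(state, actions, predict, utility):
    # Utility-based: prefer the action whose predicted outcome scores highest
    # on a continuous (or many-valued) utility scale.
    return max(actions, key=lambda a: utility(predict(state, a)))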
Learning Agent
An agent whose performance can improve with experience.
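A toy learning agent in an assumed multi-armed-bandit setting (not from the course): it estimates the average reward of each action and increasingly favours the best one, so its performance improves with experience:

import random

class LearningAgent:
    def __init__(self, actions, epsilon=0.1):
        self.totals = {a: 0.0 for a in actions}   # summed reward per action
        self.counts = {a: 0 for a in actions}     # times each action was tried
        self.epsilon = epsilon                    # exploration rate

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.totals))   # explore
        return max(self.totals,
                   key=lambda a: self.totals[a] / max(self.counts[a], 1))  # exploit

    def learn(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1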
Agent
An algorithmic entity capable of displaying seemingly intelligent behaviour.
S (Sensors) - Playing Soccer
Eyes, Ears
Every agent is rational in an unobservable environment.
False. Built-in knowledge can make a particular agent rational in an unobservable environment, but not every agent: a vacuum agent that repeatedly cleans and moves (clean, move, clean, move) would be rational, while one that never moves would not be.
Every agent function is implementable by some program/machine combination.
False. Consider an agent whose only action is to return an integer and which perceives a bit each turn. It gains a point of performance if the integer returned matches the value of the entire bit string perceived so far. Eventually, any agent program will fail because it will run out of memory.
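A sketch of that bit-string agent as a Python generator: it makes visible why any concrete program must store an ever-growing amount of information and so must eventually run out of memory:

def bit_string_agent():
    bits = ""                                  # grows without bound
    while True:
        bit = yield (int(bits, 2) if bits else 0)
        bits += str(bit)

agent = bit_string_agent()
next(agent)            # prime the generator
print(agent.send(1))   # percept history "1"  -> 1
print(agent.send(0))   # percept history "10" -> 2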
Imagine the next Mars rover stops working upon arrival on Mars. From this we can deduce that the Mars rover is not a rational agent.
False. It might have been hit by a meteor, and its design could still be optimal in an expected (average) sense.
A chess playing agent operates in an episodic task environment.
False. It is sequential: past moves affect future ones.
The input to an agent program is the same as the input to the agent function.
False. The input to the agent function is the percept history; the input to the agent program is only the current percept. It is up to the agent program to record any relevant history needed to choose actions.
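To make the distinction concrete, a small sketch (with an invented "alternate Left/Right" rule): the agent function depends on the whole percept history, but the agent program is handed only the current percept and must keep the history itself:

class AlternatingAgentProgram:
    def __init__(self):
        self.history = []             # the program's own record of past percepts

    def __call__(self, percept):      # called with the current percept only
        self.history.append(percept)
        return "Left" if len(self.history) % 2 else "Right"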
An agent that senses only partial information about the state cannot be perfectly rational.
False. The vacuum-cleaning agent is rational but does not observe the state of the square adjacent to it.
Overview - Playing Soccer
Partially observable, multi-agent, stochastic, sequential, dynamic, continuous, unknown
Agent Program
Physical program implementing or approximating an agent function.
E (Environment) - Playing Soccer
Soccer Field
Rationality
Behaviour that maximizes one's own expected reward or performance.
There exist task environments in which no pure reflex agent can behave rationally.
True. Anything where memory is required to do well will thwart a reflex agent.
There exists a task environment in which every agent is rational.
True. Consider a task environment in which all actions (including no action) give the same, equal reward.
Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a deterministic task environment in which this agent is rational.
True. Consider the "all actions always give equal reward" case.
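A quick toy check of this claim, with an invented deterministic environment in which every action pays the same reward: the uniformly random agent scores exactly as well as any fixed policy, so it is rational there:

import random

ACTIONS = ["Left", "Right", "NoOp"]

def reward(action):
    return 1                            # every action, including NoOp, pays 1

random_score = sum(reward(random.choice(ACTIONS)) for _ in range(100))
fixed_score = sum(reward("Left") for _ in range(100))
print(random_score == fixed_score)      # True: both agents are maximal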
It is possible for a given agent to be perfectly rational in two distinct task environments.
True. Consider two environments based on betting on the outcome of a roll of two dice. In one environment the dice are fair; in the other they are biased to always give 3 and 4. The agent can bet on what the sum of the dice will be, with equal reward for guessing correctly on any outcome. The agent that always bets on 7 will be rational in both cases.
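A rough simulation of the dice example (reward details aside): 7 is the most probable sum with fair dice and the certain sum when the dice are biased to land on 3 and 4, so always betting 7 maximizes expected reward in both environments:

import random
from collections import Counter

fair = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000))
print(fair.most_common(1))   # 7 is (with overwhelming probability) the most common sum
print(3 + 4 == 7)            # the biased dice always total 7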
A chess-playing agent operates in a strategic task environment.
True. Strategic means deterministic except for the actions of other agents; the only uncertainty comes from the opponent.
P (Performance measure) - Playing Soccer
Win/Lose
Are there agent functions that cannot be implemented by any agent program?
Yes. For example, any agent function that requires unbounded memory (such as the bit-string agent above) cannot be implemented on a finite machine.
Given a fixed machine architecture, does each agent program implement exactly one agent function?
Yes. Given a percept sequence, an agent program selects an action. Implementing more than one agent function would require the program to select different actions (or different distributions over actions) for the same percept sequence.
Can there be more than one agent program that implements a given function?
Yes. Assume we are given an agent function whose actions depend only on the previous p percepts. One program could remember exactly the previous p percepts to implement the agent function, while another could remember more than p percepts and still implement the same function.
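A sketch of this argument for p = 1: both programs below implement the same agent function ("Suck if the most recent percept is Dirty, else Right", a rule invented for illustration), but one stores only the last percept while the other keeps the entire history:

from collections import deque

class LastPerceptProgram:
    def __init__(self):
        self.last = deque(maxlen=1)    # remembers exactly p = 1 percept
    def __call__(self, percept):
        self.last.append(percept)
        return "Suck" if self.last[-1] == "Dirty" else "Right"

class FullHistoryProgram:
    def __init__(self):
        self.history = []              # remembers more than p percepts
    def __call__(self, percept):
        self.history.append(percept)
        return "Suck" if self.history[-1] == "Dirty" else "Right"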