AI Exam Questions


agent function

a mathematical way of describing an agent's behavior that maps any given percept sequence to an action

goal

information that describes situations that are desirable

What is the hardest case?

partially observable, multiagent, stochastic, sequential, dynamic, continuous, and unknown

What does an agent's choice of action depend on?

the entire percept sequence observed to date, but not on anything it hasn't perceived.

atomic vs factored representation

A factored representation more easily represents how to get from one state to another, and can also represent uncertainty by leaving the value of a variable/attribute blank

Is randomization better in single or multiagent environments?

Multiagent, to avoid predictability in competitive environments.

When does randomizing actions help an agent?

When the agent is stuck in an infinite loop

Is it possible for an environment to be unknown but fully observable?

Yes, as video games can give you the full state of things... but a new player might not know the controls.

Is it possible for an environment to be known but be partially observable?

Yes, such as in games of chance, the rules are known, but not the outcome of obscured cards.

simple reflex agent

an agent that selects actions on the basis of the current percept alone, ignoring the rest of the percept history, using condition-action rules
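The definition above can be sketched in code. A minimal, hypothetical example (the classic two-square vacuum world with locations A and B): the agent's condition-action rules look only at the current percept.

```python
def reflex_vacuum_agent(percept):
    """Select an action from the current percept alone, via condition-action rules."""
    location, status = percept
    if status == "Dirty":    # rule: current square dirty -> clean it
        return "Suck"
    elif location == "A":    # rule: clean and at A -> move to B
        return "Right"
    elif location == "B":    # rule: clean and at B -> move to A
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
```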

learning agent

an agent that can operate in initially unknown environments and become more competent than its initial knowledge alone might allow

rational agent

an agent that selects an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has
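As a sketch (names and scores are illustrative, not from the source): the selection rule boils down to an argmax of expected performance over the available actions, where the scores encode the agent's percept evidence and built-in knowledge.

```python
def rational_choice(actions, expected_performance):
    """Pick the action whose expected performance measure is highest.

    expected_performance maps each action to a score summarizing the
    evidence from the percept sequence plus built-in knowledge.
    """
    return max(actions, key=expected_performance)

scores = {"Suck": 0.9, "Right": 0.2}               # hypothetical estimates
print(rational_choice(scores.keys(), scores.get))  # -> Suck
```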

model-based agent

an agent that maintains internal state and a model of how the world works in order to keep track of the parts of the environment it cannot currently observe

percepts

an agent's perceptual inputs at any given instant

nondeterministic environment

an environment in which actions are characterized by their possible outcomes, but no probabilities are attached to those outcomes

uncertain environment

an environment that is not fully observable or not deterministic

performance measure

an evaluation of any given sequence of environment states

agent

anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators

ways to represent state

atomic, factored or structured

condition-action rule

basically an if -> then statement

single agent vs multiagent

a broad, shallow description of whether there are multiple agents acting in the environment or just one

utility

comparison of different world states according to exactly how "happy" they would make the agent

critic

conceptual component of a learning agent that gives feedback to the learning element (with respect to a fixed performance standard) on how the agent is doing; the learning element uses this feedback to determine how the performance element should be modified to do better

learning element

conceptual component of a learning agent responsible for making improvements

performance element

conceptual component of a learning agent responsible for selecting external actions - essentially the whole agent without the ability to improve

problem generator

conceptual component of a learning agent responsible for suggesting actions that will lead to new and informative experiences, i.e. what causes the agent to explore and take risks

competitive environment

each agent in the environment is trying to maximize their performance measure which means minimizing other agents' performance measure if possible

goal vs utility

goals tend to be binary (achieved or not), while utility (happiness) can be measured on a scale

actuator

how an agent interacts with its environment

sensor

how an agent perceives its environment

internal state

state maintained inside the agent, based on its percept history, that reflects at least some of the unobserved aspects of the current environment state
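A hedged sketch of the idea, again in the hypothetical two-square vacuum world: the agent updates an internal model from each percept, so it can reason about the square it cannot currently see.

```python
class ModelBasedVacuumAgent:
    """Maintains internal state to track unobserved parts of the world."""

    def __init__(self):
        # Believed status of each location; None means not yet observed.
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status  # update internal state from percept
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"              # everything known clean: stop
        return "Right" if location == "A" else "Left"
```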

agent program

the concrete implementation of the agent function: the code that maps percepts to a chosen action

semidynamic environment

the environment itself does not change with the passage of time, but an agent's performance score does

atomic representation of state

it's a black box - each state of the world is indivisible with no internal structure

fully observable vs partial observable environment

whether an agent's sensors give it access to the complete state of the environment (as relevant to actions satisfying the performance measure) or not

model (of the world)

knowledge of how the environment works (What rules the agent knows about the environment)

conceptual components of a learning agent

learning element, performance element, critic, problem generator

autonomy

relying on one's own percepts rather than on the prior knowledge of one's designer

PEAS

performance measure, environment, actuators, sensors

discrete vs. continuous environment

these terms can apply to the state of the environment, to the way time is handled, and to the percepts and actions of the agent; each of the three aspects can independently be discrete or continuous

example of atomic representation of state

search and game playing algorithms

factored representation of state

splits up each state into a fixed set of variables or attributes which have some value - these variables can be shared between states
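A small illustration of the contrast (the values are made up): an atomic state is an opaque label, while a factored state exposes named variables, and an unknown value can simply be left blank.

```python
# Atomic: each state is indivisible, a black box with no internal structure.
atomic_state = "S3"

# Factored: a fixed set of variables/attributes with values; the same
# variables recur across states, and None marks a value we are uncertain of.
factored_state = {
    "location": "A",
    "fuel": 0.7,
    "door_open": None,  # uncertainty: value simply left blank
}
```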

what does a nondeterministic environment normally have the agent do?

succeed for all possible outcomes of its actions

task environment

the PEAS

agent program vs agent function

the agent program is a concrete implementation (something actually running in the agent) while the agent function is an abstract mathematical description
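One way to make the distinction concrete (a sketch, with made-up table entries): a table-driven agent program implements the abstract agent function by looking up the entire percept sequence observed so far.

```python
# The agent FUNCTION: an abstract mapping from percept sequences to
# actions, written out here as an explicit (partial) lookup table.
AGENT_FUNCTION = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percept_sequence = []  # everything perceived to date

# The agent PROGRAM: a concrete implementation that consults the table.
def table_driven_agent(percept):
    percept_sequence.append(percept)
    return AGENT_FUNCTION.get(tuple(percept_sequence))

print(table_driven_agent(("A", "Clean")))  # -> Right
```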

percept sequence

the complete history of everything the agent has ever perceived

architecture

the computing device (with physical sensors and actuators) that the agent program runs on

environment

the situation an agent is in

cooperative environment

where all agents take actions to improve everyone's performance measures

episodic vs sequential environments

whether an agent's experience is divided into atomic episodes, in which the agent receives a percept and performs a single action and each episode does not depend on actions taken in previous episodes - i.e. decisions do not have long-standing consequences (episodic) - or whether the current decision can affect all future decisions (sequential)

known vs unknown environment

whether or not the agent is aware of the rules that define the environment (i.e. the laws of physics governing said environment).

deterministic vs stochastic

whether the next state of the environment is completely determined by the current state and the action executed by the agent (deterministic) or not (stochastic)

dynamic or static environments

whether the environment can change while an agent is deliberating or not

