AAI_501 Midterm


A* search

Avoids expanding paths that are already expensive. Uses an evaluation function f(n) = g(n) + h(n):
-g(n): cost so far to reach n
-h(n): estimated cost to goal from n
-f(n): estimated total cost of the path through n to the goal
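
A minimal A* sketch in Python, assuming a hypothetical graph stored as an adjacency dict and a user-supplied heuristic h (the example graph and heuristic values are made up for illustration):

    import heapq

    def a_star(graph, start, goal, h):
        """graph: dict node -> list of (neighbor, edge_cost); h(n) estimates cost to goal."""
        frontier = [(h(start), 0, start, [start])]       # entries are (f, g, node, path)
        best_g = {start: 0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)   # expand lowest f(n) = g(n) + h(n)
            if node == goal:
                return path, g
            for nbr, cost in graph.get(node, []):
                g2 = g + cost                            # g(n): cost so far to reach nbr
                if g2 < best_g.get(nbr, float("inf")):
                    best_g[nbr] = g2
                    heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
        return None, float("inf")

    # Hypothetical example graph and heuristic table:
    graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
    h = {"A": 3, "B": 2, "C": 1, "D": 0}.get
    print(a_star(graph, "A", "D", h))   # (['A', 'B', 'C', 'D'], 3)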

nondeterministic/stochastic

if the next state of the environment is NOT completely determined by the current state and the action executed by the agent(s)

Shopping for used AI books on the internet. Environment: fully observable versus partially observable; deterministic versus nondeterministic/stochastic; episodic versus sequential; static versus dynamic; discrete versus continuous. Agent: single agent versus multi-agent

partially observable, deterministic, sequential, static, discrete, single agent. (Can be multi-agent and dynamic if buying via auction; can be dynamic if the purchase stretches over a long period and prices change.)

Exploring the subsurface oceans of Titan. Environment: fully observable versus partially observable; deterministic versus nondeterministic/stochastic; episodic versus sequential; static versus dynamic; discrete versus continuous. Agent: single agent versus multi-agent

partially observable, stochastic, sequential, dynamic, continuous, single agent.

goal-based agent

-The agent needs a goal in order to make a decision
-The behavior of the agent can easily be changed to a different destination by specifying that destination as the goal
-More flexible

Main principles of Hidden markov model

-Comprises hidden variables and observables; the state of the process is described by a single discrete random variable.
-The observed events have no one-to-one correspondence with states but are linked to states through a probability distribution.
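
A small numerical sketch of these ideas in Python, using made-up weather states and umbrella observations (all probabilities are hypothetical); the forward/filtering update shows observations linked to hidden states through emission probabilities rather than a one-to-one mapping:

    import numpy as np

    # Hidden states: 0 = Rainy, 1 = Sunny (a single discrete state variable).
    T = np.array([[0.7, 0.3],      # transition probabilities P(state_t | state_{t-1})
                  [0.4, 0.6]])
    E = np.array([[0.9, 0.1],      # emission probabilities P(observation | state);
                  [0.2, 0.8]])     # observation 0 = umbrella seen, 1 = no umbrella
    belief = np.array([0.5, 0.5])  # initial belief over the hidden states

    # Forward (filtering) update: predict through the transition model,
    # then weight each state by how likely it is to have produced the observation.
    for obs in [0, 0, 1]:                 # umbrella, umbrella, no umbrella
        belief = belief @ T               # predict: push belief through transitions
        belief = belief * E[:, obs]       # update: weight by emission likelihood
        belief = belief / belief.sum()    # normalize to a probability distribution
        print(obs, belief)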

utility based model

-Goals alone are not enough to generate high-quality behavior.
-Many action sequences will get a taxi to its destination, but which is cheaper, quicker, or safer? These preferences are captured by utility functions.
-The utility function is an internalization of the performance measure.
-When goals conflict (e.g., speed vs. safety), the utility function provides the appropriate tradeoff.

Plateau

A flat area of the search landscape. It can be a flat local maximum from which no uphill exit exists, or a shoulder from which progress is still possible.

Local maxima

A peak that is higher than its neighboring states but lower than the global maximum.

simulated annealing

Allows moves to inferior solutions in order to avoid getting stuck in a poor local optimum.
•Δc = F(S_new) - F(S_old), where F is the function to be minimized
•An inferior solution is accepted with a probability: generate a random number u in (0, 1) and accept the move if u < e^(-Δc/T)
•As the temperature T decreases, the probability of accepting worse moves decreases (T is lowered according to a cooling schedule)
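
A minimal simulated-annealing sketch in Python that minimizes a simple one-dimensional function using the acceptance rule above (the objective, neighborhood, and cooling schedule are made up for illustration):

    import math, random

    def simulated_annealing(F, x0, t0=1.0, cooling=0.95, steps=500):
        x, t = x0, t0
        for _ in range(steps):
            x_new = x + random.uniform(-1, 1)      # pick a random neighboring solution
            delta_c = F(x_new) - F(x)              # Δc = F(S_new) - F(S_old)
            # Always accept improvements; accept worse moves with probability e^(-Δc/T).
            if delta_c < 0 or random.random() < math.exp(-delta_c / t):
                x = x_new
            t *= cooling                           # cooling schedule: lower T each step
        return x

    # Example: minimize F(x) = (x - 3)^2 starting far from the optimum.
    print(simulated_annealing(lambda x: (x - 3) ** 2, x0=-10.0))   # close to 3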

fully observable

An agent's sensors give it access to the complete state of the environment at each point in time. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action.

dynamic

If the environment can change while an agent is deliberating

deterministic

If the next state of the environment is completely determined by the current state and the action executed by the agent(s)

Provide a complete problem formulation for the following. The problem formulation should include: initial state, goal test, cost function. Each can be a few words or a sentence. You have a program that outputs the message "illegal input record" when fed a certain file of input records. You know that processing of each record is independent of the other records. You want to discover which record is illegal.

Initial state: considering all input records. Goal test: considering a single record that produces the "illegal input record" message. Cost function: number of runs of the program.
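
Because the records are processed independently, the illegal record can be found by bisection: run the program on half of the remaining records and keep whichever half still triggers the error. A hypothetical sketch in Python, where fails(subset) stands in for running the program on that subset:

    def find_illegal(records, fails):
        """Narrow down to the single record for which fails([record]) is True."""
        lo, hi = 0, len(records)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if fails(records[lo:mid]):   # the error is in the first half
                hi = mid
            else:                        # otherwise it must be in the second half
                lo = mid
        return records[lo]               # cost: about log2(n) runs of the program

    # Example with a fake failure predicate: record "r7" is the illegal one.
    records = [f"r{i}" for i in range(10)]
    print(find_illegal(records, lambda subset: "r7" in subset))   # -> r7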

Provide a complete problem formulation for the following. The problem formulation should include: initial state, goal test, cost function. Each can be a few words or a sentence. Using only four colors, you have to color a planar map in such a way that no two adjacent regions have the same color.

Initial state: no regions colored. Goal test: all regions colored, and no two adjacent regions have the same color. Cost function: number of color assignments.
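
A small backtracking sketch in Python for the four-coloring formulation above, run on a made-up adjacency map (a hypothetical example, not a specific textbook map):

    COLORS = ["red", "green", "blue", "yellow"]

    def color_map(adjacency, assignment=None):
        """adjacency: dict region -> set of neighboring regions."""
        assignment = assignment or {}
        uncolored = [r for r in adjacency if r not in assignment]
        if not uncolored:
            return assignment            # goal test met: every region is colored legally
        region = uncolored[0]
        for color in COLORS:
            # only assign a color that no already-colored neighbor is using
            if all(assignment.get(nbr) != color for nbr in adjacency[region]):
                result = color_map(adjacency, {**assignment, region: color})
                if result:
                    return result
        return None                      # backtrack: no legal color for this region

    # Hypothetical planar map: four mutually adjacent regions need all four colors.
    adjacency = {"A": {"B", "C", "D"}, "B": {"A", "C", "D"},
                 "C": {"A", "B", "D"}, "D": {"A", "B", "C"}}
    print(color_map(adjacency))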

Your goal is to navigate a robot out of a maze. The robot starts in the center of the maze facing north. You can turn the robot to face north, east, south, or west. You can direct the robot to move forward a certain distance, although it will stop after hitting a wall. a) Formulate this problem: describe the initial state, goal test, successor function, and cost function. The successor function describes the actions available to the robot in a state and the states they lead to. We'll define the coordinate system so that the center of the maze is at (0, 0) and the maze itself is a square from (−1, −1) to (1, 1).

Initial state: robot at coordinate (0, 0) facing north. Goal test: |x| > 1 or |y| > 1, where (x, y) is the current location (i.e., the robot is outside the maze). Successor function: move forward any distance d; change the direction the robot is facing. Cost function: total distance moved.

Depth First Search

Visits the child vertices before visiting the sibling vertices. Usually implemented with a stack.
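
A minimal iterative depth-first traversal in Python driven by an explicit stack, on a hypothetical adjacency-list graph:

    def dfs(graph, start):
        """Depth-first traversal: children are visited before siblings."""
        visited, stack = [], [start]          # LIFO stack drives the search order
        while stack:
            node = stack.pop()                # take the most recently added node
            if node not in visited:
                visited.append(node)
                # push children so they are visited before this node's siblings
                stack.extend(reversed(graph.get(node, [])))
        return visited

    graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}
    print(dfs(graph, "A"))   # ['A', 'B', 'D', 'E', 'C', 'F']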

breadth first search

Visits the neighbor vertices before visiting the child vertices. Usually implemented with a queue.
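
The breadth-first counterpart in Python, using a FIFO queue on the same hypothetical graph:

    from collections import deque

    def bfs(graph, start):
        """Breadth-first traversal: neighbors are visited before children."""
        visited, queue = [start], deque([start])   # FIFO queue drives the search order
        while queue:
            node = queue.popleft()                 # take the earliest added node
            for nbr in graph.get(node, []):
                if nbr not in visited:
                    visited.append(nbr)
                    queue.append(nbr)
        return visited

    graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": [], "F": []}
    print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D', 'E', 'F']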

particle filters

A tracking problem: the goal is to keep track of the current location (state) of the system given noisy observations. In order to estimate the state, two sources of information are required: a process model and a measurement model. The algorithm is an iterative process, repeated for a certain number of samples (particles). Each sample is weighted by the likelihood it assigns to the new evidence, and the probability that a sample is selected is proportional to its weight. It is a Bayesian filter, which means estimation is performed using Bayesian theory. The main idea is updating, or pushing, beliefs through transitions.
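
A toy one-dimensional particle filter sketch in Python (the process model, noise levels, and particle count are made up), showing the predict / weight-by-likelihood / resample loop described above:

    import math, random

    def gaussian_likelihood(x, mean, std):
        return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

    def particle_filter_step(particles, control, measurement,
                             process_noise=0.5, measurement_noise=1.0):
        # 1. Predict: push each particle through the (hypothetical) process model.
        particles = [p + control + random.gauss(0, process_noise) for p in particles]
        # 2. Weight: each particle is weighted by the likelihood it assigns to the evidence.
        weights = [gaussian_likelihood(measurement, p, measurement_noise) for p in particles]
        # 3. Resample: a particle is selected with probability proportional to its weight.
        return random.choices(particles, weights=weights, k=len(particles))

    # Track a target moving +1 per step from position 0, given noisy position readings.
    particles = [random.uniform(-10, 10) for _ in range(1000)]
    true_pos = 0.0
    for step in range(5):
        true_pos += 1.0
        measurement = true_pos + random.gauss(0, 1.0)
        particles = particle_filter_step(particles, control=1.0, measurement=measurement)
        print(step, sum(particles) / len(particles))   # estimate of the current location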

how to find metric in uniform cost search

add the weights of the edges along the path taken so far

what is the frontier

all nodes that have been generated but not yet expanded (the leaf nodes of the search tree, with no children yet)

rational agent

chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date

Playing a tennis match. Environment: fully observable versus partially observable; deterministic versus nondeterministic/stochastic; episodic versus sequential; static versus dynamic; discrete versus continuous. Agent: single agent versus multi-agent

fully observable, stochastic, episodic, dynamic, continuous, multiagent.

Bidding on an item at an auction. Environment: fully observable versus partially observable; deterministic versus nondeterministic/stochastic; episodic versus sequential; static versus dynamic; discrete versus continuous. Agent: single agent versus multi-agent

fully observable, stochastic, sequential, static, discrete, multiagent.

static

if the environment cannot change while an agent is deliberating

discrete

if an environment has a finite number of states (e.g., chess)

continuous

if an environment has an infinite number of states (e.g., driving a car)

single-agent

just one agent (like someone solving a crossword puzzle)

multi-agent

more than one agent (like two people playing chess)

Partially Observable

The agent's sensors are noisy and inaccurate, or parts of the state are simply missing from the sensor data.

What is an admissible heuristic?

One that never overestimates the cost to the goal: the estimated cost must always be lower than or equal to the actual cost of reaching the goal state.
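
For example, on a 4-connected grid with unit step cost, the Manhattan distance counts the minimum number of moves needed while ignoring obstacles, so it can never exceed the true cost. A tiny illustrative sketch in Python (positions are hypothetical):

    def manhattan(pos, goal):
        """Admissible heuristic for 4-connected grid movement with unit step cost."""
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    print(manhattan((1, 2), (4, 6)))   # 7, never more than the length of a real path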

simple reflex agent

takes actions based on the current percept, ignoring the rest of the percept history.

episodic

the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes

sequential

the current decision could affect all future decisions.

Main concepts of first order Markov models

The current state depends only on the previous state. This can be used to simplify the conditional probability: P(X_t | X_1, ..., X_{t-1}) = P(X_t | X_{t-1}).

Which node in the frontier will be expanded next if uniform cost search (Dijkstra's) is used?

the one with the lowest path cost so far (lowest g(n))

How does Dijkstra's algorithm work

•Dijkstra's Algorithm finds the shortest path between a given node (which is called the "source node") and all other nodes in a graph. •This algorithm uses the weights of the edges to find the path that minimizes the total distance (weight) between the source node and all other nodes.
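
A compact sketch of Dijkstra's algorithm in Python on a hypothetical weighted graph, returning the minimum total edge weight from the source to every reachable node:

    import heapq

    def dijkstra(graph, source):
        """graph: dict node -> list of (neighbor, weight). Returns shortest distances."""
        dist = {source: 0}
        frontier = [(0, source)]               # priority queue ordered by path cost
        while frontier:
            d, node = heapq.heappop(frontier)  # always expand the least-cost node
            if d > dist.get(node, float("inf")):
                continue                       # stale entry: a shorter path was already found
            for nbr, w in graph.get(node, []):
                nd = d + w                     # path cost = sum of edge weights so far
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(frontier, (nd, nbr))
        return dist

    graph = {"S": [("A", 2), ("B", 5)], "A": [("B", 1), ("G", 6)], "B": [("G", 2)], "G": []}
    print(dijkstra(graph, "S"))   # {'S': 0, 'A': 2, 'B': 3, 'G': 5}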

Genetic Algorithms

•Initialize population •Selection •Crossover and mutation •Replacement •Repeat until the generation limit is reached or a best-fit solution is found
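
A minimal sketch in Python of that loop for a toy problem (maximize the number of 1 bits in a bit string); the population size, mutation rate, and fitness function are made up for illustration:

    import random

    GENES, POP, GENERATIONS, MUTATION = 20, 30, 50, 0.05
    fitness = sum                              # toy objective: count the 1 bits

    def random_individual():
        return [random.randint(0, 1) for _ in range(GENES)]

    def select(population):                    # tournament selection of size 2
        return max(random.sample(population, 2), key=fitness)

    def crossover(a, b):                       # single-point crossover
        point = random.randrange(1, GENES)
        return a[:point] + b[point:]

    def mutate(ind):                           # flip each bit with a small probability
        return [1 - g if random.random() < MUTATION else g for g in ind]

    population = [random_individual() for _ in range(POP)]   # initialize population
    for _ in range(GENERATIONS):               # repeat for a fixed number of generations
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP)]     # replacement: build the next generation
    print(max(map(fitness, population)))       # best fitness found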

Local Search

•Local search algorithms can solve optimization problems •If the elevation corresponds to an objective function, the goal is to find the global maximum (hill climbing) or the global minimum (gradient descent)
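
A small gradient-descent sketch in Python on a made-up one-dimensional objective, illustrating local search toward a global minimum:

    def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
        """Repeatedly move downhill along the objective's gradient."""
        x = x0
        for _ in range(steps):
            x = x - learning_rate * grad(x)   # step against the slope
        return x

    # Objective f(x) = (x - 2)^2 has gradient 2 * (x - 2); its global minimum is at x = 2.
    print(gradient_descent(lambda x: 2 * (x - 2), x0=10.0))   # approximately 2.0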

tabu search

•Move to the best available neighborhood solution
•Maintain a tabu list (moves to avoid): a list of solution points that must be avoided, updated based on memory; moves are removed from the list after a certain time period
•The algorithm allows exceptions to the list (aspiration criteria)
•It also expands the search area and allows modifying the size of the list
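
A simplified tabu-search sketch in Python for a toy integer optimization (the neighborhood, tabu-list size, and objective are hypothetical):

    from collections import deque

    def tabu_search(F, start, steps=100, tabu_size=5):
        """Minimize F over the integers by always moving to the best non-tabu neighbor."""
        current = best = start
        tabu = deque(maxlen=tabu_size)          # recently visited points to avoid
        for _ in range(steps):
            neighbors = [current - 1, current + 1]
            # aspiration criterion: a tabu move is allowed if it beats the best so far
            allowed = [n for n in neighbors if n not in tabu or F(n) < F(best)]
            if not allowed:
                break
            current = min(allowed, key=F)       # best available neighborhood solution
            tabu.append(current)                # update the tabu list (oldest entry drops off)
            if F(current) < F(best):
                best = current
        return best

    print(tabu_search(lambda x: (x - 7) ** 2, start=0))   # -> 7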

A star search

•Uses a combination of two functions
•g(n) = cost so far to reach node n
•h(n) = heuristic (estimated cost to reach the destination from n)
•f(n) = g(n) + h(n); expand the node with the lowest f(n)

Greedy search

Expands the node with the lowest h(n) - the node that appears closest to the goal.

