CS 440 midterm

Define rationality

-A rational agent acts to optimally achieve its goals
-Being rational means maximizing your (expected) utility
-Rationality only concerns the decisions/actions that are made, not the cognitive process behind them

Environment characteristics: known vs unknown

-are the rules of the environment (transition model and rewards associated with states) known to the agent?
-strictly speaking, this is not a property of the environment but of the agent's state of knowledge

what is A* search?

-avoid expanding paths that are already expensive
-the evaluation function f(n) is the estimated total cost of the path through node n to the goal:
--f(n) = g(n) + h(n)
--g(n) = cost so far to reach n (path cost)
--h(n) = estimated cost from n to goal (heuristic)
-if h(n) is admissible, A* is optimal, but only without repeated state detection (A* graph search needs a consistent heuristic)
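
A minimal A* sketch in Python, assuming states are hashable and comparable, successors(s) yields (next_state, step_cost) pairs, and h is an admissible heuristic (these names are hypothetical placeholders, not from the course):

    import heapq

    def a_star(start, goal, successors, h):
        # frontier is a priority queue ordered by f(n) = g(n) + h(n)
        frontier = [(h(start), 0, start, [start])]
        explored = set()
        while frontier:
            f, g, state, path = heapq.heappop(frontier)
            if state == goal:
                return path, g
            if state in explored:
                continue
            explored.add(state)
            for nxt, cost in successors(state):
                if nxt not in explored:
                    g2 = g + cost  # cost so far to reach nxt
                    heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
        return None, float('inf')

Note this version uses repeated-state detection (graph search), so per the admissible-vs-consistent card later in this set, optimality requires a consistent heuristic.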

What is informed search?

-give the algorithm hints about the desirability of different states
-use an evaluation function to rank nodes and select the most promising one to expand

what is Weighted A* search?

-speed up search at the expense of optimality
-take an admissible heuristic, inflate it by a scalar alpha > 1, and then perform A* search as usual
-fewer nodes tend to get expanded, but the solution may be sub-optimal (its cost will be at most alpha times the cost of the optimal solution)
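
For example, reusing the hypothetical a_star sketch above, weighted A* just inflates the heuristic:

    def weighted_a_star(start, goal, successors, h, alpha=1.5):
        # inflate an admissible h by alpha > 1; the returned cost is
        # at most alpha times the cost of the optimal solution
        return a_star(start, goal, successors, lambda s: alpha * h(s))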

CSP: local search

-start with complete states (all variables assigned)
-allow states with unsatisfied constraints
-try to improve states by reassigning variable values
-hill-climbing search (sketched below):
--each iteration, randomly select a conflicted variable and choose a new value that violates the fewest constraints
--attempt to greedily minimize the total number of violated constraints
--problem: local minima
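
A minimal min-conflicts hill-climbing sketch in Python, assuming conflicts(var, val, assignment) counts the constraints violated if var were set to val (a hypothetical interface, not from the course):

    import random

    def min_conflicts(variables, domains, conflicts, max_steps=10000):
        # start from a complete random assignment (all variables assigned)
        assignment = {v: random.choice(domains[v]) for v in variables}
        for _ in range(max_steps):
            conflicted = [v for v in variables
                          if conflicts(v, assignment[v], assignment) > 0]
            if not conflicted:
                return assignment  # no violated constraints: solution found
            var = random.choice(conflicted)  # random conflicted variable
            # greedily pick the value that violates the fewest constraints
            assignment[var] = min(domains[var],
                                  key=lambda val: conflicts(var, val, assignment))
        return None  # may be stuck in a local minimum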

What is the Turing Test?

-tests a machine's ability to exhibit intelligent behavior equivalent to a human's
-a human evaluator judges natural language conversations between a human and a machine

what is the state space?

-the initial state, actions, and transition model define the state space of the problem
--the set of all states reachable from the initial state by any sequence of actions
--can be represented as a directed graph where the nodes are states and the links between nodes are actions

tree search: Uniform-cost search

-uninformed
-for each frontier node, save the total cost of the path from the initial state to that node
-expand the frontier node with the lowest path cost
-implementation: frontier is a priority queue ordered by path cost
-equivalent to BFS if all step costs are equal; equivalent to Dijkstra's algorithm in general
-Complete? yes, if step cost is greater than some positive constant (sigma)
-Optimal? yes, nodes are expanded in increasing order of path cost
-Time? the number of nodes with path cost <= the cost of the optimal solution (C*): O(b^(C*/sigma)); can be bigger than O(b^d), since the search can explore long paths of small steps before exploring shorter paths of larger steps
-Space? O(b^(C*/sigma))
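
Since uniform-cost search is just A* with h(n) = 0, it can be expressed in terms of the hypothetical a_star sketch above:

    def uniform_cost_search(start, goal, successors):
        # priority queue ordered by path cost g(n) alone
        return a_star(start, goal, successors, lambda s: 0)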

tree search: DFS

-uninformed search strategy
-expand the deepest unexpanded node
-implementation: frontier is a LIFO stack
-Complete? no
-Optimal? no
-Time? O(b^m), where m is the max length of any path in the state space
-Space? O(bm)

tree search: BFS

-uninformed search strategy
-expand the shallowest unexpanded node
-implementation: frontier is a FIFO queue
-Complete? yes, if b is finite
-Optimal? yes, if cost = 1 per step
-Time? number of nodes in a b-ary tree of depth d: O(b^d)
-Space? O(b^d); space is the bigger problem with BFS, compared to DFS
-when steps have different costs, BFS finds the path with the fewest steps but not always the cheapest path

tree search: Iterative deepening search

-uninformed search strategy
-use DFS as a subroutine:
--check the root
--do DFS for a path of length 1
--if there is no path of length 1, do a DFS for a path of length 2
--if there is no path of length 2, do a DFS for a path of length 3
-Complete? yes
-Optimal? yes, if step cost = 1
-Time? (d+1)b^0 + d*b^1 + (d-1)b^2 + ... + b^d = O(b^d)
-Space? O(bd)
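
A sketch of iterative deepening in Python, assuming successors(s) yields successor states (a hypothetical interface):

    def depth_limited_dfs(state, goal, successors, limit, path):
        # DFS restricted to paths of at most `limit` further steps
        if state == goal:
            return path
        if limit == 0:
            return None
        for nxt in successors(state):
            if nxt not in path:  # avoid repeating a state along the current path
                result = depth_limited_dfs(nxt, goal, successors, limit - 1,
                                           path + [nxt])
                if result is not None:
                    return result
        return None

    def iterative_deepening(start, goal, successors, max_depth=50):
        # run depth-limited DFS with limits 0, 1, 2, ... until a goal is found
        for limit in range(max_depth + 1):
            result = depth_limited_dfs(start, goal, successors, limit, [start])
            if result is not None:
                return result
        return None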

What is the distinction between a world state and a search tree node?

A world state is a description or "snapshot" of the world. A search tree node is part of the search tree data structure. It contains a world state along with other information (heuristic function value, evaluation function value, parent pointer, etc.). In general, it is possible for multiple tree nodes to contain the same world state.

PEAS: What is A?

Actuators - how an agent changes the environment according to the transition model, resulting in a successor state
EX: for a chess AI, the actuator moves the pieces

What is the difference between admissible and consistent heuristics?

An admissible heuristic never overestimates the cost to reach the goal; while a consistent heuristic is slightly stronger. For a consistent heuristic, for every node n and every successor n' generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n'. A* tree search (i.e., search with no repeated state detection) with an admissible heuristic is optimal; A* graph search (i.e., search with repeated state detection) requires a consistent heuristic to be optimal.

Give an example of a coordination game and an anti-coordination game. For each game, write down its payoff matrix, list dominant strategies and pure strategy Nash equilibria (if any).

An example of a coordination game covered in class is Stag Hunt. There is no dominant strategy; the Nash equilibria are (Stag, Stag) and (Hare, Hare). An example of an anti-coordination game is the Game of Chicken. There is no dominant strategy; the equilibria are (Straight, Chicken) and (Chicken, Straight). Payoff matrices are shown below.
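
One standard payoff assignment for each game, with the row player's payoff listed first (the exact numbers are illustrative; only their ordering matters):

Stag Hunt:
              Stag   Hare
    Stag      2,2    0,1
    Hare      1,0    1,1

Game of Chicken:
              Straight   Chicken
    Straight  -10,-10    1,-1
    Chicken   -1,1       0,0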

Discuss the relative strengths and weaknesses of BFS for AI problems.

BFS
+ Complete if the branching factor is finite
+ Optimal if step costs are constant
- Time and space complexity are exponential in the length of the shortest solution path (space complexity is the bigger problem)

Explain why it is a good heuristic to choose the variable that is most constrained but the value that is least constraining in a CSP search.

By choosing the most constrained variable we minimize the branching factor of backtracking search, and we select the variable that is most likely to cause a failure soon. This prunes the search tree by avoiding pointless searches through other variables when one is bound to fail. By choosing the least constraining value we keep as many options open as possible for the neighboring variables, giving the search more flexibility and maximizing the chance of finding a solution.

Discuss the relative strengths and weaknesses of DFS for AI problems

DFS
+ May find a goal faster than BFS if there are many solution paths
+ Space complexity is linear in the length of the solution path (assuming no repeated state detection)
- Not optimal
- Not complete in infinite-depth spaces or spaces with loops (it is complete in finite spaces, but should be modified to avoid repeated states along a path)
- Running time can be much worse than BFS's if the maximum path length is much larger than the optimal solution length

PEAS: What is E?

Environment - a formal representation of world states
-a tuple: (var1 = val1, var2 = val2, ..., varn = valn)

What is the proper procedure for avoiding repeated states during tree search?

Every time you expand a node, add that state to the explored set; do not put explored states on the frontier again. Every time you add a node to the frontier, check whether it already exists in the frontier with a higher path cost, and if yes, replace that node with the new one.

what is the space for A*?

Exponential

CSP: what is Forward checking?

Forward checking propagates information from assigned to unassigned variables, but doesn't provide early detection of all failures

In the tree search formulation, why do we restrict step costs to be non-negative?

If there is a loop with a net negative cost in the state space, a search algorithm can keep going around this loop infinitely and lowering its cost every time.

What is local search for CSPs? For which kinds of CSPs might local search be better than backtracking search? What about the other way around?

Local search for CSPs is to assign a value to every variable at the initial state, and the search changes the value of one variable at a time. Local search (i.e., hill climbing) may be a good choice when the problem is relatively loosely constrained and there are many possible solutions. An example of such a problem is n-queens. Backtracking search may be better for more tightly constrained problems with few possible solutions, such as sudoku. In such problems, it is also difficult to come up with local modifications that can remove constraint violations.

Name a game for which state-of-the-art AI systems currently outplay the best humans, and another one for which they are worse than the top humans. What accounts for the relative "difficulty" of these games for AI?

State-of-the-art computers are able to beat grandmasters in chess. However, the best humans are still able to beat computers in poker. One potential reason computers are not as good as humans is bluffing: humans are better at pretending they have good hands when they do not, an added psychological element that computers have difficulty replicating. Go is also difficult for computers because of its extremely large branching factor.

Planning: interleaved vs non-interleaved panners

Non-interleaved planners: consider subgoals in sequence and try to satisfy them one at a time. Potential problem: if you try to satisfy subgoal X and then subgoal Y, X might undo some preconditions for Y, or Y might undo some effects of X. Interleaved planners: consider multiple subgoals together. (There is no very precise definition of interleaved planners, but they are planners that avoid the problem described above.)

what is the time for A*?

Number of nodes for which f(n) ≤ C* (exponential)

Planning: complexity of planning

-planning is PSPACE-complete: the length of a plan can be exponential in the number of objects in the problem (ex: Tower of Hanoi); game search is PSPACE-complete as well
-QBF = quantified Boolean formula
-QBF compared to SAT: the relationship between SAT and QBF is akin to the relationship between puzzles and games

PEAS: What is P?

Performance measure - a function the agent maximizes or minimizes (assume given)

PEAS: What is S?

Sensors - how an agent observes the environment EX: for chess AI, sensor reads the chess board

CSP: what is arc consistency?

-the simplest form of propagation makes each pair of variables consistent:
--X -> Y is consistent iff for every value of X there is some allowed value of Y
--when checking X -> Y, throw out the values of X for which there isn't an allowed value of Y
-arc consistency detects failure earlier than forward checking
-can be run before or after each assignment (see the AC-3 sketch below)
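
A sketch of the AC-3 arc consistency algorithm in Python, assuming neighbors[x] lists the variables sharing a constraint with x and consistent(x, vx, y, vy) tests the constraint between x and y (hypothetical interfaces):

    from collections import deque

    def ac3(variables, domains, neighbors, consistent):
        queue = deque((x, y) for x in variables for y in neighbors[x])
        while queue:
            x, y = queue.popleft()
            # throw out values of x that have no allowed value of y
            removed = [vx for vx in domains[x]
                       if not any(consistent(x, vx, y, vy) for vy in domains[y])]
            if removed:
                domains[x] = [vx for vx in domains[x] if vx not in removed]
                if not domains[x]:
                    return False  # domain wiped out: failure detected early
                # x's domain changed, so arcs pointing at x must be rechecked
                for z in neighbors[x]:
                    if z != y:
                        queue.append((z, x))
        return True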

Game theory: Prisoner's Dilemma

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, but they hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain: betray the other by testifying that the other committed the crime, or cooperate with the other by remaining silent. The offer is:
-If A and B each betray the other, each of them serves 2 years in prison
-If A betrays B but B remains silent, A will be set free and B will serve 3 years in prison (and vice versa)
-If A and B both remain silent, both of them will only serve 1 year in prison (on the lesser charge)

How can randomness be incorporated into a game tree? How about partial observability (imperfect information)?

We need to add a chance node for every random event in the game, for example, a dice throw or random cards being dealt. The children of the chance node correspond to all possible outcomes, and each outcome also has a probability associated with it. Partial observability gives rise to states being grouped into information sets for each player. An information set consists of all states that look the same from the viewpoint of one of the players.

Can an environment be both known and unobservable? Give example.

Yes. EX: Russian roulette. The environment is known (each agent continues until the other agent is eliminated from the game), but the environment, specifically the state of the revolver, is unobservable to the agents (they don't know where the bullet is).

Game theory: what is the Nash equilibrium?

a pair of strategies such that no player can get a bigger payoff by switching strategies, provided the other player sticks with the same strategy

Game theory: what is the Nash equilibrium for the mixed strategy?

in a mixed strategy, a player chooses between moves according to a probability distribution; a mixed strategy Nash equilibrium is a pair of such distributions from which neither player can gain by deviating

CSP: what is Backtracking search?

a recursive algorithm. It maintains a partial assignment of the variables. Initially, all variables are unassigned. At each step, a variable is chosen, and all possible values are assigned to it in turn. For each value, the consistency of the partial assignment with the constraints is checked; in case of consistency, a recursive call is performed. When all values have been tried, the algorithm backtracks
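
A minimal recursive backtracking sketch in Python, assuming consistent(assignment) checks the partial assignment against all constraints (a hypothetical interface):

    def backtracking_search(assignment, variables, domains, consistent):
        if len(assignment) == len(variables):
            return assignment  # complete, consistent assignment found
        # choose an unassigned variable (heuristics like most-constrained go here)
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            assignment[var] = value
            if consistent(assignment):
                result = backtracking_search(assignment, variables,
                                             domains, consistent)
                if result is not None:
                    return result
            del assignment[var]  # undo the assignment and backtrack
        return None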

Game theory: what is a dominant strategy?

a strategy whose outcome is better for the player regardless of the strategy chosen by the other player

Game theory: what is mechanism design?

assuming agents pick rational strategies, how should we design the game?
-auctions
-state pollution example

Games: partial observability

battleships, scrabble, poker, bridge

Games: evaluation function

can be thought of as the probability of winning from a given state, or the expected value of that state
EX: for chess, the weights can be the material values (pawn is 1, queen is 10) and the features the advantage in terms of each piece

Games: Horizon effect

-can only see the outcome after a fixed number of plies; can't see beyond the 'horizon'
-something bad can happen (or be caused) in the future, but we can't be aware of it

Games: minimax strategy

choose the move that gives you the best worst-case payoff; in game tree diagrams, MAX nodes are drawn as upward-pointing triangles and MIN nodes as downward-pointing triangles (see the sketch below)
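
A minimal minimax sketch in Python; terminal, utility, and successors are hypothetical placeholders for the game's interface:

    def minimax(state, is_max, terminal, utility, successors):
        if terminal(state):
            return utility(state)
        values = [minimax(s, not is_max, terminal, utility, successors)
                  for s in successors(state)]
        # MAX takes the best value, MIN the worst (best worst-case payoff)
        return max(values) if is_max else min(values)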

CSP: SAT problem, NP-completeness

-the computational complexity of CSPs reduces to SAT
-SAT is NP-complete; other NP-complete problems include graph coloring, the n-puzzle, and generalized sudoku

Game theory: Stag Hunt

a coordination game; a hare is worth less than a stag

what is the transition model?

defines what state results from performing a given action in a given state

Environment characteristics: episodic vs. sequential

does each problem instance involve just one action, or a series of actions that change the world state according to the transition model?
episodic = word jumble solver
sequential = chess, scrabble, autonomous driving

tree search: What does optimal mean?

does it always find a least-cost solution?

tree search: What does completeness mean?

does it always find a solution if one exists?

Environment characteristics: discrete vs. continuous

does the environment provide a fixed number of distinct percepts, actions, and environment states?
discrete = word jumble, chess with clock, scrabble
continuous = autonomous driving

what is a heuristic function - h(n)?

estimates the cost of reaching the goal from node n

Environment characteristics: fully vs. partially observable

for any state, are all the values of all the variables known to the agent (total access to the complete state of the environment)?
fully = word jumble solver, chess
partially = scrabble, autonomous driving

Expectiminimax

for stochastic games: at chance nodes, sum the values of the successor states weighted by the probability of each successor (sketched below)
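
A sketch of expectiminimax in Python, assuming a game where every move is followed by a chance node (as in backgammon) and outcomes(state) yields (probability, successor, next_node_type) triples (hypothetical interfaces):

    def expectiminimax(state, node_type, terminal, utility, successors, outcomes):
        if terminal(state):
            return utility(state)
        if node_type == 'chance':
            # sum successor values weighted by their probabilities
            return sum(p * expectiminimax(s, t, terminal, utility,
                                          successors, outcomes)
                       for p, s, t in outcomes(state))
        values = [expectiminimax(s, 'chance', terminal, utility,
                                 successors, outcomes)
                  for s in successors(state)]
        return max(values) if node_type == 'max' else min(values)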

Games: Quiescence search

a solution to the horizon effect: don't cut off search at positions that are unstable / not "quiet", e.g., if you're about to lose an important piece

Search problem formulation: what are the components?

initial state, actions, transition model, goal state, path cost, state space

tree search : Describe basic strategy and handling repeated states

initialize the frontier using the starting state
while the frontier is not empty:
-choose a frontier node according to the search strategy and take it off the frontier
-if the node contains the goal state, return the solution
-else expand the node and add its children to the frontier
to avoid repeated states:
--mark expanded nodes as explored, and each time you add a node to the frontier, check whether it has already been explored
(see the sketch below)
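
A minimal Python sketch of this loop with repeated-state detection, assuming successors(s) yields successor states (a hypothetical interface); a FIFO frontier gives BFS and a LIFO frontier gives DFS:

    from collections import deque

    def graph_search(start, goal, successors, use_stack=False):
        frontier = deque([(start, [start])])
        explored = set()
        while frontier:
            # LIFO pop = DFS, FIFO popleft = BFS
            state, path = frontier.pop() if use_stack else frontier.popleft()
            if state == goal:
                return path  # node contains the goal state
            if state in explored:
                continue
            explored.add(state)  # mark the expanded node as explored
            for nxt in successors(state):
                if nxt not in explored:
                    frontier.append((nxt, path + [nxt]))
        return None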

Environment characteristics: single-agent vs multiagent

is an agent operating by itself in the environment?
single = word jumble
multi = chess, scrabble, autonomous driving

Environment characteristics: deterministic vs. stochastic

is the transition model deterministic (a unique successor state given the current state and action) or stochastic (a distribution over successor states given the current state and action)?
deterministic = word jumble solver
stochastic = scrabble, autonomous driving
strategic = chess

Environment characteristics: static vs. dynamic

is the world changing while the agent is thinking?
static = word jumble solver, scrabble
dynamic = autonomous driving
semidynamic = chess with clock

Game theory: what is Pareto optimality?

it is impossible to make one of the players better off without making another one worse off

Game theory: what is an anti-coordination game?

it is mutually beneficial for the two players to choose different strategies EX: chicken

what is a consistent heuristic?

it is admissible, but stronger because the heuristic also never overestimates across each step: h(n) <= c(n, a, n') + h(n')

tree search: What does space complexity mean?

max number of nodes in memory
b = max branching factor of the search tree
d = depth of the optimal solution
m = max length of any path in the state space (may be infinite)

what is an admissible heuristic?

never overestimates the cost to reach the goal; it is optimistic

tree search: is DFS complete?

no

tree search: is DFS optimal?

no

tree search: What does time complexity mean?

number of nodes generated

What is C*?

the cost of the optimal solution (e.g., uniform-cost search expands the nodes with path cost <= C*)

Games: Stochastic games

perfect information (fully observable) = backgammon, monopoly
imperfect information (partially observable) = scrabble, poker, bridge

Games: deterministic games

perfect information (fully observable) = chess, checkers, go
imperfect information (partially observable) = battleships

Planning: situation space vs plan space planners

situation space: each node in the search space represents a world state; arcs are actions in the world
--plans = sequences of actions from start to finish
--must be totally ordered
plan space: nodes are incomplete plans; arcs are transformations between plans
--actions in the plan may be partially ordered
--principle of least commitment = avoid ordering plan steps unless absolutely necessary

search tree: what is nodes vs states?

state = a representation of the world
node = data structure that is part of the search tree
--nodes keep a pointer to the parent, the path cost, and other info

Games: alpha-beta pruning

stop expanding a node once you find a utility that's lower than the current best worst-case outcome, since that node can no longer affect the decision at the root (see the sketch below)
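
A sketch of minimax with alpha-beta pruning in Python (same hypothetical game interface as the minimax sketch above):

    def alphabeta(state, is_max, alpha, beta, terminal, utility, successors):
        if terminal(state):
            return utility(state)
        if is_max:
            v = float('-inf')
            for s in successors(state):
                v = max(v, alphabeta(s, False, alpha, beta,
                                     terminal, utility, successors))
                alpha = max(alpha, v)
                if alpha >= beta:
                    break  # prune: MIN above will never let play reach here
            return v
        v = float('inf')
        for s in successors(state):
            v = min(v, alphabeta(s, True, alpha, beta,
                                 terminal, utility, successors))
            beta = min(beta, v)
            if alpha >= beta:
                break  # prune: MAX above already has something better
        return v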

Games: zero-sum games

the sum of both players' payoffs is a constant (0)

Game theory: what is normal form representation?

a payoff matrix: rows are one player's strategies, columns are the other's, and each cell lists both players' payoffs
EX (rock-paper-scissors):
 0,0  |  1,-1 | -1,1
-1,1  |  0,0  |  1,-1
 1,-1 | -1,1  |  0,0

Environment characteristics: what is semi-dynamic?

the environment does not change with the passage of time, but the agent's performance score does Semidynamic = chess with clock

tree search? what is the time and space for Iterative deepening search?

time: (d+1)b^0 + d*b^1 + (d-1)b^2 + ... + b^d = O(b^d)
space: O(bd)

tree search? what is the time and space for BFS?

time: number of nodes in a b-ary tree of depth d, O(b^d)
space: O(b^d); space is the bigger problem with BFS, compared to DFS

tree search? what is the time and space for DFS?

time: O(b^m), where m is the max length of any path in the state space
space: O(bm)

Game theory: Game of Chicken

two cars driving at each other; each player chooses Straight or Chicken (swerve)
an anti-coordination game

what is uninformed search strategy?

use only the information available in the problem definition

Define utility and expected utility

utility function = objective criterion for the success of an agent's behavior
expected utility: EU(action) = sum over all outcomes of P(outcome | action) * U(outcome)
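
A worked example with made-up numbers, an action with two possible outcomes:

    # hypothetical: (P(outcome | action), U(outcome)) pairs for one action
    outcomes = [(0.7, 10), (0.3, -5)]
    eu = sum(p * u for p, u in outcomes)  # 0.7*10 + 0.3*(-5) = 5.5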

Environment characteristics: what is strategic?

when the environment is deterministic except for the actions of other agents strategic = chess

Games: Monte Carlo tree search

when you get to a chance node, simulate a large number of games with random dice rolls and use the win percentage as the evaluation function; can work for backgammon (sketched below)
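
A sketch of the random-rollout evaluation in Python; random_successor, terminal, and won are hypothetical placeholders for the game's interface:

    def rollout_value(state, num_games, random_successor, terminal, won):
        # estimate a state's value by the win rate of random playouts
        wins = 0
        for _ in range(num_games):
            s = state
            while not terminal(s):
                s = random_successor(s)  # e.g., random dice roll + random move
            wins += won(s)  # 1 if the playout was a win, else 0
        return wins / num_games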

is A* optimal?

yes

tree search: Is Iterative deepening search complete?

yes

tree search: is BFS complete?

yes if b is finite

tree search: is BFS optimal?

yes if cost = 1 per step

tree search: Is Iterative deepening search optimal?

yes if step cost = 1

is A* complete?

yes unless there are infinitely many nodes with f(n) <= C*

CSP: what is constraint propagation?

-constraint propagation repeatedly enforces constraints locally

what is dominance in terms of heuristics?

-if h1 and h2 are both admissible heuristics and h2(n) ≥ h1(n) for all n, then h2 dominates h1

