Artificial Intelligence (Gervigreind)


Consider the following probability distribution given as a table of joint probabilities (e.g., the entry in the upper left corner is P(Toothache ∧ Catch ∧ Cavity)):

                toothache            ¬toothache
                catch     ¬catch     catch     ¬catch
cavity          0.108     0.012      0.072     0.008
¬cavity         0.016     0.064      0.144     0.576

What is P(Toothache) (accurate to 3 decimal places)?

0.200 (= 0.108 + 0.012 + 0.016 + 0.064, the sum of the four toothache entries)

In general, how can you construct an admissible heuristic for any given problem?

By relaxing the problem (that is, removing constraints on the allowed actions or on the goal condition) and finding an optimal solution to the relaxed problem.
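
For illustration, a hedged sketch (my own example, using the 8-puzzle): relaxing the rule that a tile may only slide into the empty square lets every tile move freely to any adjacent square, and the optimal cost of that relaxed problem is the sum of Manhattan distances, which is therefore an admissible heuristic for the real puzzle.

final class EightPuzzleHeuristic {
    /** Manhattan-distance heuristic for the 8-puzzle (relaxed-problem cost).
     *  board is a 3x3 array holding tile numbers 1..8 and 0 for the blank;
     *  the goal places tile t at row (t-1)/3, column (t-1)%3 (an assumed goal layout). */
    static int manhattan(int[][] board) {
        int h = 0;
        for (int r = 0; r < 3; r++) {
            for (int c = 0; c < 3; c++) {
                int tile = board[r][c];
                if (tile == 0) continue; // the blank does not contribute
                int goalR = (tile - 1) / 3;
                int goalC = (tile - 1) % 3;
                h += Math.abs(r - goalR) + Math.abs(c - goalC);
            }
        }
        return h;
    }
}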

Consider the following problem: Three cannibals and three missionaries must cross a river. Their boat can only hold two people. The cannibals must never outnumber the missionaries on either side of the river. Every person, whether missionary or cannibal, can row the boat. - What could explain that the state space and search tree have different sizes?

Many states can be reached by multiple sequences of actions, e.g., because we can move the same people back and forth and thus go back to states we already visited. Each node in the tree corresponds to one such path, thus, there will be several nodes in the search tree for many of the states in the state space. (10 points for "several nodes in the tree per state", 5 points for "because there are several ways to reach the same state", or some equivalent explanation)

What happens if Minimax is used in a game that is not turn-taking?

Minimax assumes sequential moves and fails in simultaneous-move scenarios, like rock-paper-scissors, where it would overestimate the second player's advantage, assuming perfect counter-play to the first player's move.

Why is Minimax suitable only for two-player, turn-taking, zero-sum games?

Minimax operates under the assumption that one player maximizes their score while the opponent minimizes it. This mechanism works perfectly in zero-sum games where each player's gain or loss is exactly balanced by the losses or gains of the other player.

What is the Fhourstones Benchmark?

The Fhourstones Benchmark measures CPU performance using an optimized alpha-beta search with a transposition table in Connect-4 to search the complete game tree and determine the value of the initial position.

Explain Expansion in MCTS

The MCTS phase where a new node is added to the search tree to explore a new move that hasn't been played yet.

Explain Selection in MCTS

The MCTS phase where the algorithm chooses which node to explore next based on past performance and potential for improvement.

Explain Simulation(Playout) in MCTS

The MCTS phase where the game is played out randomly from the current state to a terminal state to get an outcome.

Explain Backpropagation in MCTS

The MCTS phase where the results of the simulation are used to update the information of the nodes and actions in the tree.
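
Taken together, the four phases form the main MCTS loop. A rough sketch (hypothetical Node type and helper names such as selectChild, expandOneChild, randomPlayout and update; not tied to any particular library):

Node mctsSearch(Node root, int iterations) {
    for (int i = 0; i < iterations; i++) {
        // 1. Selection: descend the tree with the tree policy (e.g. UCB)
        Node node = root;
        while (node.isFullyExpanded() && !node.isTerminal())
            node = selectChild(node);
        // 2. Expansion: add one untried move as a new child node
        if (!node.isTerminal())
            node = expandOneChild(node);
        // 3. Simulation (playout): play randomly from there to a terminal state
        double outcome = randomPlayout(node.state());
        // 4. Backpropagation: update statistics from the new node up to the root
        for (Node n = node; n != null; n = n.parent())
            n.update(outcome);
    }
    return bestChild(root); // e.g. the most visited or highest-valued child
}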

How many states does the Fhourstones algorithm generate per second, and what is the total number of states required to complete the search?

The algorithm can generate 10 million states per second and requires approximately 1.5 billion states in total to complete the search, taking about 2.5 minutes.

Is it worthwhile to add a heuristic that reduces the state generation speed but improves move ordering in the Fhourstones Benchmark?

Yes, adding a heuristic would halve the generation speed to 5 million states per second but would also reduce the depth by half due to almost perfect pruning. This results in a significant reduction in runtime to about 0.3 seconds, making it highly beneficial.

Suppose an agent has the belief that P(A)=0.4 and P(B)=0.3. Which interval of values are rational beliefs for P(A∨B)? Hint: Think about what P(A∨B) must be at least and can be at most given the information you have. It might be helpful to draw a Venn diagram

[0.4, 0.7] — P(A∨B) ≥ max(P(A), P(B)) = 0.4, and P(A∨B) = P(A) + P(B) − P(A∧B) ≤ P(A) + P(B) = 0.7.

A leaf node decision tree represents ...

a class

Which of the following statements are true about admissible heuristics? a. The cost of the optimal solution to a relaxed version of the problem yields an admissible heuristic for the problem. b. An admissible heuristic results in less search effort compared to a non-admissible one. c. The function evaluating all remaining costs as 0 (h(n) = 0) is an admissible heuristic. d. An admissible heuristic never overestimates the remaining cost. e. For two admissible heuristics h1 and h2, h1 is better than h2 if it returns lower values than h2 in every state. f. A heuristic is admissible if there is no copyright on it. g. Best-First search with an admissible heuristic is complete and optimal if we do not discard revisited states.

a, c, d and g

Which conditions are necessary for Minimax to be guaranteed to return optimal moves? a. The game is zero-sum (scores of the players always add up to zero). b. The game is played on a 2-dimensional board. c. The search completely expands the game tree (leaf nodes are terminal states). d. The game is fully-observable. e. There are only two players in the game. f. The game is turn-taking.

a, c, d, e and f

Is breadth-first search guaranteed to be complete?

always

Supervised learning differs from unsupervised learning in that supervised learning requires ...

at least one output attribute

Which of the following properties must an environment have such that you can use a graph search algorithm to find a solution in the form of a sequence of actions? a. competitive b. deterministic c. episodic d. discrete e. sequential f. dynamic g. single-agent h. fully observable

b, d, g and h

Which of the following statements are true about A* Search? a. A* search with some specific heuristic is the same as Depth-First Search. b. A* search with a more accurate heuristic generally uses less time than with a worse heuristic. c. A* search with some specific heuristic is the same as Breadth-First Search. d. A* search with a more accurate heuristic generally expands fewer nodes than with a worse heuristic. e. A* search with some specific heuristic is the same as Uniform Cost Search. f. A* search always has better space and time complexity than Breadth-First Search.

c, d and e

Assume a constraint satisfaction problem with - n variables - d possible values for each variable - c constraints The branching factor of the search tree for solving the CSP is ...

d

Regression differs from classification in that ...

we learn a function with numeric output.

Completeness of a search algorithm means that the search

will find a solution, if there is one

Episodic Environment

An environment divided into separate and distinct episodes where the agent's experience is segmented. Each episode is independent, with no carryover of state or information between them.

Describe what an Agent is in your own words

Anything that interacts with its environment through sensors and actuators.

Constraints in CSP for Crossword Puzzles

Constraints include that sequences of blank square variables must form words from the list and intersecting word variables must have matching characters at the crossing point. Optionally, each word must be unique within the puzzle.

Problems with DFS

DFS Problems: The search space contains cycles, creating infinite paths, making DFS non-complete and non-optimal. It might find longer solutions first. With detection and discarding of repeated states, DFS becomes complete but remains non-optimal.

What is Depth-First Search (DFS)?

Depth-First Search (DFS) explores a graph by moving as deep as possible from the starting point before backtracking. It uses a stack to keep track of nodes, prioritizing depth over breadth. DFS is useful for exploring all possible paths and checking for cycles in a graph.
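
As a minimal sketch (my own example over a hypothetical adjacency-list graph), the stack-based version looks roughly like this:

import java.util.*;

final class DepthFirstSearch {
    /** Iterative DFS; returns the nodes in the order they are first visited. */
    static List<Integer> visitOrder(Map<Integer, List<Integer>> graph, int start) {
        List<Integer> order = new ArrayList<>();
        Set<Integer> visited = new HashSet<>();
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int node = stack.pop();
            if (!visited.add(node)) continue; // already seen: skip
            order.add(node);
            // neighbours go on the stack, so the most recently discovered
            // node is expanded next (depth before breadth)
            for (int next : graph.getOrDefault(node, List.of()))
                stack.push(next);
        }
        return order;
    }
}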

Given the use of a transposition table, what is the effective branching factor of the tree in the Fhourstones Benchmark?

The effective branching factor is approximately 1.95: with a tree depth of about 31.5 and roughly 1.5 billion states to generate, b must satisfy b^31.5 ≈ 1.5 × 10^9, i.e., b ≈ (1.5 × 10^9)^(1/31.5) ≈ 1.95.

For every fully observable, turn-taking, zero-sum two-player game there is an optimal pure strategy for each player. (A pure strategy is a deterministic function that returns a legal move for a player given a state of the game.)

True

When do you need an admissible heuristic and when a consistent heuristic?

You need an admissible heuristic if you want A* search without discarding repeated states to find the optimal solution. If repeated states are discarded, then the heuristic also needs to be consistent for A* search to be optimal.

Given P(A)=0.5 and P(B)=0.7, which interval of values is a rational belief for P(A∧B) ? Hint: Think about what P(A∧B) must be at least and can be at most given the information you have. It might be helpful to draw a Venn diagram

[0.2, 0.5] — P(A∧B) ≤ min(P(A), P(B)) = 0.5, and P(A∧B) = P(A) + P(B) − P(A∨B) ≥ 0.5 + 0.7 − 1 = 0.2.

An internal node in a decision tree represents ...

a test condition

Backtracking search is most similar to which search algorithm?

depth-first search

Assume a constraint satisfaction problem (CSP) with - n variables - d possible values for each variable - c constraints At which depth of the search tree (number of steps from the root) can a solution of the CSP be found?

exactly n

Which of the following cases indicates overfitting?

low training error, high test error

Consider the task of a robot playing a game of football (soccer) in a team with humans. - Characterize the environment in terms of the properties (Agents, slide 13 / book, page 42-44): • fully vs. partially observable • deterministic vs. stochastic • episodic vs. sequential • static vs. dynamic • discrete vs. continuous • single vs. multi-agent

Partially observable: some things may happen too fast, too far away, or outside the field of vision.
Stochastic: you cannot perfectly predict the outcome of actions (slight variations in the grass can make the ball roll in slightly different directions); also strategic, because you do not know which actions the other agents will perform.
Sequential within one match: actions earlier in a match influence what happens later in that match. Different matches are arguably independent and thus episodic.
Dynamic: there are things in the environment that change outside the control of any agent (player), e.g., weather conditions. If the environment is reasonably well controlled (like a hall used for robotic soccer), it could be considered static.
Continuous: variables such as positions, directions and velocities of players and ball are all continuous.
Multi-agent: 22 players on the field (10 cooperative and 11 competitive) and one or more referees.

Which of the following sentences are tautologies (also called "valid" sentences)? 1. (A∧B)⇒B 2. (A∧B)∧(¬A∧¬B) 3. (A∨B)∨¬B 4. ¬A⇒A

1 and 3

Describe what an Agent function is in your own words

Defines which action an agent does depending on the sequence of all percepts it received

Explain the terms "admissible heuristic" and "consistent heuristic" in your own words.

Heuristics are used in informed search methods to estimate the path cost from a node to the closest goal. An admissible heuristic is one that never overestimates the cost, i.e., the cost of the optimal solution is always at least as high as the estimated cost. A consistent heuristic is one that makes the evaluation function (path cost to a node N plus estimated cost from N to the goal) monotonically non-decreasing along each path. In other words, a consistent heuristic never decreases from one node to the successor node by more than the cost of the action taken.
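
Stated compactly as code (a hedged restatement with hypothetical names: hN and hSucc are the heuristic values at a node n and a successor n', stepCost the cost of the action between them, optimalRemaining the true optimal cost from n to the nearest goal):

final class HeuristicProperties {
    /** Admissibility at n: the estimate never exceeds the true remaining cost. */
    static boolean admissibleAt(double hN, double optimalRemaining) {
        return hN <= optimalRemaining;
    }
    /** Consistency between n and n': h drops by at most the step cost,
     *  which keeps f(n) = g(n) + h(n) non-decreasing along any path. */
    static boolean consistentAt(double hN, double stepCost, double hSucc) {
        return hN <= stepCost + hSucc;
    }
}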

Consider the following probability distribution given as a table of joint probabilities (e.g., the entry in the upper left corner is P(Toothache ∧ Catch ∧ Cavity)):

                toothache            ¬toothache
                catch     ¬catch     catch     ¬catch
cavity          0.108     0.012      0.072     0.008
¬cavity         0.016     0.064      0.144     0.576

What is P(Cavity | Catch) (accurate to 3 decimal places)?

0.529 (= P(Cavity ∧ Catch) / P(Catch) = (0.108 + 0.072) / (0.108 + 0.016 + 0.072 + 0.144) = 0.18 / 0.34)

Consider the following probability distribution given as a table of joint probabilities (e.g., the entry in the upper left corner is P(Toothache ∧ Catch ∧ Cavity)):

                toothache            ¬toothache
                catch     ¬catch     catch     ¬catch
cavity          0.108     0.012      0.072     0.008
¬cavity         0.016     0.064      0.144     0.576

What is P(¬Catch) (accurate to 3 decimal places)?

0.660 (= 0.012 + 0.064 + 0.008 + 0.576, the sum of the four ¬catch entries)

Which of the following sentences are satisfiable? 1. (A∧B)⇒B 2. (A∧B)∧(¬A∧¬B) 3. (A∨B)∨¬B 4. ¬A⇒A

1, 3 and 4

Which of the following statements are equivalent to "the knowledge base {P,¬Q,¬R} entails P∨Q"? 1. {P,¬Q,¬R} ⊨ P∨Q 2. P∧¬Q∧¬R⇒(P∨Q) is satisfiable. 3. Every model of {P,¬Q,¬R} is also a model of P∨Q. 4. Every model of P∨Q is also a model of {P,¬Q,¬R}. 5. P∧¬Q∧¬R⇒(P∨Q) is a tautology (valid).

1, 3 and 5

Assume a propositional logic with propositional symbols A, B and C. How many models does the following sentence have? (A∨B)∧¬C

3

Which of these is not a correct sentence in propositional logic? 1. A∧B ∨ ¬A∧¬B 2. A∧¬B 3. A¬∧B 4. ¬A∨B

3

Assume a propositional logic with propositional symbols A, B and C. How many models does the following sentence have? A⇒(B∧C)

5
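
Both model counts above can be checked by brute force over the 2^3 = 8 truth assignments (my own small sketch):

final class ModelCounter {
    public static void main(String[] args) {
        int modelsOrAndNotC = 0, modelsImplication = 0;
        for (int bits = 0; bits < 8; bits++) {       // enumerate all assignments to (A, B, C)
            boolean a = (bits & 1) != 0, b = (bits & 2) != 0, c = (bits & 4) != 0;
            if ((a || b) && !c) modelsOrAndNotC++;   // models of (A∨B)∧¬C
            if (!a || (b && c)) modelsImplication++; // models of A⇒(B∧C)
        }
        System.out.println(modelsOrAndNotC + " and " + modelsImplication); // prints "3 and 5"
    }
}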

Explain Exploration vs. Exploitation in MCTS

A concept in MCTS where exploration seeks new paths and exploitation builds on known successful paths.

Explain Upper Confidence Bound(UCB) in MCTS

A formula used during the selection phase to balance between exploring new moves and building on known good moves.
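
The standard UCB1 form of this balance adds an exploration bonus to a child's average value (a hedged sketch with hypothetical parameter names):

final class Ucb1 {
    /** UCB1 score of a child node: exploitation term (average value so far)
     *  plus an exploration term that grows for rarely visited children.
     *  c is the exploration constant, commonly around sqrt(2). */
    static double score(double totalValue, int visits, int parentVisits, double c) {
        if (visits == 0) return Double.POSITIVE_INFINITY; // untried moves are chosen first
        return totalValue / visits + c * Math.sqrt(Math.log(parentVisits) / visits);
    }
}

During selection, the child with the highest UCB1 score is followed; tuning c trades off how strongly rarely visited moves are favoured over known good ones.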

What is a Monte Carlo Tree Search (MCTS)?

A search algorithm that uses randomness to solve problems that are too complex for traditional search strategies.

Describe what a Rational Agent is in your own words

An agent is rational if it always executes the action maximizing the expected outcome according to the performance measure. The expected outcome depends on the knowledge of the agent about the environment and the current state.

Alternative Modeling of Crossword Puzzle Problem

An alternative model uses variables for each row between shaded squares, where domains are words fitting the row's length. This model has fewer variables but potentially larger state spaces due to broader domain sizes per variable.

Discrete Environment

An environment characterized by a finite number of states and actions. It allows for clear and manageable definition and navigation of state and action spaces.

Single-Agent Environment

An environment inhabited by a single agent only, without the influence or interference of other agents. Decisions and actions within the environment are made solely by one agent.

Dynamic Environment

An environment that changes over time, either from external factors or from the actions of agents within it. These changes occur independently of the agent's current actions.

Fully Observable Environment

An environment where all aspects and components are visible and known to the agent. The agent has complete information about the environment's state at all times.

Competitive Environment

An environment where multiple agents interact with potentially conflicting objectives. Actions by any agent can influence outcomes for others, creating a competitive setting.

Sequential Environment

An environment where the current state and decisions affect future states. Actions taken by the agent create a chain of dependent events leading to sequential outcomes.

Deterministic Environment

An environment where the outcomes of all actions are predetermined and consistent, with no randomness affecting the results of actions taken by agents.

Benefits of BFS and A*

BFS and A*: both are complete, and both are optimal here (BFS because all actions cost 1, A* with an admissible heuristic). Space and time complexities are exponential in M and C, but reduce to O(C*M) when repeated states are discarded, since each of the O(C*M) states is then expanded at most once.

What is Breadth-First Search (BFS)?

Breadth-First Search (BFS) is a graph traversal method that explores levels of a graph one at a time, from the root outward. It uses a queue to manage nodes, ensuring it processes all nodes at one depth before moving to the next. BFS is ideal for finding the shortest path in simple, unweighted graphs.
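
A minimal sketch of the idea (my own example over a hypothetical adjacency-list graph); because the frontier is a FIFO queue, the recorded distances are shortest path lengths in an unweighted graph:

import java.util.*;

final class BreadthFirstSearch {
    /** Returns the minimum number of edges from start to every reachable node. */
    static Map<Integer, Integer> distances(Map<Integer, List<Integer>> graph, int start) {
        Map<Integer, Integer> dist = new HashMap<>();
        Deque<Integer> queue = new ArrayDeque<>();
        dist.put(start, 0);
        queue.add(start);
        while (!queue.isEmpty()) {
            int node = queue.poll(); // FIFO: finish one depth level before the next
            for (int next : graph.getOrDefault(node, List.of())) {
                if (!dist.containsKey(next)) {      // first visit = shortest distance
                    dist.put(next, dist.get(node) + 1);
                    queue.add(next);
                }
            }
        }
        return dist;
    }
}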

Consider the two-player, turn-taking, zero-sum game with the following game tree: Apply the α−β algorithm (with left-to-right expansion order) to compute the value of the root node! Mark which of the branches can be pruned and why. 1.1 (5 points) What is the best move of the max player in the initial state? 1.2 (5 points) What is the value of the root node? 1.3 (30 points) Which branches of the tree can be pruned and why? Name the branches to be cut and which bounds allow you to cut them (e.g., the branch to x can be cut because α(y) = 43 is greater than β(z) = 17)

1.1 Solution: The best move is b with value 60.
1.2 Solution: 60 (the same as the value of the best move).
1.3 Solution: We can cut the branch to m after visiting l, because α(e) = 50 (lower bound) but β(a) = 35 (upper bound). We can cut the branch to q after visiting p, because α(g) = 75 (lower bound) but β(b) = 60 (upper bound). We can cut the branch to i after visiting h, because β(c) = 40 (upper bound) but α(root) = 60 (lower bound). In all cases we can cut (i.e., stop expanding a node) if the α of a max-node is higher than (or equal to) the β of a min-ancestor, or if the β of a min-node is lower than (or equal to) the α of a max-ancestor. That is, we cut as soon as the lower bound reaches or exceeds the upper bound.

Consider a generalized version of the cannibals and missionaries problem: C cannibals and M missionaries must cross a river. Their boat can only hold B people (B >= 2). The cannibals must never outnumber the missionaries on either side of the river or on the boat. Every person, whether missionary or cannibal, can row the boat. How can they all get over the river with the fewest number of trips? - For the general case with M missionaries and C cannibals and boat capacity B, give plausible estimates for • depth of shortest solution (the estimates may depend on M, C and B) and shortly explain how you got to those estimates. What numbers do you get for the special case of M=C=3 and B=2?

Depth of the shortest solution (d): assume the boat is fully loaded on each crossing and one person rows back after each trip. Each round trip then brings B−1 people across, except the last crossing, which brings B, giving the estimate d ≥ 2*(M + C − B)/(B − 1) + 1. For M=C=3, B=2: d = 9 moves.

Domains in CSP for Crossword Puzzles

Domains are the 26 letters of the alphabet for each blank square variable, or sets of words from the dictionary that match the length of each word space variable.

A perfectly rational poker-playing agent never loses.

False

For every fully observable, zero-sum two-player game with simultaneous moves (i.e., both players make a move at the same time) there is an optimal pure strategy for each player. (A pure strategy is a deterministic function that returns a legal move for a player given a state of the game.)

False

In a partially observable, turn-taking, zero-sum game between two perfectly rational players, it does not help the first player to know what strategy the second player is using -- that is, what move the second player will make, given the first player's move.

False

Consider a generalized version of the cannibals and missionaries problem: C cannibals and M missionaries must cross a river. Their boat can only hold B people (B >= 2). The cannibals must never outnumber the missionaries on either side of the river or on the boat. Every person, whether missionary or cannibal, can row the boat. How can they all get over the river with the fewest number of trips? - For the general case with M missionaries and C cannibals and boat capacity B, give plausible estimates for • average branching factor (the estimates may depend on M, C and B) and shortly explain how you got to those estimates. What numbers do you get for the special case of M=C=3 and B=2?

For M missionaries, C cannibals, and boat capacity B: count the possible boat loads, from 0 missionaries up to a boat full of missionaries, excluding loads where the cannibals outnumber the missionaries on the boat. This gives roughly B^2/4 + B possible actions, which approaches B^2/4 for large B. Special case B=2: 3 actions.

Improving DFS

Improving DFS: by detecting and discarding repeated states, the infinite search tree becomes finite with at most 2*(C+1)*(M+1) states, making DFS complete. The complexity becomes O(C*M), since each state is visited at most once.

Explain a Terminal State in MCTS

In MCTS, a state where the game ends, either with a win, loss, or draw, used during the simulation phase.

What happens if Minimax is used in a two-player, turn-taking game that is not zero-sum?

In non-zero-sum games, such as cooperative games, minimizing the opponent's score could inadvertently lower your own score too. Minimax does not accurately reflect player preferences in such scenarios, potentially leading to suboptimal decisions.

Could backtracking search be improved by detecting (and discarding) revisited states?

No, because there are no duplicate states in the search tree.

What is the space complexity of depth-first search (without detecting repeated states)? b ... branching factor d ... depth of the shallowest solution h ... height of the search tree s ... number of states (size of the state space)

O(b*h)

What is the space complexity of breadth-first search (without detecting repeated states)? b ... branching factor d ... depth of the shallowest solution h ... height of the search tree s ... number of states (size of the state space)

O(b^d)

What is the time complexity of breadth-first search (without detecting repeated states)? b ... branching factor d ... depth of the shallowest solution h ... height of the search tree s ... number of states (size of the state space)

O(b^d)

What is the time complexity of depth-first search (without detecting repeated states)? b ... branching factor d ... depth of the shallowest solution h ... height of the search tree s ... number of states (size of the state space)

O(b^h)

Consider the task of a robot playing a game of football (soccer) in a team with humans. - Give a PEAS description of this task environment.

Performance measure: winning the game, scoring goals, avoiding injuries, avoiding penalties.
Environment: soccer field with goals, lines, other players, coach, referee(s), audience, weather conditions (if outdoors).
Actuators: motors to control joints in legs, arms, etc., speaker (communication with other players and the coach).
Sensors: camera(s), microphones, touch sensors, potentially other sensors to get the position on the field and distances to other players.

Consider a generalized version of the cannibals and missionaries problem: C cannibals and M missionaries must cross a river. Their boat can only hold B people (B >= 2). The cannibals must never outnumber the missionaries on either side of the river or on the boat. Every person, whether missionary or cannibal, can row the boat. How can they all get over the river with the fewest number of trips? - For the general case with M missionaries and C cannibals and boat capacity B, give plausible estimates for • size of the search tree (expanded as deep as the shortest solution) (the estimates may depend on M, C and B) and shortly explain how you got to those estimates. What numbers do you get for the special case of M=C=3 and B=2?

Search tree size, expanded to the depth of the shortest solution: roughly b^d, with b ≈ B^2/4 + B (the estimated branching factor) and d ≈ 2*(M + C − B)/(B − 1) + 1 (the estimated solution depth). The size grows quickly with M and C and is largest for small B. The special case M=C=3 and B=2 yields 3^9 = 19683 nodes.

Consider a generalized version of the cannibals and missionaries problem: C cannibals and M missionaries must cross a river. Their boat can only hold B people (B >= 2). The cannibals must never outnumber the missionaries on either side of the river or on the boat. Every person, whether missionary or cannibal, can row the boat. How can they all get over the river with the fewest number of trips? - For the general case with M missionaries and C cannibals and boat capacity B, give plausible estimates for • size of the state space (the estimates may depend on M, C and B) and shortly explain how you got to those estimates. What numbers do you get for the special case of M=C=3 and B=2?

State space size depends on the distribution of missionaries and cannibals and on the position of the boat. Counting all combinations of people on one side of the river, times the two boat positions, gives 2*(C+1)*(M+1) states. For M=C=3: 32 states.

Consider the following problem: Three cannibals and three missionaries must cross a river. Their boat can only hold two people. The cannibals must never outnumber the missionaries on either side of the river. Every person, whether missionary or cannibal, can row the boat. - Give a complete formulation of the problem as a search problem. Make the formulation precise enough to be implementable. For example, you could write down data structures for states and actions and pseudo code for other components of a search problem.

Suppose the cannibals, missionaries and the boat start on side A of the river and need to get to side B. Costs of all actions are 1 (we want to know the shortest path).

class Action {
    int mm, mc; // number of missionaries and cannibals moved to the other side of the river
}

class State {
    int m, c;      // number of missionaries and cannibals on side A of the river
    boolean sideA; // true iff the boat is at side A of the river

    List<Action> legalActions() {
        List<Action> result = [];
        for (mm, mc) in [(1,0), (2,0), (1,1), (0,1), (0,2)] do
            // check three conditions:
            // 1. at least that many missionaries and cannibals are on the
            //    side of the river with the boat
            // 2. no missionaries, or at least as many missionaries as
            //    cannibals, on side A after moving the boat
            // 3. no missionaries, or at least as many missionaries as
            //    cannibals, on side B after moving the boat
            if (sideA && mm<=m && mc<=c
                && (m-mm>=c-mc || m-mm==0)
                && (3-m+mm>=3-c+mc || 3-m+mm==0))
                result.add(new Action(mm, mc));
            if (!sideA && mm<=3-m && mc<=3-c
                && (m+mm>=c+mc || m+mm==0)
                && (3-m-mm>=3-c-mc || 3-m-mm==0))
                result.add(new Action(mm, mc));
        return result;
    }

    State successorState(Action a) {
        if (sideA) return new State(m = this.m - a.mm, c = this.c - a.mc, sideA = false);
        else       return new State(m = this.m + a.mm, c = this.c + a.mc, sideA = true);
    }

    boolean isGoalState() { return m == 0 && c == 0; }
}

initialState = new State(m=3, c=3, sideA=true);

Other solutions are possible, and a little less detail would be acceptable as well. (6 points for a useful model for a state containing at least the numbers of missionaries and cannibals and the position of the boat, 6 points for a somewhat understandable way of checking which actions are possible, 6 points for correct state update, 6 points for a correct goal test, 6 points for the cost function)
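
If the pseudocode above were turned into actual Java, a breadth-first driver over it could look roughly like this hedged sketch (assuming State gets proper constructors and equals/hashCode); with all action costs equal to 1, BFS returns the shortest solution, which has 9 moves for this instance:

static int shortestSolutionLength(State initialState) {
    Map<State, Integer> depth = new HashMap<>();
    Deque<State> queue = new ArrayDeque<>();
    depth.put(initialState, 0);
    queue.add(initialState);
    while (!queue.isEmpty()) {
        State s = queue.poll();
        if (s.isGoalState()) return depth.get(s);
        for (Action a : s.legalActions()) {
            State next = s.successorState(a);
            if (!depth.containsKey(next)) { // discard repeated states
                depth.put(next, depth.get(s) + 1);
                queue.add(next);
            }
        }
    }
    return -1; // no solution exists
}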

What are the game constraints in the Fhourstones Benchmark, and how does random move ordering affect tree depth?

The game can last up to 42 steps. Random move ordering leads to pruning that effectively reduces the tree depth to about 75% of the original, or approximately 31.5 steps.

Consider the following problem: Three cannibals and three missionaries must cross a river. Their boat can only hold two people. The cannibals must never outnumber the missionaries on either side of the river. Every person, whether missionary or cannibal, can row the boat. - Give plausible estimates for • average branching factor • depth of shortest solution • size of the state space • size of the search tree (expanded as deep as the shortest solution) and shortly explain how you got to those estimates.

There are at most 5 actions. However, some of those will only be possible in some situations. E.g., (0,2) and (2,0) are only possible at the same time if there are exactly two missionaries and two cannibals on the current side of the river. Also, in most situations one of (0,1) and (1,0) will lead to having too many cannibals on one side of the river. So, on average there will probably be around 3 legal actions. There is always at least one action, because we can always just send the same people back. That means 1 ≤ b ≤ 5, and on average we would expect b = 3.

The first move should bring two people to side B. However, then someone has to row back to fetch the other 4 people, so it will take another 8 steps to bring those 4 over to side B. Thus, we would expect the shortest solution to be at depth d = 9 at least, maybe deeper if we sometimes have to row back with two people so that no missionary gets eaten.

Thus the search tree (if only expanded to the depth of the shortest solution) will have roughly b^d = 3^9 = 19683 nodes.

The states are determined by how many missionaries and cannibals are on both sides of the river and where the boat is. We only need to know the number of missionaries and cannibals on one side of the river; the other side is then determined because we know the total number. There can be between 0 and 3 cannibals and between 0 and 3 missionaries on one side (although not all combinations are possible), so we have at most 4 (possible numbers of missionaries) * 4 (possible numbers of cannibals) * 2 (positions of the boat) = 32 states.

(5 points each for a somewhat believable estimate and explanation for the four things. Note that the estimates here depend on the answer to the previous question, that is: How is the problem modeled? What are the actions and states?)

Consider a generalized version of the cannibals and missionaries problem: C cannibals and M missionaries must cross a river. Their boat can only hold B people (B >= 2). The cannibals must never outnumber the missionaries on either side of the river or on the boat. Every person, whether missionary or cannibal, can row the boat. How can they all get over the river with the fewest number of trips? - Give a complete formulation of the problem as a search problem. Think of what information is needed for a state of the problem and what the possible actions are. Make the formulation precise enough to be implementable. For example, you could write down data structures for states and actions and pseudo code for other components of a search problem.

This is very similar to hw1, with the difference that the total numbers of missionaries and cannibals and the capacity of the boat are now variables. Suppose the cannibals, missionaries and the boat start on side A of the river and need to get to side B. Costs of all actions are 1 (we want to know the shortest path). A state represents the number of cannibals and missionaries on side A and which side the boat is currently on. The numbers on the other side can easily be inferred from this and the totals.

int M, C; // total numbers of missionaries and cannibals
int B;    // capacity of the boat

class Action {
    int mm, mc; // number of missionaries and cannibals moved to the other side of the river
}

class State {
    int m, c;      // number of missionaries and cannibals on side A of the river
    boolean sideA; // true iff the boat is at side A of the river

    List<Action> legalActions() {
        List<Action> result = [];
        for mm in 0..B and mc in 0..B-mm do
            // conditions: at least one person is on the boat, that many people are
            // available on the boat's side, and the cannibals never outnumber the
            // missionaries on the boat or on either river bank after the move
            if all conditions are met:
                result.add(new Action(mm, mc));
        return result;
    }

    State successorState(Action a) {
        if (sideA) return new State(m = this.m - a.mm, c = this.c - a.mc, sideA = false);
        else       return new State(m = this.m + a.mm, c = this.c + a.mc, sideA = true);
    }

    boolean isGoalState() { return m == 0 && c == 0; }
}

initialState = new State(m=M, c=C, sideA=true);

Other solutions are possible, and a little less detail would be acceptable as well. (10 points for a correct model, -2 points for each mistake or missing part: state definition, legal moves, state update, goal test, cost function)

In a fully observable, turn-taking, zero-sum game between two perfectly rational players, it does not help the first player to know what strategy the second player is using -- that is, what move the second player will make, given the first player's move.

True

Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a deterministic task environment in which this agent is rational.

True

There exist task environments in which no simple reflex agent can behave rationally.

True

Variables in CSP for Crossword Puzzles

Variables are defined either as each blank square on the grid or each word space (horizontal or vertical sequence of cells surrounded by the grid border or shaded cells).

Which of the following values can be found directly in the probability tables of this simple Bayesian network? a. P(A | B) b. P(A) c. P(B | C and D) d. P(C | A and B) e. P(B | A)

b and e

State Space and Search Tree Size Estimation for Crossword CSP

With mostly blank squares on a grid of width w and height h, there are about w*h variables with 26 values each, so the search tree can have up to 26^(w*h) leaves; the number of possible partial assignments is 27^(w*h) (each square is one of 26 letters or still unassigned). The actual numbers are much smaller because of the constraints and the word list.

What impact would skipping alpha-beta pruning have on the time required to solve the Connect-4 game using the Fhourstones Benchmark?

Without alpha-beta pruning, the depth is not reduced, and about 1.7 trillion states would be required. It would take approximately 2 days to complete at the same state generation speed.

Select all answers that are equivalent to "A and B are independent random variables". a. P(A|B) = P(A) b. Observing the value of B gives no further information about the probability of A. c. P(A and B) = P(A)*P(B) d. P(A or B) = P(A) + P(B)

a, b and c

Iterative deepening depth-first search (IDDFS) combines breadth-first search (BFS) and depth-first search (DFS). Mark all that apply: a. Iterative deepening is complete b. Iterative deepening is optimal when BFS is c. Iterative deepening expands new nodes in the same order as DFS d. the time complexity of IDDFS is O(b*d) e. the space complexity of IDDFS is the same as for DFS

a, b and e

Mark all that apply! Minimax with Alpha-Beta pruning ... a. is optimal exactly when Minimax is. b. is a breadth-first search c. never expands more states than pure Minimax. d. at best reduces the number of state expansions by half. e. can double the search depth without increasing how much time is spend, in the best case. f. is most efficient if the worst move is expanded first.

a, c and e

Which of the following statements about a model-based agent and a goal-based agent are true? a. Both agents think ahead to find the action that best solves the problem. b. Both agents have an internal model of the world. c. Both agents' decisions are based on predefined rules. d. Both agents have an explicit representation of their goal.

b

A constraint satisfaction problem is defined by (mark all that apply) a. a state space b. a set of constraints between the variables c. a program to solve the constraints d. a set of variables e. domains for each variable f. actions

b, d and e

Which of the following statements are true about consistent heuristics? a. h(n) = 0 (the function estimating the remaining cost to be always 0) is admissible, but not consistent. b. A consistent heuristic never decreases from one node to the successor node by more than the cost of the action taken. c. Any admissible heuristic is also consistent. d. Any consistent heuristic is also admissible. e. A consistent heuristic is one that makes the evaluation function (path cost plus heuristic) monotonically non-decreasing on each path. f. With a consistent heuristic A* is optimal, even when repeated states are discarded. g. A consistent heuristic returns the same value for every node. h. It can be that A* is not complete if we discard repeated states but only have an admissible (but not consistent) heuristic.

b, d, e and f

Which of the following cases indicates underfitting?

high training error, high test error

Is depth-first search guaranteed to be optimal?

no

Is breadth-first search guaranteed to be optimal?

only if all actions have the same cost

Is depth-first search guaranteed to be complete?

only if the state space is finite and we detect cycles

