Artificial Intelligence Midterm
Which statement about backtracking heuristics is true? Select all that apply.
- "Least Constraining Value" chooses the variable with the minimum number of constraints.
- "Minimum Remaining Values (MRV)" chooses the remaining values that are minimum in the variable domain.
- "Most Constraining Variable" chooses the variable involved in the largest number of constraints on remaining variables.
- Using heuristics may detect failure of a path earlier.
Answer:
- "Most Constraining Variable" chooses the variable involved in the largest number of constraints on remaining variables.
- Using heuristics may detect failure of a path earlier.

Which statement about arc consistency is true? Select all that apply.
- A variable in a CSP is arc-consistent if every value in its domain satisfies the variable's binary constraints.
- C_xy is arc-consistent with respect to variable x if for every value in the domain D_x there is some value in the domain D_y that satisfies the binary constraint on the arc (x, y).
- C_xy is arc-consistent with respect to variable x if for every value in the domain D_x all values in the domain D_y satisfy the binary constraint on the arc (x, y).
- If C_xy is arc-consistent with respect to variable x, we can logically infer that C_xy is arc-consistent with respect to y as well.
Answer:
- A variable in a CSP is arc-consistent if every value in its domain satisfies the variable's binary constraints.
- C_xy is arc-consistent with respect to variable x if for every value in the domain D_x there is some value in the domain D_y that satisfies the binary constraint on the arc (x, y).

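The answer's definition maps directly onto the "revise" step used when enforcing arc consistency (as in AC-3). A minimal Python sketch; the variable names and the toy x < y constraint are assumptions for illustration:

```python
def revise(domains, constraint, x, y):
    """Make x arc-consistent with respect to y: delete every value in D_x
    that has no supporting value in D_y under the binary constraint."""
    revised = False
    for vx in set(domains[x]):          # copy so we can modify while iterating
        if not any(constraint(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)      # no support in D_y -> remove from D_x
            revised = True
    return revised

# Toy CSP: constraint x < y with D_x = D_y = {1, 2, 3}
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
revise(domains, lambda a, b: a < b, "x", "y")   # removes 3 from D_x (no y > 3)
```

After the call, D_x = {1, 2}: the value 3 was deleted because no value in D_y satisfies 3 < y.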
Which of the following statements is true? Select all that apply.
- A* search uses this evaluation function: f(n) = g(n) + h(n)
- Greedy best-first search uses this evaluation function: f(n) = g(n) - h(n)
- g(n) is the estimated cost to reach the goal.
- h(n) is the cost to reach the current node from the start node.
Answer: A* search uses this evaluation function: f(n) = g(n) + h(n)

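The evaluation function f(n) = g(n) + h(n) can be seen in a minimal A* sketch in Python. The toy graph and heuristic values below are invented for the example (the heuristic is chosen to be admissible):

```python
import heapq

def astar(graph, h, start, goal):
    """A* search: always expand the frontier node with the lowest
    f(n) = g(n) + h(n), where g(n) is the path cost from the start
    and h(n) is the estimated cost from n to the goal."""
    frontier = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# Toy graph; h underestimates the true remaining cost everywhere (admissible).
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 3, "A": 4, "B": 1, "G": 0}
path, cost = astar(graph, h, "S", "G")   # -> ['S', 'B', 'G'], cost 5
```

Here S-A-G costs 6 while S-B-G costs 5, and A* returns the cheaper path because f guides it by g + h rather than g alone.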
Check all that apply regarding the Alpha-Beta Pruning algorithm.
- Alpha-beta pruning uses recursion and a DFS strategy to send utility values back up from the terminal (leaf) nodes.
- The alpha-beta pruning algorithm keeps track of alpha and beta values for the nodes that it visits and uses them to prune the search tree.
- At any visited node, if alpha or beta gets updated such that alpha <= beta, then we prune the rest of the children of that node from the search tree.
- Alpha is updated at Max nodes and beta is updated at Min nodes.
Answer:
- Alpha-beta pruning uses recursion and a DFS strategy to send utility values back up from the terminal (leaf) nodes.
- The alpha-beta pruning algorithm keeps track of alpha and beta values for the nodes that it visits and uses them to prune the search tree.
- Alpha is updated at Max nodes and beta is updated at Min nodes.

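The selected statements can be sketched in Python. This is a minimal illustration, not a full game engine; the two-ply tree with leaf utilities 3, 12, 8 / 2, 4, 6 / 14, 5, 2 is assumed for demonstration:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, utility):
    """DFS to the leaves, passing utility values back up.
    Alpha is updated at Max nodes and beta at Min nodes; when alpha >= beta
    the remaining children of the current node are pruned."""
    kids = children(node)
    if depth == 0 or not kids:
        return utility(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, utility))
            alpha = max(alpha, value)          # alpha updated at Max nodes
            if alpha >= beta:
                break                          # prune remaining children
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, utility))
            beta = min(beta, value)            # beta updated at Min nodes
            if alpha >= beta:
                break                          # prune remaining children
        return value

# Two-ply example: Max at the root, three Min nodes, integer leaves.
tree = {"root": ["m1", "m2", "m3"], "m1": [3, 12, 8], "m2": [2, 4, 6], "m3": [14, 5, 2]}
val = alphabeta("root", 2, float("-inf"), float("inf"), True,
                lambda n: tree.get(n, []), lambda n: n)   # -> 3
```

On this tree, once m2's first leaf (2) drops beta to 2 with alpha already 3, the leaves 4 and 6 are never evaluated.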
Which statement regarding CSP solution is true? Select all that apply.
- A complete assignment means all constraints are satisfied.
- An assignment that does not violate any constraints is called a consistent or legal assignment.
- A solution must be a complete and consistent assignment.
- A partial assignment is one that assigns values to only some of the variables.
Answer:
- An assignment that does not violate any constraints is called a consistent or legal assignment.
- A solution must be a complete and consistent assignment.
- A partial assignment is one that assigns values to only some of the variables.

Which of the following is NOT a kind of agent, but rather a way of state representation?
- Simple reflex
- Atomic
- Goal-based
- Utility-based
Answer: Atomic

Which of the following statements is true? Select all that apply.
- BFS is complete if the branching factor b is finite.
- The tree-search version of DFS is not complete even if the branching factor b and the depth d are finite.
- BFS is always optimal under any circumstances.
- Both versions of DFS (tree-search and graph-search) are non-optimal.
Answer:
- BFS is complete if the branching factor b is finite.
- The tree-search version of DFS is not complete even if the branching factor b and the depth d are finite.
- Both versions of DFS (tree-search and graph-search) are non-optimal.

Which statement about backtracking is true? Select all that apply.
- Backtracking has an inefficiency issue because it may explore areas of the search space that aren't likely to succeed.
- Minimum Remaining Values (MRV), Most Constraining Variable, and Least Constraining Value are among the heuristics used to make backtracking more efficient.
- Backtracking uses BFS.
- Backtracking guarantees to find a solution for a CSP.
Answer:
- Backtracking has an inefficiency issue because it may explore areas of the search space that aren't likely to succeed.
- Minimum Remaining Values (MRV), Most Constraining Variable, and Least Constraining Value are among the heuristics used to make backtracking more efficient.

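A minimal sketch of backtracking with the MRV heuristic, assuming a toy three-region map-coloring CSP (the region names, domains, and adjacency are invented for the example):

```python
def backtrack(assignment, domains, consistent):
    """Plain backtracking: pick an unassigned variable (MRV: fewest remaining
    values first), try each of its values, and undo the assignment on failure."""
    if len(assignment) == len(domains):
        return dict(assignment)
    # MRV heuristic: choose the unassigned variable with the smallest domain.
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(assignment, domains, consistent)
            if result is not None:
                return result
        del assignment[var]   # backtrack: undo and try the next value
    return None

# Toy map coloring: three mutually adjacent regions, SA restricted to one color.
neighbors = {"WA": {"NT", "SA"}, "NT": {"WA", "SA"}, "SA": {"WA", "NT"}}
domains = {"WA": ["r", "g", "b"], "NT": ["r", "g", "b"], "SA": ["r"]}
ok = lambda a: all(a[x] != a[y] for x in a for y in neighbors[x] if y in a)
solution = backtrack({}, domains, ok)   # MRV assigns SA first (domain size 1)
```

Because SA has only one remaining value, MRV assigns it first, which lets inconsistent choices for WA and NT fail immediately instead of deep in the tree.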
Which one is the best example of the "Thinking Humanly" approach in AI?
Answer: Cognitive Modeling

Which of the following statements is true? Select all that apply.
- Admissibility is a stronger condition than consistency.
- Consistency is a stronger condition than admissibility.
- Every consistent heuristic is also admissible.
- Every admissible heuristic is also consistent.
Answer:
- Consistency is a stronger condition than admissibility.
- Every consistent heuristic is also admissible.

The Minimax algorithm uses a strategy for search similar to:
- BFS
- DFS
- A*
- Uniform cost search
Answer: DFS

What search algorithm is used in solving constraint satisfaction problems?
- BFS
- DFS
- Iterative Deepening Search
- Uniform cost search
Answer: DFS

What type of state representation is used in CSP?
- Atomic
- Factored
- Structured
- Indivisible
Answer: Factored

A* search and Greedy best-first search both use the same evaluation function f(n).
False
Arc consistency keeps track of remaining legal values for the unassigned variables.
False
Gradient descent is an optimization algorithm but is never used in Machine Learning due to its problems.
False
Greedy Best First Search guarantees both completeness and optimality.
False
In Alpha-beta pruning algorithm, you initialize Alpha with +infinity and Beta with -infinity.
False
Local maximum is a point where no other point in the entire search space may have a higher objective function than that.
False
The space complexity of DFS is O(b^m) - b to the power of m - where b is branching factor and m is the maximum depth of the search tree.
False
A unary constraint restricts the value of two variables.
False
You can use adversarial search in a single-player puzzle game such as Sudoku, or Rubik's cube.
False
Which of the following properties may apply to the chess game environment? Select all that apply.
- Fully observable
- Deterministic
- Semidynamic (when played with a clock)
- Discrete
Answer:
- Fully observable
- Deterministic
- Semidynamic (when played with a clock)
- Discrete

Which of the following is NOT a task environment property?
- Fully observable vs partially observable
- Episodic vs sequential
- Single-agent vs multi-agent
- Goal-based vs utility-based
Answer: Goal-based vs utility-based

Which of the following statements is true? Select all that apply.
- Hill climbing may get stuck in a local maximum.
- Hill climbing may get stuck in a plateau (flat area).
- Random restart is one way to overcome hill climbing failure.
- Hill climbing sometimes allows for bad moves, e.g. going to a lower state.
Answer:
- Hill climbing may get stuck in a local maximum.
- Hill climbing may get stuck in a plateau (flat area).
- Random restart is one way to overcome hill climbing failure.

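The failure mode and the random-restart fix can be sketched as follows. The one-dimensional objective with a local maximum at x = 2 and the global maximum at x = 8 is an assumption for illustration:

```python
def hill_climb(start, neighbors, score):
    """Steepest-ascent hill climbing: move to the best neighbor until no
    neighbor improves the current state (local maximum or plateau)."""
    current = start
    while True:
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):
            return current          # stuck: no strictly better neighbor
        current = best

def random_restart(neighbors, score, starts):
    """Random restart: run hill climbing from several start states, keep the best."""
    return max((hill_climb(s, neighbors, score) for s in starts), key=score)

# Toy objective: local maximum at x=2 (score 1), global maximum at x=8 (score 5).
score = lambda x: -(x - 2) ** 2 + 1 if x < 5 else -(x - 8) ** 2 + 5
neighbors = lambda x: [x - 1, x + 1]
stuck = hill_climb(0, neighbors, score)               # -> 2 (local maximum)
best = random_restart(neighbors, score, starts=[0, 9])  # -> 8 (global maximum)
```

Starting from 0, plain hill climbing stops at the local maximum 2; a second start at 9 reaches the global maximum, and random restart keeps the better of the two.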
Which of the following statements are true in the context of state representation? Select all that apply.
- In an atomic representation, each state of the world is indivisible - it has no internal structure.
- In an atomic representation, each state of the world can be split into a fixed set of variables or attributes.
- In a factored representation, each state of the world can be split into a fixed set of variables or attributes.
- Search algorithms like BFS and DFS work with atomic representations.
Answer:
- In an atomic representation, each state of the world is indivisible - it has no internal structure.
- In a factored representation, each state of the world can be split into a fixed set of variables or attributes.
- Search algorithms like BFS and DFS work with atomic representations.

Which of the following is contraposition?
- P -> ¬Q ≡ Q -> ¬P
- P -> Q ≡ Q -> P
- P -> ¬Q ≡ ¬Q -> P
- ¬P -> Q ≡ Q -> ¬P
Answer: P -> ¬Q ≡ Q -> ¬P

Which of the following logical sentences is satisfiable (possibly True or False)? Check all that apply. (F means False, T means True.)
- P V F
- T ∧ (T ∧ A)
- Student passed the test. /// Passed (Test) is a boolean statement.
- Student did NOT pass the test. /// Passed (Test) is a boolean statement.
Answer:
- P V F
- T ∧ (T ∧ A)
- Student passed the test. /// Passed (Test) is a boolean statement.
- Student did NOT pass the test. /// Passed (Test) is a boolean statement.

Which of the following sentences is in CNF? Check all that apply.
- P V Q
- P ∧ Q
- (P V Q) ∧ R
- P -> Q
Answer:
- P V Q
- P ∧ Q
- (P V Q) ∧ R

Which of the following logical sentences is "unsatisfiable" (ALWAYS FALSE)? Check all that apply. (F means False, T means True.)
- P ∧ ¬P
- P ∧ T
- (P V ¬P) V F
- (¬P -> Q) ∧ ¬Q
Answer: P ∧ ¬P

What are the assumptions about adversarial search? Select all that apply.
- Perfect information, i.e. both players have access to complete information about the states.
- Zero-sum, i.e. the utility values at the end of the game are equal and opposite.
- The search environment is fully observable and deterministic.
- There is only one single agent who searches the state space.
Answer:
- Perfect information, i.e. both players have access to complete information about the states.
- Zero-sum, i.e. the utility values at the end of the game are equal and opposite.
- The search environment is fully observable and deterministic.

Which of the following is "unit resolution"?
- Premise: A V B, ¬B V C. Conclusion: A V C
- Premise: A V B, ¬B. Conclusion: A
- Premise: A ∧ B, ¬B. Conclusion: B
- Premise: P -> Q, P. Conclusion: Q
Answer: Premise: A V B, ¬B. Conclusion: A

Which of the following statements is true? Select all that apply.
- Simulated annealing sometimes allows for bad moves based on a probability that decreases over time.
- Simulated annealing sometimes allows for bad moves based on a probability that increases over time.
- Simulated annealing does not allow bad moves; it always strives for good moves.
- The temperature T decreases over time (cools down) to reduce the probability of bad moves by simulated annealing.
Answer:
- Simulated annealing sometimes allows for bad moves based on a probability that decreases over time.
- The temperature T decreases over time (cools down) to reduce the probability of bad moves by simulated annealing.

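A minimal simulated-annealing sketch, assuming a geometric cooling schedule and acceptance probability exp(delta/T) for worsening moves (the objective, start state, and schedule parameters are illustrative, not canonical):

```python
import math
import random

def simulated_annealing(start, neighbor, score, t0=10.0, cooling=0.95, steps=500):
    """Simulated annealing: always accept improving moves; accept a bad move
    with probability exp(delta / T). T cools down every step, so bad moves
    become less and less likely as the search proceeds."""
    current = start
    t = t0
    for _ in range(steps):
        nxt = neighbor(current)
        delta = score(nxt) - score(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling          # cooling schedule: T decreases over time
    return current

# Toy objective: local maximum at x=2 (score 1), global maximum at x=8 (score 5).
score = lambda x: -(x - 2) ** 2 + 1 if x < 5 else -(x - 8) ** 2 + 5
random.seed(1)
best = simulated_annealing(0, lambda x: x + random.choice([-1, 1]), score)
```

Early on, high T lets the search accept downhill moves and escape the local maximum near x = 2; once T is tiny, exp(delta/T) is effectively zero for any bad move and the search behaves like hill climbing.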
Which of the following statements is true about space complexity? Select one answer only.
- Space complexity deals with memory only.
- Space complexity deals with both memory and time.
- Space complexity means the optimality of a solution.
- Space complexity means transition model.
Answer: Space complexity deals with memory only.

Which of the following statements is true? Select all that apply.
- Space complexity of DFS is O(bm) where b is the branching factor and m is the maximum depth of the search tree.
- Space complexity of DFS is lower than BFS.
- Time complexity of BFS is always much better than DFS.
- Depth-first tree search needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path, and that is why DFS has a lower space complexity than BFS.
Answer:
- Space complexity of DFS is O(bm) where b is the branching factor and m is the maximum depth of the search tree.
- Space complexity of DFS is lower than BFS.
- Depth-first tree search needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path, and that is why DFS has a lower space complexity than BFS.

A* usually runs out of space long before it runs out of time. For this reason, A* is not practical for many large-scale problems.
True
An admissible heuristic always underestimates the true cost of getting from a state to the goal state.
True
BFS and DFS are both uninformed search.
True
Backtracking algorithm is a CSP solver.
True
Comparing AI, Machine Learning, and Deep Learning, one can argue that there is a superset-subset relationship between them such that Deep Learning is a subset of Machine Learning approaches, and Machine Learning is a subset of the broad field of approaches, algorithms and techniques in AI.
True
IDA*, Recursive Best First Search (RBFS) and SMA* are memory-bounded heuristic searches that try to overcome the issue with A* space complexity - high memory usage.
True
In Genetic Algorithm, each location in the offspring (after crossover) is subject to random mutation with a small independent probability.
True
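The statement above can be sketched as follows, assuming single-point crossover and per-gene mutation (the mutation rate, alphabet, and crossover point are illustrative):

```python
import random

def crossover(p1, p2, point):
    """Single-point crossover: the offspring takes p1 up to `point`, p2 after it."""
    return p1[:point] + p2[point:]

def mutate(individual, rate, alphabet):
    """Each location (gene) in the offspring is independently replaced by a
    random symbol from the alphabet with small probability `rate`."""
    return [random.choice(alphabet) if random.random() < rate else gene
            for gene in individual]

random.seed(0)
child = crossover([1, 1, 1, 1], [0, 0, 0, 0], point=2)   # -> [1, 1, 0, 0]
mutant = mutate(child, rate=0.1, alphabet=[0, 1])
```

The key point matched by the quiz statement is that mutation is applied per location, each time as an independent low-probability random event.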
Intelligence is concerned mainly with rational action.
True
Iterative deepening DFS combines the benefits of DFS and BFS. Its space complexity is O(bm) like DFS and it is complete like BFS if the branching factor is finite.
True
Rationality is NOT omniscience. Omniscience is usually impossible in reality.
True
Selection, crossover and mutation are all random processes.
True
The interdisciplinary field of Cognitive Science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.
True
The term percept refers to the agent's perceptual inputs at any given instant.
True
The time complexity of BFS is O(b^d) where b is branching factor and d is the depth of the shallowest solution.
True
The tree-search version of A* is optimal if h(n) is admissible, while the graph-search version is optimal if h(n) is consistent.
True
When a CSP is not arc consistent, we may make it arc consistent by using the AC3 algorithm with no guarantee for all problems.
True
α entails β iff α->β is valid.
True
Which one is the best example of the "Acting Humanly" approach in AI?
Answer: Turing Test

Which of the following is NOT an assumption about the search environment?
- Unknown
- Deterministic
- Discrete
- Observable
Answer: Unknown

If h1 and h2 are both admissible heuristics, which of the following heuristics are guaranteed to be admissible? Select all that apply.
- max(h1, h2)
- min(h1, h2)
- h1 + h2
- h1 * h2
Answer:
- max(h1, h2)
- min(h1, h2)

Which of the following logical sentences is "Valid" (ALWAYS TRUE)? Check all that apply.
- ¬P V T
- F ∧ ¬F
- P -> P
- ¬Q -> Q
Answer:
- ¬P V T
- P -> P

Which of the following statements about First-Order Logic equivalences is true? Select all that apply.
- ¬∀ x P(x) ≡ ∃ x ¬P(x)
- ¬∃ x P(x) ≡ ∀ x ¬P(x)
- ∃ x P(x) ≡ ¬∀ x ¬P(x)
- ∀ x P(x) ≡ ¬∃ x ¬P(x)
Answer:
- ¬∀ x P(x) ≡ ∃ x ¬P(x)
- ¬∃ x P(x) ≡ ∀ x ¬P(x)
- ∃ x P(x) ≡ ¬∀ x ¬P(x)
- ∀ x P(x) ≡ ¬∃ x ¬P(x)

How do you express this statement in First-Order Logic? Choose the best answer only. "All birds except penguins fly."
- ∀ x bird(x) ∧ ¬penguin(x) ∧ Fly(x)
- ∀ x bird(x) -> (¬penguin(x) ∧ Fly(x))
- ∀ x (bird(x) ∧ ¬penguin(x)) -> Fly(x)
- ∃ x bird(x) ∧ ¬penguin(x) ∧ Fly(x)
Answer: ∀ x (bird(x) ∧ ¬penguin(x)) -> Fly(x)

How do you express this statement in First-Order Logic? Select all that apply. "No robot is human."
- ∀ x Robot(x) -> ¬Human(x)
- ∀ x Human(x) -> ¬Robot(x)
- ¬∃ x Robot(x) ∧ Human(x)
- ¬∃ x Robot(x) -> Human(x)
Answer:
- ∀ x Robot(x) -> ¬Human(x)
- ∀ x Human(x) -> ¬Robot(x)
- ¬∃ x Robot(x) ∧ Human(x)

How do you express this statement in First-Order Logic? Choose the best answer only. "No person buys an expensive policy."
- ∀ x ∃ y (Person(x) ∧ Policy(y) ∧ Expensive(y)) -> ¬Buys(x,y)
- ∀ x ∀ y Person(x) ∧ Policy(y) ∧ Expensive(y) -> ¬Buys(x,y)
- ∃ x ∃ y (Person(x) ∧ Policy(y) ∧ Expensive(y)) -> ¬Buys(x,y)
- ∀ x ∀ y (Person(x) ∧ Policy(y) ∧ Expensive(y)) -> ¬Buys(x,y)
Answer: ∀ x ∀ y (Person(x) ∧ Policy(y) ∧ Expensive(y)) -> ¬Buys(x,y)

How do you express this statement in First-Order Logic? Choose the best answer only. "Some kids like candy."
- ∀ x Kid(x) -> Likes(x, candy)
- ∀ x Kid(x) ∧ Likes(x, candy)
- ∃ x Kid(x) -> Likes(x, candy)
- ∃ x Kid(x) ∧ Likes(x, candy)
Answer: ∃ x Kid(x) ∧ Likes(x, candy)