AI Midterm 1

Search Tree

A "what if" tree of plans and their outcomes. Start state is the root node. Children correspond to successors. For most problems we can never actually build the whole tree. Each node in this represents a whole path in the state space graph. Construct on demand and as little as possible. Does not work with loops - will become infinitely large.

Total Turing Test

A Turing test designed to also test an AI's physical capabilities, such as vision and motion. Requires vision (recognize actions and objects), actuation (act upon objects as requested), and even smell.

And-Elimination

From a conjunction, infer either conjunct: A ∧ B ⊢ A (and likewise A ∧ B ⊢ B).

PSPACE

A complexity class that is larger/harder than NP. Planner: ask for a sequence of actions that, if executed from a state, will make goal become true in a future state (PSPACE). Theorem prover: ask if a sentence is true given KB (does not have the notion of state transition) - (NP).

Planning Graph

A data structure used to give heuristic estimates. Can be applied to any of the search techniques. Will never overestimate; and often very accurate. More on Lecture 7b slides. Polynomial in the size of the planning problem instead of being "exponential" in size. If any goal literal fails to appear in the final level of the graph, then the problem is unsolvable. The cost of achieving any goal (g) can be estimated as the level at which (g) first appears in the planning graph constructed from (s) as the initial state.

Horn Clause

A disjunction of literals of which at most one is positive: either exactly one is positive (a definite clause) or none is positive (examples on slides). Enables efficient inference: Horn resolution and forward chaining; backward chaining is Horn clause resolution.

Forward Chaining

A scenario where an AI that has been provided with a specific problem must "work forwards" to figure out how to solve it. To do this, the AI looks through the rule-based system for rules whose "if" conditions are satisfied and determines which rules to apply.

Satisfiability

A sentence is "satisfiable" if it is true in "some" models (i.e., in at least one model). Examples on slide deck 5a, add these to cheat sheet.

Atomic Sentences

A simple sentence. Examples: SchoolOf(Bob), Colleague( TeacherOf(Alice), TeacherOf(Bob) ), >( +(x, y), x ).

Validity

A statement is valid if it is true in all models. Valid sentences are also called "tautologies". Ex: True ∨ A, A ∨ ¬A, A↔A; more on slide deck 5a (add these to cheat sheet). For any alpha, False entails alpha. A is valid iff True entails A. A sentence is "valid" iff its negation is "unsatisfiable" (proof by contradiction).

Skolemization

A way of removing existential quantifiers from a formula. Variables bound by existential quantifiers that are not inside the scope of universal quantifiers can simply be replaced by constants: ∃x [x < 3] can be changed to c < 3, with c a suitable constant. Examples on Lecture 7a slides.

Limited Visibility

Agent may not be able to see the whole map.

Learning Agent

Agent that asks: How can I adapt to my environment? How can I learn from my mistakes?

Utility-Based Agents

Agent that considers how well the goal can be achieved (degree of happiness). What to do if there are conflicting goals (speed and safety, for example)? Which goal should be selected if several can be achieved? Asks "What will it be like if I do action A?" and "How happy will I be in such a state?".

Depth-Limited Search

Aim to make DFS terminating. It's memory efficient: O(bm), but it's incomplete and not optimal. May not terminate, may not find best solution. Nodes at depth l are treated as though they have no successors. Can limit by number of nodes, a certain radius distance, etc. In general, finding a tight depth "limit" is a hard problem.

Planning Agent

An agent that constructs a plan to meet a goal (e.g., robot manipulators). Problem-solving agents find a sequence of actions to reach the goal, deal with "atomic" representations of states, and need "domain-specific heuristics" to perform well. Planning agents also find a sequence of actions that results in a goal state, but use a "factored" representation of states (restricted logic formulas) and can use "generic" heuristics for search. This opens up action and goal representations. Data structures mapped to purposes on slides (Lecture 7b).

Reduce FOL to PL

Apply both existential and universal elimination and then discard the quantified sentences; universal elimination uses all ground-term substitutions from the vocabulary of the knowledge base (EXAMPLE on slides, Lecture 7a). Do this to use propositional inference. Every FOL knowledge base (KB) and query (α) can be reduced to propositional logic (PL) while preserving entailment; since PL inference is both "sound" and "complete", it seems that we will have a similar procedure for FOL... Not so fast! When the KB includes a function symbol, the set of possible ground-term substitutions is infinite: Father(John), Father(Father(John)), Father(Father(Father(John)))... Reducing FOL to PL is rather inefficient; sometimes direct inference is more intuitive. Step 1: find some x such that x is a king and x is greedy. Step 2: infer that this particular x is evil.

Agent

Architecture + program.

Monotonicity of Logical Systems

As more information is added to KB, the set of entailed sentences can only increase (and never decrease).

Actions

Assume the current town is "Arad". From here we can perform: Go(Zerind), Go(Sibiu), Go(Timisoara).

Propositional Logic

Boolean vars are either true or false, can use NOT (¬), AND (^), OR (V), IMPLIES (→), and EQUIVALENT (↔). A truth table can tell you, for each logical formula, the number of models (satisfying assignments). Limited because it only makes the ontological commitment that a world consists of facts (propositions that are either true or false) - it's difficult to represent even Wumpus World.

Relaxed

By this we mean our version of the problem has an "over-approximation" of the transition model of the original problem. More edges between states, to provide short cuts. To get "under-approximation" of the cost. Original transition model may be something like: a tile can move from square A to square B if A is horizontally or vertically adjacent to B, and B is blank. Relaxed versions would be: a tile can move from square A to square B if A is adjacent to B, a tile can move from square A to square B if B is blank, or a tile can move from square A to square B.

Sentence

Can be an atomic sentence or complex sentence. An atomic sentence is one variable or true/false, a complex sentence is any combination of smaller things connected using and, or, implies, equivalent, etc.

Logical Inference

Can be based on entailment (model checking) or based on inference rules (theorem proving).

8-Queens

Can be formulated in two ways: complete-state formulation or incremental formulation. Complete-state formulation: Q1 = {1, ..., 64}, etc., where each queen has 64 options, giving 64 · 63 · ... · 57 combinations. Incremental formulation: Q1 = {1, ..., 8}, etc., for each queen, giving 8! combinations (the number represents a column).

Complexity

Can be quantified in a theoretical CS way (like Big-O), or in an AI applicative method: b is the branching factor (max number of successors of any node) and d is the depth of the shallowest goal state along any path in state space.

Planning Heuristic

Can be used for state-space search. Neither forward nor backward search is efficient without a good heuristic function. Need an admissible heuristic - i.e., never overestimate the distance from a state (s) to the goal.

Resolution-Based Inference

Can determine satisfiability or unsatisfiability. If resolution leads to an "empty clause" (false) → UNSAT - Else → SAT. Both sound and complete.

Performance Measure

Characterizes how successful an agent is (speed, power usage, cost, etc.).

Complex Sentences

Combination of simple / atomic sentences:
S1 ∧ S2
S1 ∨ S2
(S1 ∧ S2) ∨ S3
S1 → S2
S1 ↔ S3
Colleague(Paolo, Maja) → Colleague(Maja, Paolo)
Student(Alex, Paolo) → Teacher(Paolo, Alex)
∀ x, y Brother(x, y) → Sibling(x, y) ("brothers are siblings")
∀ c, d FirstCousin(c, d) ↔ ∃ p, ps Parent(p, d) ∧ Sibling(p, ps) ∧ Child(c, ps) ("a first cousin is a child of a parent's sibling")
MORE EXAMPLES on slides (Lecture 6b); put these on your cheat sheet.

Knowledge Base

Consists of a set of sentences, each representing something the agent knows about the environment. Truth depends on interpretation: there are multiple representations of the physical world.

Search Problem

Consists of a state space, a successor function, a start state, and a goal test. A solution transforms the start state into a goal state.

Basic FOL Elements

Constant symbols (1, 5, A, B, JPL, etc.), function symbols (+, sqrt, SchoolOf, TeacherOf, etc.), equality (=), predicate symbols (>, friend, student, colleague), variables (x, y, z, next, first), connectives (∨, ∧, →, etc.), and quantifiers (for all ∀, there exists ∃).

Semantics

Defines the meaning of sentences in a KB.

Transition Model

Describes how the world evolves after an action is taken from a state. RESULT( In(Arad), Go(Zerind) ) = In( Zerind ).

Discrete

Distinct. Like a game of chess, not like driving a taxi. A crossword or poker game.

Quantifier Duality

Each quantifier can be expressed in terms of the other:
∀ x Likes(x, ice cream) is the same as ¬∃ x ¬Likes(x, ice cream)
∃ x Likes(x, broccoli) is the same as ¬∀ x ¬Likes(x, broccoli)
Proof of duality on Lecture 6b slides.
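
Over a finite domain, the duality can be spot-checked mechanically, since ∀ behaves like `all` and ∃ like `any`. A minimal sketch (the people and preference tables are made-up illustrations):

```python
# Quantifier duality checked over a small finite domain.
people = ["alice", "bob", "carol"]
likes_ice_cream = {"alice": True, "bob": True, "carol": True}
likes_broccoli = {"alice": False, "bob": True, "carol": False}

# ∀x Likes(x, ice cream)  is equivalent to  ¬∃x ¬Likes(x, ice cream)
forall = all(likes_ice_cream[p] for p in people)
not_exists_not = not any(not likes_ice_cream[p] for p in people)
assert forall == not_exists_not

# ∃x Likes(x, broccoli)  is equivalent to  ¬∀x ¬Likes(x, broccoli)
exists = any(likes_broccoli[p] for p in people)
not_forall_not = not all(not likes_broccoli[p] for p in people)
assert exists == not_forall_not
```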

Mutex Actions

Two actions are mutex when:
Effects contradict each other: Eat(Cake).effect vs. Have(Cake).effect.
Preconditions contradict each other: Bake(Cake).precond vs. Eat(Cake).precond.
Interference (one action's effect contradicts the other action's precond): Eat(Cake).effect vs. Have(Cake).precond.

Model Checking

Enumerate all models to check if "α is true in all models in which KB is true". This is sound and complete. Truth-preserving: only entailed sentences will be derived. Can derive any sentence that is entailed. When playing the Wumpus World game, for example, the agent is guaranteed to be as good as anyone (or anything) who plays the game. Can use truth tables.
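
The enumeration idea can be sketched in a few lines of Python. This is a toy sketch: representing sentences as boolean functions over a model dict is my own convention, not the lecture's.

```python
from itertools import product

def entails(kb, alpha, symbols):
    """KB |= alpha iff alpha is true in every model where KB is true.
    kb and alpha are functions from a model (dict symbol -> bool) to bool."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):   # found a countermodel
            return False
    return True

# Example: KB = (P -> Q) and P. KB entails Q, but does not entail not-Q.
kb = lambda m: ((not m["P"]) or m["Q"]) and m["P"]
assert entails(kb, lambda m: m["Q"], ["P", "Q"]) is True
assert entails(kb, lambda m: not m["Q"], ["P", "Q"]) is False
```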

Hostile

Environment in which you play against an opponent, rather than a teammate.

Greedy Best-First Search

Expand the node that appears to be closest to the goal, since it's likely to lead to a solution quickly (i.e., f(n) = h(n); the path cost g(n) is ignored). Trace through this on slides. Tree search is not complete even in a finite state space (can have infinite loops). Graph search is complete, but only in a finite state space. When it works, it's very fast due to pruning of the state space. Not complete, not optimal; time complexity can be exponential in the worst case, but often with dramatic improvement; for space, it keeps all nodes in memory.

Tree Search

Expand out potential plans / nodes. Keep a frontier of partial plans under consideration. Try to expand as few tree nodes as possible. Key ideas: frontier and expansion. Main question: which frontier nodes should be explored? Pseudocode on second slide deck. Offline, simulated exploration of state space by generating successors of the already-explored states. Problems arise with redundant paths and loopy paths (visiting same state multiple times). Graph search fixes those problems.
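
The frontier-and-expansion loop can be sketched as follows. This is a minimal sketch with a FIFO frontier (so it behaves like BFS); the toy successor function is illustrative.

```python
from collections import deque

def tree_search(start, successors, is_goal):
    """Generic tree search: pop a node from the frontier, test, expand.
    Frontier entries are whole paths, matching 'each node represents a whole
    path'. No explored set, so loopy state spaces can make it run forever
    (graph search fixes that)."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()       # FIFO -> BFS; a stack would give DFS
        state = path[-1]
        if is_goal(state):
            return path
        for s in successors(state):
            frontier.append(path + [s])
    return None

# Toy state space: from n you can go to n + 1 or 2n; reach 5 from 2.
path = tree_search(2, lambda n: [n + 1, n * 2] if n < 5 else [], lambda n: n == 5)
assert path == [2, 4, 5]
```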

Axioms

Facts given by the KB.

Unification

Finding substitutions that make different logical formulas look identical: Unify(p, q) = θ where SUBST(θ, p) = SUBST(θ, q). Example on Lecture 7a slides.
unify( P(a,X), P(a,b) ): σ = { X/b }
unify( P(a,X), P(Y,b) ): σ = { Y/a, X/b }
unify( P(a,X), P(Y,f(a)) ): σ = { Y/a, X/f(a) }
unify( P(a,X), P(X,b) ): failure
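
A unifier along the lines of the AIMA algorithm can be sketched in Python. Assumptions in this sketch: compound terms are tuples like ('P', 'a', 'X'), uppercase strings are variables, and the occurs check is omitted for brevity.

```python
def is_var(t):
    """Uppercase strings are variables (X, Y); lowercase are constants (a, b)."""
    return isinstance(t, str) and t[0].isupper()

def unify(x, y, theta=None):
    """Return a substitution dict making x and y identical, or None on failure."""
    if theta is None:
        theta = {}
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple):
        if len(x) != len(y) or x[0] != y[0]:   # different predicate/function
            return None
        for xi, yi in zip(x[1:], y[1:]):       # unify argument lists
            theta = unify(xi, yi, theta)
            if theta is None:
                return None
        return theta
    return None

def unify_var(var, t, theta):
    if var in theta:
        return unify(theta[var], t, theta)
    if is_var(t) and t in theta:
        return unify(var, theta[t], theta)
    return {**theta, var: t}

assert unify(('P', 'a', 'X'), ('P', 'a', 'b')) == {'X': 'b'}
assert unify(('P', 'a', 'X'), ('P', 'Y', 'b')) == {'Y': 'a', 'X': 'b'}
assert unify(('P', 'a', 'X'), ('P', 'Y', ('f', 'a'))) == {'Y': 'a', 'X': ('f', 'a')}
assert unify(('P', 'a', 'X'), ('P', 'X', 'b')) is None   # X/a then X/b clash
```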

Higher-Order Logic

First-order logic only allows us to quantify over objects; higher-order logic allows us to quantify over objects, relations, and functions. Example: "two objects are equal if and only if all properties applied to them are equivalent". Higher-order logics are more expressive; however, it is not clear (yet) how to effectively reason about sentences in higher-order logic. Example of using this type of KB on the slides (Lecture 6b).

Substitution

Given a KB with axioms, use this to infer sentences. Replace a variable by a ground term (a term without variables - a constant, function applied to a constant, or a function applied to another ground term - {x/John}, {x/Richard}, and {x/Father(John)} as examples).

Space Complexity

How much space in memory does a search require?

Entailment

If KB entails alpha, then in every model in which KB is true, alpha is also true (alpha follows logically from KB). Alpha entails beta iff M(alpha) is a subset of (or equal to) M(beta). To show this, show that for all models of KB, the statement is true. "Checking entailment" can be done by "checking validity" or unsatisfiability: A entails B iff A → B is valid; A entails B iff A ∧ ¬B is unsatisfiable.

Modus Ponens

If P then Q; P; therefore Q. Further examples in clause notation:
P1 ∨ P2 ∨ P3, ¬P2 ⊢ P1 ∨ P3
P1 ∨ P3, ¬P1 ∨ ¬P2 ⊢ P3 ∨ ¬P2
(These last two are resolution steps rather than plain modus ponens.)

Herbrand's Theorem

If a sentence is entailed by the KB in FOL, there is a proof involving just a finite subset of the propositionalized KB. Corollary: we can always find that proof, by iteratively deepening the nesting depth of function terms. Iteration 1: John, ... Iteration 2: Father(John), ... Iteration 3: Father(Father(John)), ... Iteration 4: Father(Father(Father(John))), ... Complete: any entailed sentence can be proved. However, the procedure cannot prove that a sentence is NOT entailed (FOL entailment is semidecidable), so it may fail to terminate on non-entailed queries.

Deriving Expressions

If given a function in truth table form, you can express it as a formula by taking the disjunction of the rows where the function is T (a sum of minterms) and then simplifying.

Agent Program

Implementation of the "percept-action" mapping f: P* -> A. Updates memory, chooses best action, updates memory. Returns an action.

Solution

In the ideal case, this is a sequence of actions - when the model is deterministic and observable. Other times, it must be a strategy (under imperfect control, you need a contingency plan).

Automation of the Proof

Initial State: KB. Actions: inference rules applied to sentences in KB. Transition Model: KB' <- RESULT(KB, rule) where KB' is KB plus all the inferred sentences. Goal: State (KB) containing the sentence to be proved.

Local Beam Search

Keep track of k states rather than just one. Begin with k randomly generated states. At each step, generate all successors of the k states and select the k best from the complete list; repeat. It differs from k random-restart searches because random restarts are independent of each other, while local beam search passes information among the k threads. Potential disadvantage: lack of diversity.

State Space Graphs

Mathematical representation of a search problem. Nodes are world configurations, edges lead to successors, and the goal test is a set of goal nodes. Each state occurs only once. Can rarely build the full graph, but conceptually it's a good idea.

Inference Rules

Modus Ponens, And-Elimination, and biconditional elimination/introduction: from A↔B infer (A→B) ∧ (B→A), and from (A→B) ∧ (B→A) infer A↔B. Multiple examples of applying these in lecture 5b. These are sound but not always complete: if the available rules are inadequate, the proof may not be reachable. Unification (GMP) and chaining apply a SET of inference rules; resolution applies a single inference rule.

Problem-Solving Agent

Needs a goal, then aims at reaching that goal, along the way maximizing performance measure. Need to use the right level of abstraction to model the world (like, towns = states).

Reasoning

Needs some form of knowledge representation. In "problem-solving agents", knowledge is baked into the problem statement. Problem: Limited and inflexible, has to anticipate all facts that an agent might need, and bake them into the problem statement.

Admissible

Never overestimates the true cost to reach the goal. If h(n) never overestimates, then neither does f(n). Admissible heuristics can be derived from the exact solution cost of a "relaxed" version of the problem. Given a collection of admissible heuristics, the composite heuristic (which uses whichever function is most accurate on the node in question, i.e., takes the max) is also admissible, and it dominates each of the component heuristics.
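
The max-of-admissible-heuristics composite can be shown in a couple of lines. The h1/h2 feature names here are hypothetical stand-ins (think misplaced tiles vs. Manhattan distance on the 8-puzzle):

```python
def composite(heuristics):
    """Max over admissible heuristics: still admissible (each component is a
    lower bound, so the max is too) and dominates every component."""
    return lambda node: max(h(node) for h in heuristics)

h1 = lambda n: n["misplaced"]    # hypothetical precomputed features
h2 = lambda n: n["manhattan"]
h = composite([h1, h2])

node = {"misplaced": 5, "manhattan": 11}
assert h(node) == 11             # picks the most accurate (largest) estimate
```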

First-Order Logic

Ontological commitments: objects (wheel, door, body, etc.), functions (colorOf(car)), relations (Inside(car, passenger)), and properties (isOpen(door)). Functions return objects; relations return true or false (they are predicates).
Function: ColorOf(CarA) = BLACK. Relation: ColorOfCar(CarA, BLACK) = True. Property: IsBlackCar(CarA) = True.
Ex: "One plus two equals three". Objects: one, two, one plus two, three. Relations: equals. Properties: none. Functions: plus ("one plus two" is the name of the object returned by passing one and two into the plus function).
Ex: "Squares surrounding the Wumpus are smelly". Objects: squares, Wumpus. Relation: neighboring. Property: smelly. Functions: none.

Behavior

Percepts map to actions: f: P* -> A. Ideal mapping: specifies which actions to take at any time. Can be implemented with a look-up table, algorithm, etc.

PAGE

Percepts, actions, goals, environment. How we define an agent.

PDDL

Planning Domain Definition Language. Uses a restricted subset of first-order logic (FOL) to make planning efficiently solvable. State: a conjunction of functionless ground literals. Goal: a conjunction of literals, which may contain variables. Examples on Lecture 7b slides. When applying an action, delete its negative (negated) effect literals from the state and add its positive ones. A billion examples on slides; write them down.

Search

Process of looking for a sequence of agent actions that reach the goal. A search algorithm takes a formulated problem as input and returns a solution in the format of an action sequence. Once the solution is found, the actions can be carried out through execution.

Standard Logical Equivalence

Put these on your cheat sheet - on slide deck 5a. Includes DeMorgan's and rules that let you go from one operand to another and back.

Implication

P→Q is true if P is false or if P and Q are true.

Equivalence

P↔Q is true if P and Q are both false or if P and Q are both true.

Deductive Reasoning

Reasoning in which a conclusion is reached by stating a general principle and then applying that principle to a specific case (The sun rises every morning; therefore, the sun will rise on Tuesday morning.)

Quantifiers

Representing collections of objects without enumeration (naming individuals). Ex: all Trojans are clever; someone in the class is sleeping. Universal quantification (for all): ∀. Existential quantification (there exists): ∃.

Diagnostic Rule

Rule that infers cause from effect. ∀ y Breezy(y) => ∃ x Pit(x) ^ Adjacent(x, y) Not complete - says nothing about the other squares

Causal Rule

Rule that infers effect from a cause. ∀ x, y Pit(x) ^ Adjacent(x, y) => Breezy(y) Not complete - says nothing about the other squares

FOL Semantics

Sentences in FOL are interpreted with respect to a model (a model contains objects and relations among these objects). Terms refer to objects (e.g., Door, Alex, StudentOf (Paolo) ). Constant symbols refer to objects, predicate symbols refer to relations, function symbols refer to functions. An atomic sentence predicate(term1, ..., termn) is true iff the relation holds between objects term1, ..., termn. A relation is a set of tuples of objects: Parent := {<John, James>, <Mary, James>, etc. ...}.

Recursive Best-First Search

Similar to recursive DFS with depth limit, but uses f-limit to keep track of the total cost of the best alternative path available from any ancestor of the current node. If the cost of the current node (f-cost) exceeds this limit (f-limit), the recursion unwinds back to the alternative path. Trace through on slide deck 4. Space is linear in the depth of the deepest optimal solution. Optimal if the heuristic is admissible. Time complexity is hard to characterize - depends on accuracy of the heuristic function and how often the best path changes.

Agent Types

Simple (stateless) reflex agents, stateful reflex agents (with internal states - without previous state, may not be able to make a decision), goal-based agents, utility-based agents, and learning agents.

Resolution

Single rule that yields a complete inference algorithm. "Either... or...". With resolvent and pivot variable x: (C ∨ x) ∧ (D ∨ ¬x) ⊢ (C ∨ D). When x is false, the first clause forces C; when x is true, the second forces D; so regardless, C ∨ D holds. Can be viewed as "transitivity of implication": (¬C → x) ∧ (x → D) gives (¬C → D), i.e., (C ∨ D). When D = ¬C, the resolvent (C ∨ ¬C) equals true. When the clauses are x and ¬x, the resolvent is the empty clause, i.e., false. More examples on Lecture 7a slides.
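
A single resolution step on propositional clauses can be sketched with clauses as sets of literals. The ('symbol', polarity) encoding is my own convention, not the lecture's:

```python
def resolve(c1, c2):
    """One resolution step. Clauses are frozensets of literals; a literal is
    ('P', True) for P or ('P', False) for not-P. Returns all resolvents."""
    resolvents = []
    for (sym, pos) in c1:
        if (sym, not pos) in c2:                       # complementary pair
            rest = (c1 - {(sym, pos)}) | (c2 - {(sym, not pos)})
            resolvents.append(rest)
    return resolvents

# (C v x) and (D v not-x) resolve to (C v D):
r = resolve(frozenset({("C", True), ("x", True)}),
            frozenset({("D", True), ("x", False)}))
assert frozenset({("C", True), ("D", True)}) in r

# x and not-x resolve to the empty clause (false) -> the UNSAT signal:
assert resolve(frozenset({("x", True)}), frozenset({("x", False)})) == [frozenset()]
```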

Syntax

Specifies all well-formed sentences in the KB.

Knuth's Problem

Starting with the number 4, any other number can be reached using a combination of sqrt, floor, and factorial. States: all positive numbers. Initial state: the number 4. Actions: {sqrt, factorial, floor}. Transition model: as given by the mathematical definitions. Goal state: the desired integer.
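
The formulation can be run directly as a breadth-first search. This sketch caps factorial arguments and overall magnitude to keep the infinite state space finite; those caps are my own assumption, not part of the problem:

```python
import math
from collections import deque

def knuth(target, limit=1e25):
    """BFS from 4 using {sqrt, floor, factorial}. Factorial is only applied
    to integers <= 24 and values are capped by `limit`, so the search is
    finite even though the true state space is not."""
    frontier = deque([(4.0, [])])
    seen = {4.0}
    while frontier:
        n, ops = frontier.popleft()
        if n == target:
            return ops
        succs = [("sqrt", math.sqrt(n)), ("floor", float(math.floor(n)))]
        if n.is_integer() and n <= 24:
            succs.append(("factorial", float(math.factorial(int(n)))))
        for op, m in succs:
            if m <= limit and m not in seen:
                seen.add(m)
                frontier.append((m, ops + [op]))
    return None

# Reaching 5 takes (4!)! followed by repeated sqrt and a final floor.
ops = knuth(5)
assert ops is not None
```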

Simple Reflex Agent

Stateless, without memory. No internal models, act by stimulus-response to the current state of the environment. Complex patterns of behavior may still arise from their actions. Benefits: robustness, fast response time. Challenges: scalability, how intelligent could they actually be.

Theorem

Statement that can be inferred from the KB.

Pattern Database

Stores exact solution cost for every possible subproblem instance. For 8-Puzzle, every possible configuration of the first four tiles and the blank, locations of the other four tiles are irrelevant (but moving them still counts toward the solution cost). Can be constructed by exhaustively searching back from the goal, and recording the cost of each pattern encountered.

Universal Elimination

Substitute a variable with a ground term: from ∀ v α, infer SUBST({v/g}, α).
Ex: ∀ x King(x) ∧ Greedy(x) => Evil(x), with {x/John}, {x/Richard}, {x/Father(John)}:
King(John) ∧ Greedy(John) => Evil(John)
King(Richard) ∧ Greedy(Richard) => Evil(Richard)
etc.

Existential Elimination

Substitute a variable with a single new constant symbol. Ex: ∃ x Crown(x) ∧ OnHead(x, John) becomes Crown(C1) ∧ OnHead(C1, John), as long as C1 doesn't appear elsewhere in the KB.

Utility Function

Sum of the step costs of individual edges. Step cost of an edge would be like Cost( In(Arad), Go(Zerind), In(Zerind) ).

Knowledge-Based Agent

TELL KB what was perceived - insert new sentences (representations of facts) into KB. ASK KB what to do - use reasoning to examine actions and select the best.

DFS

The "deepest" node in the frontier is chosen next for expansion. The frontier uses a LIFO stack. Incomplete unless a graph search is used in a finite state space. Let m be the max depth of any state, number of nodes explored is O(b^m) but only has to store O(bm) nodes at a time - way better than BFS.

Dynamic

The environment is changing during deliberation. Taxi driving, not a crossword, not a poker game, sort of image analysis.

Contours

The values of f(n) are non-decreasing along any path from the initial state - they form these. In Uniform-Cost Search, these just look like circular bands around the start state.

Adversarial Search

There is another agent working against you. The opponent is not predictable - can't be represented by a transition model. Game trees instead of search trees. Solution is not a "sequence of actions" but a contingency plan.

Generalized Modus Ponens

There is a substitution θ such that SUBST(θ, pᵢ') = SUBST(θ, pᵢ) for all i. Examples on Lecture 7a slides.

Definite Clause

Universally quantified clause in which exactly one literal is a positive literal. We omit the "universal quantifier" in FOL definite clauses, but you should assume that, implicitly, all variables are universally quantified. Apply "Existential Instantiation" to make sure the KB has only universally quantified clauses. Ex:
King(x) ∧ Greedy(x) => Evil(x)
King(John)
Greedy(y)
∃ x Owns(Nono, x) ∧ Missile(x) becomes Owns(Nono, M1), Missile(M1)

Conjunctive Normal Form (CNF)

Variable (a symbol that is true or false), literal (a variable or its negation), clause (disjunction of literals), CNF formula (conjunction of clauses). Convert to CNF using equivalence formulas. Examples of converting to this on the Lecture 7a slides.

Reverse Turing Test

Vision is particularly difficult for machines. Things like CAPTCHA take advantage of that.

FOL Forward Chaining

When a new fact (p) is added to the (KB), for each rule such that (p) unifies with a premise, if the other premises are known, then add the conclusion to the (KB) and continue chaining. Forward chaining is "data-driven" - e.g., inferring facts from percepts. Sound, because every inference is an application of generalized modus ponens. Complete, because answers every query whose answer is entailed by the KB of FOL definite clauses. If the query is not entailed by the KB, then may never terminate. Examples on Lecture 7a slides.

FOL Backwards Chaining

When a query (q) is asked, if a matching fact (q') is known in the (KB), return the unifier. For each rule whose consequence (q') matches (q), attempt to prove each of its premises by backward chaining. A depth-first search - in any knowledge base of realistic size, many search paths will result in failure. Examples on Lecture 7a slides.

State Space Explosion

When d (depth) is 20, tree search of a grid gives 4^20, or about a trillion, while graph search of a grid gives 4 × 20 = 80 total states.

Equality

term1 = term2 is true under a given interpretation iff term1 and term2 refer to the same object.

Universal Quantification

∀ <variables> <sentence>. ∀ P means the conjunction of all instances of P. Implication is the natural connective to use with ∀. Common mistake: using ∧ with ∀. "Everyone in 360 is smart": ∀ x In(cs360, x) → Smart(x). Mistaken approach: ∀ x In(cs360, x) ∧ Smart(x) actually means "everyone is in 360 and everyone is smart".

Quantifier Properties

∀ x ∀ y is the same as ∀ y ∀ x. ∃ x ∃ y is the same as ∃ y ∃ x. But ∃ x ∀ y is NOT the same as ∀ y ∃ x:
∃ x ∀ y Loves(x, y): there is someone who loves everybody.
∀ y ∃ x Loves(x, y): everybody is loved by at least one person.

Existential Quantification

∃ <variables> <sentence>. ∃ P represents the disjunction of all instantiations of P. And (∧) is the natural connective to use with ∃. Common mistake: using → with ∃. "Someone in the cs360 class is smart": ∃ x In(cs360, x) ∧ Smart(x). Mistaken approach: ∃ x In(cs360, x) → Smart(x) would be true whenever there exists someone NOT in 360 (a false antecedent makes the implication true).

Rational Action

The action that is expected to maximize the performance measure given the evidence provided by the percept sequence to date and whatever built-in knowledge the agent has. Rational is BEST (yes, to the best of its knowledge) and OPTIMAL (to the best of its ability), but is not always omniscient or successful.

State

A configuration of an environment / agent.

Node

A data structure that constitutes part of a search tree (may have fields like parent, action, path cost, depth, etc.).

Architecture

A device to execute the agent program (general-purpose computer, robot, etc.).

Robot Navigation

A generalization of the route-finding problem. Rather than following a discrete set of routes, a robot can move in a continuous space with an infinite set of actions and states.

Abductive Reasoning

A precursor to deductive and inductive thought; the process of developing a hypothesis or a "hunch" based on a limited amount of information. "All humans are mortal" & "Socrates is a mortal": "Socrates may be a human"

Intelligence

Acting humanly, thinking rationally, acting rationally, or thinking humanly? We will define it as acting rationally.

Zero-Sum Game

A situation in which one person's gain is another's loss and all scores sum to a constant (zero-sum or constant-sum): winning is +1 (or 1), losing is -1 (or 0), and a tie is 0 (or 1/2).

Intelligent Agent

A software system (AI agent) that can act upon its environment and sense the environment. Uses a performance measure. Could be as simple as a thermostat. Anything that can be viewed as perceiving its environment through sensors (percepts) and acting upon that environment through its actuators to maximize progress towards its goals.

Minimax

A strategy of game theory employed to minimize a player's maximum possible loss. Perfect play for deterministic environments with perfect information. Basic idea: choose the move with the highest minimax value = the best achievable payoff against a perfect opponent. Generate the entire game tree, determine the utility of each terminal state, back up the utility values through the tree to compute minimax values, and at the root node choose the move with the highest minimax value. Know how to do this!! MAX wants to maximize MAX's utility; MIN wants to minimize MAX's utility. On lecture 4b slides. For a three-player game, we'd need a vector of 3 values. Complete and optimal; time complexity is O(b^m), space complexity is O(bm). The worst-case exponential time complexity is unavoidable, but alpha-beta pruning can help.
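
The back-up computation can be sketched recursively. The nested-list game tree and the succ/util callables here are illustrative assumptions, not the lecture's code:

```python
def minimax(state, player, successors, utility):
    """Minimax value of a state. `player` is 'MAX' or 'MIN';
    `successors(state)` lists child states; `utility` scores terminals."""
    children = successors(state)
    if not children:                       # terminal state
        return utility(state)
    values = [minimax(c, "MIN" if player == "MAX" else "MAX", successors, utility)
              for c in children]
    return max(values) if player == "MAX" else min(values)

# Tiny two-ply tree as nested lists; leaves are utilities for MAX.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
succ = lambda s: s if isinstance(s, list) else []
util = lambda s: s
# MIN picks 3, 2, 2 in the three subtrees; MAX picks the best of those.
assert minimax(tree, "MAX", succ, util) == 3
```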

Turing Test

A test proposed by Alan Turing in which a machine would be judged "intelligent" if the software could use conversation to fool a human into thinking it was talking with a person instead of a machine. An imitation game - an operational definition of intelligence. Requires natural language processing (to communicate), knowledge representation (to store and retrieve information), automated reasoning (using stored knowledge to answer questions and draw new conclusions), and machine learning (adapt to new experiences and extrapolate patterns).

Traveling Salesman Problem

A touring problem where each city is visited exactly once and the aim is to find the shortest tour.

Inductive Reasoning

A type of logic in which generalizations are based on a large number of specific observations. "X/Y/Z is a human" & "X/Y/Z is mortal": "All humans must be mortal".

Environment Types

Accessible vs. inaccessible. Deterministic vs. nondeterministic. Episodic vs. non-episodic. Hostile vs. friendly. Static vs. dynamic. Discrete vs. continuous.

Reflex Agent with State

Accounts for some factors in addition to the percepts from the environment.

NP-Complete

An NP problem to which any other NP problem can be reduced. If you can solve one of them in polynomial time, you can solve them all in polynomial time.

Episodic

An episode is each percept and action pair. The quality of the action does not depend on the previous episode (like the screening of defective parts on an assembly line - only the current part is looked at). Image analysis.

Interacting Agents

An obstacle-avoidance may share traits with a lane-keeping agent, but goals and environments and some other things may be different.

NP-Hard

At least as hard as the hardest problems in NP: every problem in NP can be reduced to this problem, and the reduction itself takes polynomial time. Many AI problems are NP-hard, so the known algorithms for them are exponential.

Graph Search

Avoids the problems of tree search. Keeps track of the "explored set", remembering visited nodes to avoid expanding them again. Can be implemented with a hashtable: states serve as hash keys, so two equal states map to the same entry. Pseudocode on second slide deck.

Iterative Deepening Search

Combination of BFS and DFS. Complete and optimal, like BFS, and O(bd) space requirement, like DFS. Limit starts at 0 and increases each time. Time complexity is O(b^d). There is some extra cost for generating upper levels multiple times, but the extra cost is not large. The preferred uninformed search for many AI applications (when the search space is large and the depth of the solution is not known).
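
The limit-then-deepen loop can be sketched as follows (the binary counting tree used as the toy state space is illustrative):

```python
def depth_limited(state, is_goal, successors, limit):
    """DFS that treats nodes at depth `limit` as having no successors."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for s in successors(state):
        path = depth_limited(s, is_goal, successors, limit - 1)
        if path is not None:
            return [state] + path
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    """Run depth-limited DFS with limit 0, 1, 2, ...: BFS-like shallowest-first
    discovery with DFS-like memory. Upper levels are regenerated each round,
    but that extra cost is small."""
    for limit in range(max_depth + 1):
        path = depth_limited(start, is_goal, successors, limit)
        if path is not None:
            return path
    return None

# Toy tree: children of n are 2n and 2n+1.
path = iterative_deepening(1, lambda n: n == 6, lambda n: [2 * n, 2 * n + 1])
assert path == [1, 3, 6]
```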

Simulated Annealing

Combines "hill climbing" with "random walk" in a way that yields both efficiency and completeness. The smaller (T) is, the closer the exponent is to -∞, and the closer the probability is to 0. Pseudocode on slides.
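
The accept-downhill-with-probability-e^(delta/T) loop can be sketched as below. The 1-D toy objective and the linear cooling schedule are made-up illustrations, not the slides' pseudocode:

```python
import math, random

def simulated_annealing(start, value, neighbor, schedule, seed=0):
    """Always accept uphill moves; accept downhill moves with probability
    e^(delta/T). As T cools toward 0, the exponent goes to -inf and the
    acceptance probability goes to 0, leaving pure hill climbing."""
    rng = random.Random(seed)
    current = start
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:                       # fully cooled: stop
            return current
        nxt = neighbor(current, rng)
        delta = value(nxt) - value(current)
        if delta > 0 or rng.random() < math.exp(delta / T):
            current = nxt
        t += 1

# Toy: maximize -(x - 3)^2 over the integers with +/-1 steps.
best = simulated_annealing(
    start=10,
    value=lambda x: -(x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
    schedule=lambda t: max(0.0, 2.0 - 0.002 * t),   # linear cooling
)
```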

Hill-Climbing Search

Continually moves in the direction of increasing value (steepest-ascent version). Successors of a state are generated by moving a single queen to another square in its column (for 8-queens, 8 * 7 = 56 successors). A heuristic cost function for 8-queens is the number of attacking pairs (to be minimized). Difficulties: local maxima (a peak that is higher than each of its neighboring states but lower than the global maximum), ridges (a sequence of local maxima), plateaus (a flat area of the state-space landscape). Variants: stochastic (choose at random among the uphill moves, with probability varying with the steepness), first-choice (like stochastic hill climbing, but take the first successor that is better than the current state), random-restart (conduct a series of hill-climbing searches from randomly generated initial states - surprisingly good; if each restart has probability p of success, the expected number of restarts is 1/p; 8-queens has about a 14% success rate per restart, so roughly 7 restarts are expected).
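
The attacking-pairs cost and the one-queen-per-step steepest-ascent loop can be sketched as follows (boards are tuples where `board[c]` is the row of the queen in column c, so column conflicts are impossible by construction):

```python
from itertools import combinations

def attacking_pairs(board):
    """Heuristic cost for n-queens: number of pairs of queens that
    attack each other along a row or a diagonal."""
    h = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(board), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            h += 1
    return h

def steepest_ascent(board):
    """Generate all n*(n-1) single-queen moves (56 for 8-queens), take
    the best one, and stop at a local minimum of the cost."""
    n = len(board)
    while True:
        best, best_h = board, attacking_pairs(board)
        for col in range(n):
            for row in range(n):
                if row != board[col]:
                    cand = board[:col] + (row,) + board[col + 1:]
                    ch = attacking_pairs(cand)
                    if ch < best_h:
                        best, best_h = cand, ch
        if best == board:
            return board
        board = best

print(attacking_pairs((0, 1, 2, 3, 4, 5, 6, 7)))  # all on one diagonal → 28
```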

Applications of AI

Data mining, deep learning, search engines, machine translation, routing, self-driving cars, walking robots, games (chess, go, AlphaGo), video games.

Priority Queues

Data structure used to keep track of the frontier - can be LIFO, FIFO, or an actual priority queue.

Alpha-Beta Pruning

Don't explore branches of the game tree that cannot lead to a better outcome (than those already explored). Know how to do this!! The value of the root is independent of pruned leaves that don't matter. Alpha = value of the best (highest-value) choice for MAX found so far - initialized to -∞, and only increases. Beta = value of the best (lowest-value) choice for MIN found so far - initialized to +∞, and only decreases. Trace through on lecture 4b slides. If Player has a better choice, either at the parent node of n or at any choice point further up, then n will never be reached in actual play. Pruning does NOT affect the final result of Minimax. Effectiveness of pruning is highly dependent on the order in which states are examined. Worst case: no improvement at all. Best case: O(b^(m/2)). α is the value of the best choice (found so far at any choice point along the path) for MAX - if (v) is worse than α, MAX will avoid it (i.e., prune the branch). β is defined similarly for MIN. Minimax generates the entire game search space; alpha-beta pruning can avoid large parts of it, but still has to search all the way to at least some terminal states. If terminal states are too deep, apply an evaluation function at a cut-off state to estimate the expected utility.
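
A sketch of minimax with alpha-beta cutoffs, using hypothetical `children` and `value` callables in place of a real game interface:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Alpha starts at -inf and only rises; beta starts at +inf and only
    falls; a branch is cut as soon as alpha >= beta, because the player
    above would never let play reach it."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        v = float('-inf')
        for child in kids:
            v = max(v, alphabeta(child, depth - 1, alpha, beta, False, children, value))
            alpha = max(alpha, v)
            if alpha >= beta:
                break            # beta cutoff: MIN above will never allow this
        return v
    else:
        v = float('inf')
        for child in kids:
            v = min(v, alphabeta(child, depth - 1, alpha, beta, True, children, value))
            beta = min(beta, v)
            if alpha >= beta:
                break            # alpha cutoff: MAX above already has better
        return v

# Usage: the classic two-ply example with leaf groups [3,12,8], [2,4,6], [14,5,2].
tree = {'root': ['m1', 'm2', 'm3'],
        'm1': [3, 12, 8], 'm2': [2, 4, 6], 'm3': [14, 5, 2]}
print(alphabeta('root', 2, float('-inf'), float('inf'), True,
                lambda n: tree.get(n, []), lambda n: n))  # → 3
```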

Look-Up Table

For an obstacle-avoidance agent, could map proximity to action: 10 → no action, 5 → turn 30 degrees left, 2 → stop. How do we generate this? How large should it be? How should we choose an action? If our collision agent has three proximity sensors {close, medium, far}^3, a steering wheel {left, straight, right}, and a brake {on, off}, then the lookup table size is the magnitude of each grouping multiplied together (3 * 3 * 3 * 3 * 2).
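
The size arithmetic can be checked by enumerating the combinations directly (a sketch; the sensor and effector value names are taken from the example above):

```python
from itertools import product

# Three proximity sensors, each reading one of three values,
# give 3^3 = 27 distinct percept vectors.
percepts = list(product(['close', 'medium', 'far'], repeat=3))

# Steering (3 settings) x brake (2 settings) gives 6 possible actions.
actions = list(product(['left', 'straight', 'right'], ['on', 'off']))

print(len(percepts), len(actions), len(percepts) * len(actions))  # 27 6 162
```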

Agent Design Process

Formulate, search, execute.

Goal-Based Agent

Goal information is needed for this agent type to make a decision.

Obstacle Avoidance Agent

Goal: avoid running into obstacle. Percepts: distance. Sensor: proximity sensing. Effectors: steering wheel, brake. Action: stop, turn, no action. Environment: freeway.

Completeness

Guaranteed to find a solution when there is one.

Optimality

Guaranteed to find the optimal solution.

Time Complexity

How long does it take to complete the search? 4x is linear and 4^x is exponential in terms of x. A polynomial-time solvable problem is considered "tractable", whereas an exponential-time solvable problem is considered "intractable". The time taken to come up with a solution is usually longer than the time required to check if it's valid.

Local Search

If you just want the goal state and not the path to it, this is a good alternative. Operates on a single current node (rather than paths) and moves only to neighbors of that node. Uses little memory and often finds reasonable solutions in large or infinite spaces. Local optimum vs. global optimum (the overall best option), maximum vs. minimum.

Wumpus World

Illustrates the unique strengths of "knowledge-based" agents. A cave consisting of dark rooms on a 4x4 grid. The agent can move Forward, Turn Left, or Turn Right; moving Forward into a wall does not change location. A beast (the Wumpus) is hidden in one room; the agent will be eaten if it walks into that room. The Wumpus can be shot by the agent, but the agent has only one arrow. Pits are hidden in some rooms; the agent will die if it walks into them. Senses are stench, breeze, glitter, bump, and scream. Performance measure: +1000 for winning and coming out with the gold, -1000 for falling in a pit or being eaten, -10 for using up the arrow, etc. Deterministic, not accessible, static, discrete, sequential (actions have long-term consequences, so not episodic). The KB must keep track of where newly discovered things are or may be. You can't play using search alone. If the KB is correct, correctness is guaranteed. Diagnostic rule: ∀y Breezy(y) ⇒ ∃x Pit(x) ∧ Adjacent(x, y). Causal rule: ∀x, y Pit(x) ∧ Adjacent(x, y) ⇒ Breezy(y). Neither is complete - each says nothing about the rest of the squares. The complete Breezy predicate is the biconditional: ∀y Breezy(y) ⇔ [∃x Pit(x) ∧ Adjacent(x, y)].

Logic

Language for knowledge representation - can be propositional logic (boolean) or first-order logic (FOL). Can combine and recombine information to suit many purposes.

A* Search

Minimize the total estimated cost. At each step, expand the node with the lowest value of total estimated solution cost: f(n) = g(n) + h(n). Pseudocode is identical to Uniform-Cost Search except it uses (g+h) instead of (g) to set the priority queue. Avoid expanding along paths that are known to be expensive. Complete (if there's a solution, A* will find it) and optimal (A* will find the best path). Heuristic has to be admissible. Time complexity is exponential at worst; space complexity keeps all nodes in memory. Also optimally efficient - no other optimal algorithm will expand fewer nodes. Expands no nodes with f(n) > C*, where C* is the cost of the optimal path. Usually runs out of space before it runs out of time - the remedy is memory-bounded heuristic search (e.g., SMA*).
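
A sketch of A* on an explicit graph. The `neighbors` callable (yielding (successor, step_cost) pairs) and the straight-line estimates in `est` are hypothetical stand-ins; `est` is admissible for this graph by construction:

```python
import heapq

def astar(start, goal, neighbors, h):
    """Priority queue ordered by f(n) = g(n) + h(n). Stale queue
    entries (superseded by a cheaper path) are skipped on pop."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                             # stale entry
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
est = {'S': 4, 'A': 5, 'B': 1, 'G': 0}           # admissible estimates
print(astar('S', 'G', lambda s: graph[s], lambda s: est[s]))  # → (5, ['S', 'B', 'G'])
```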

Deterministic

Next state is determined by current state and the action. A crossword, image analysis.

Best-First Search

Node is chosen for expansion based on some evaluation function f(n), which returns an estimated total solution cost at node n. Pseudocode is identical to Uniform-Cost Search except that the priority queue is ordered by f(n) rather than by the actual path cost accumulated so far. To make this optimal and complete, we need to use f(n) = g(n) + h(n), which gives A*.

NP

Nondeterministic polynomial: a problem where you can guess a solution and then, in polynomial time, check whether the guess is valid. If you have a nondeterministic Turing machine, you can solve the problem in polynomial time. If you have a deterministic Turing machine, you may need exponential time.

Conflict Resolution

Performed by Action-Selecting Agents. Can override (one agent fully takes over), arbitrate (it depends, collision-avoiding agent may override a lane-keeping agent if an obstacle is close, otherwise the LKA will take over), or compromise (take action that satisfies both agents).

Genetic Algorithm

Raising the granularity of search. Move "3 queens per step" instead of "1 queen per step". Crossover: create a state by randomly combining components of two states. Mutation: randomly change the position of a queen. Crossover + mutation + selection. Fitness of states: use most promising states for "cross-over" and "mutation" -- they form a population. Pseudocode on slides.
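
The crossover and mutation operators can be sketched for n-queens boards represented as tuples of queen rows (a minimal illustration; the slides' pseudocode adds fitness-proportional selection over a whole population):

```python
import random

def crossover(p1, p2, rng):
    """Single-point crossover: the child takes a prefix from one parent
    and the suffix from the other, recombining components of two states."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(board, rng):
    """Mutation: randomly change the position of one queen in its column."""
    col = rng.randrange(len(board))
    return board[:col] + (rng.randrange(len(board)),) + board[col + 1:]

rng = random.Random(0)
p1, p2 = (0, 1, 2, 3), (3, 2, 1, 0)
child = crossover(p1, p2, rng)
print(child, mutate(child, rng))
```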

Rational Agent

Rational behavior is doing the right thing (expected to maximize the performance measure). Most general overview of AI because it includes correct inference (laws of thought), uncertainty handling, resource limitation considerations, and cognitive skills (NLP, AR, ML, knowledge representation, etc.). Goal of rationality is more defined than the goal of acting humanly.

Touring

Related to route-finding, but we want to visit each city (state) at least once, so the state must record the set of cities visited: for a set of twenty states, the power set has 2^20 elements. <In(Bucharest), Visited(..., ..., ..., ... )>, for example.

Simplified Memory-Bound A* Search

SMA* starts just like A*, expanding the best leaf until memory is full. At this point, it adds a new node to the search tree only after dropping an old one (the worst leaf node, with the highest f-cost). Complete if there is any reachable goal within the memory bound (that is, the depth of the shallowest node is less than the memory size). Optimal if any optimal solution is reachable within the memory bound, otherwise returns the best reachable solution.

Total Cost

Search cost (like time taken to find a solution) + solution cost (like cost of paths to the solution).

Accessible

Sensors give access to complete state of the environment. A crossword, image analysis.

Vacuum World

States: the agent's location plus the dirt status of RoomA and RoomB (2 locations × 2 × 2 dirt states = 8 states). Initial state: any state. Actions: {left, right, suck}. Transition model: RESULT(<A, dirty, clean>, Suck) = <A, clean, clean>, ... . Goal test: <A, clean, clean> or <B, clean, clean>. Path cost: each step costs 1. State space has 8 nodes, could draw it out if you wanted. On slides, second deck.

Route-Finding

States: location, current time, traffic. Initial state: specified by user's query. Actions: take any flight from the current location, in any seat, leaving enough time for getting to connecting flights if necessary. Transition model: state resulting from taking a flight will have the flight's destination as the current location and the flight's arrival time as the current time. Goal test: whatever place the user wanted to go. Path cost: depends on what user wants to focus on - cost, time, time of day, etc.

8-Puzzle

States: locations of the nine tiles. Initial state: any state. Actions (moving blank tile): up, down, left, right. Transition model: RESULT( <7,2,4,5,B,6,8,3,1>, Left ) = <7,2,4,B,5,6,8,3,1>. Goal state: { <B, 1, 2, 3, 4, 5, 6, 7, 8> }. Path cost: each step costs one.

BFS

The "shallowest" node in frontier is chosen for expansion. The frontier is implemented using a FIFO queue. Complete: If the shallowest goal node is at some finite depth, d, then BFS will eventually find the goal node. Optimal: If the path cost is a non-decreasing function. Pseudocode on slides. Total nodes generated and time taken is O(b^d) - memory is usually the limiting factor.

Uninformed Search

The agent has no idea, among the possible actions, which action is better (like if the agent doesn't have the whole map). All the agent can do is generate successor states and check if a state is a goal state. Graph of the time and space complexity of each at the end of third slide deck.

Informed Search

The agent has some idea, among the possible actions, which action is better (like if we know the straight-line distance to each city from the one we're at).

Heuristic Function

The estimated cost of the cheapest path from the state at node n to a goal state. Heuristics are not derived from the problem statement alone but from some additional knowledge. Has to be admissible. Consistency is sufficient for achieving optimality while applying A* to graph search. Consistent if for any successor n' of node n generated by action a we have h(n) <= c(n, a, n') + h(n'). Underestimation is fine (and encouraged). For 8-Puzzle, the A* Search branching factor is < 3, average solution cost is 22 steps - exhaustive tree search would reach about 3^22 states, graph search only 181440 states; Manhattan distance is a good heuristic. Domination translates into efficiency - if h2(n) >= h1(n) for all n (h2 dominates h1), h2 will never expand more nodes than h1. When h(n) = 0, as in Uniform-Cost Search, g(n) is only bounded by C* (i.e., no guidance from the goal at all). Can learn heuristics from experience - inductive learning of a heuristic function h(n) in terms of features x1(n) and x2(n): h(n) = c1x1(n) + c2x2(n). Candidate features: "number of misplaced tiles" or "number of pairs of adjacent tiles that are not adjacent in the goal state". Good heuristics can be constructed by (1) relaxing the problem definition, (2) storing precomputed solution costs for subproblems in a pattern database, or (3) learning from experience.
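
The Manhattan-distance heuristic for the 8-puzzle can be sketched as follows, using the 9-tuple state encoding from the 8-Puzzle card (states read row by row, 0 for the blank):

```python
def manhattan(state, goal):
    """Sum over tiles of |row - goal_row| + |col - goal_col|,
    ignoring the blank (0). Admissible: every move slides one tile
    one square, so it can reduce the sum by at most 1."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = goal_pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(manhattan((7, 2, 4, 5, 0, 6, 8, 3, 1), goal))  # → 18
```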

Uniform-Cost Search

The shallowest node is the node with the lowest path cost. Always expands nodes in order of their optimal path cost, so the first goal node selected for expansion must be an optimal solution. Two differences from BFS: the goal test is applied when a node is selected for expansion (not when it is first generated, as in BFS), and a node in the frontier is replaced if a better path to the same state is found. Look at the example on the slides and be able to trace it (3rd deck). Potential pitfall: zero- or negative-cost cycles will create an infinite loop. Completeness is guaranteed only if the cost of every step exceeds some small positive constant ε. Complexity does not depend on d (the depth); instead, use C* (path cost of the optimal solution): O(b^(1 + ⌊C*/ε⌋)). When ⌊C*/ε⌋ = d, the time complexity becomes similar to that of BFS: O(b^(d+1)).
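
Both differences from BFS show up in this sketch (hypothetical `neighbors` callable yielding (successor, step_cost) pairs; all step costs must exceed some ε > 0):

```python
import heapq

def uniform_cost(start, goal, neighbors):
    """Priority queue ordered by path cost g alone. The goal test runs
    when a node is selected for expansion, and a frontier entry is
    superseded whenever a cheaper path to the same state is found."""
    frontier = [(0, start, [start])]
    best = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:          # testing at expansion time guarantees optimality
            return g, path
        if g > best.get(state, float('inf')):
            continue               # superseded entry
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best.get(nxt, float('inf')):
                best[nxt] = g2     # replaces any worse path to nxt
                heapq.heappush(frontier, (g2, nxt, path + [nxt]))
    return None

# Usage: the direct edge S-G (cost 10) is generated first but never chosen,
# because the cheaper path through A supersedes it.
graph = {'S': [('A', 1), ('G', 10)], 'A': [('G', 1)], 'G': []}
print(uniform_cost('S', 'G', lambda s: graph[s]))  # → (2, ['S', 'A', 'G'])
```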

Bidirectional Search

Two searches: one forward from the root and one backward from the goal. Goal test replaced with frontier intersect test. Requires a method of computing predecessors. Time is b^(d/2) + b^(d/2) which is way better than b^d. Computing "predecessors" may not be easy - the notion of "reversible actions" may not exist in the application, and even if actions are reversible, there may be many predecessors.

Backtracking Search

Variant of DFS. Only generates one successor at a time, using up far less memory. O(m), in comparison to DFS which is O(bm) and BFS which is O(b^d).

