Artificial Intelligence Exam Study Guide

Map Coloring Problem

(See picture of chapter 6 slide 4)

Wumpus World Performance measure

+1000: exiting the cave with the gold
-1000: death by wumpus or bottomless pit
-1: each action taken
-10: using the arrow
The game ends when the agent exits the cave or dies.

Wumpus World Environment

4x4 grid of rooms. The agent starts in [1,1] facing right. Gold and wumpus locations are uniformly random over the squares other than the start square. Every square other than the start contains a pit with probability 0.2.

Searching with no observation

A search performed without sensors, as in the vacuum-cleaner world. Even without sensors, the agent knows things from its recorded past actions: move left, suck, move right, suck cleans the world from any starting state, because the agent only needs to remember where it has moved.

Goal-based Agents

Adds goal information to the model-based reflex agent, so the agent chooses actions that move it toward the goal.

Breadth-First Search

All nodes at a given depth are expanded before any of their successors.
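
A minimal sketch of the idea in Python; the graph and node names are invented example data, not from the course slides:

```python
from collections import deque

def bfs(graph, start, goal):
    """Expand all nodes at one depth before any of their successors."""
    frontier = deque([[start]])      # FIFO queue of paths
    explored = set()
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path              # shallowest path to the goal
        if node in explored:
            continue
        explored.add(node)
        for successor in graph.get(node, []):
            frontier.append(path + [successor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))          # ['A', 'B', 'D']
```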

Explored set

All nodes that have been expanded.

State space

All of the potential states a problem can be in. (e.g., chess has a vast number of possible arrangements of the pieces on the board, each representing a state.)

Autonomous agent

An agent is autonomous if its behavior is determined by its own experience.

Simulated Annealing

An algorithm that simulates metallurgical annealing. The search starts at a high "temperature", at which it frequently accepts solutions worse than the current one, allowing exploration. As the algorithm "cools", the probability of accepting worse moves shrinks until the search focuses only on the best solutions again. The accepted solution's quality may rise and fall many times along the way, but the temperature itself steadily decreases on a schedule.
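
A toy sketch of that loop, assuming an invented 1-D objective, neighbor step, and cooling schedule:

```python
import math, random

def objective(x):
    return -(x - 3) ** 2                  # maximum at x = 3

def simulated_annealing(x=0.0, temp=10.0, cooling=0.95, steps=1000):
    for _ in range(steps):
        neighbor = x + random.uniform(-1, 1)
        delta = objective(neighbor) - objective(x)
        # Always accept improvements; accept worse moves with
        # probability e^(delta/T), which shrinks as T cools.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = neighbor
        temp = max(temp * cooling, 1e-6)  # monotone cooling schedule
    return x

print(round(simulated_annealing(), 2))    # typically close to 3
```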

Uninformed Search

An uninformed search is a sort of brute-force search: it has no information beyond the problem definition, so it can only generate successors and distinguish goals from non-goals. Strategies differ only in the order in which nodes are expanded. Examples of uninformed search: BFS and DFS.

Agent

Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

Backtracking

Build partial assignments one variable at a time. Check whether each assignment is consistent with the constraints; backtrack and try another value if not.
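
A minimal backtracking sketch on a tiny map-coloring CSP; the regions and colors are invented example data:

```python
def consistent(var, value, assignment, neighbors):
    return all(assignment.get(n) != value for n in neighbors[var])

def backtrack(assignment, variables, colors, neighbors):
    if len(assignment) == len(variables):
        return assignment                      # complete assignment
    var = next(v for v in variables if v not in assignment)
    for value in colors:
        if consistent(var, value, assignment, neighbors):
            assignment[var] = value            # extend partial assignment
            result = backtrack(assignment, variables, colors, neighbors)
            if result:
                return result
            del assignment[var]                # backtrack on failure
    return None

neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(backtrack({}, list(neighbors), ["red", "green", "blue"], neighbors))
```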

Wumpus World CNF

B(2,2) <-> P(1,2) v P(2,1) v P(3,2) v P(2,3) becomes the CNF clauses: ~B(2,2) v P(1,2) v P(2,1) v P(3,2) v P(2,3), plus ~P(1,2) v B(2,2), ~P(2,1) v B(2,2), ~P(3,2) v B(2,2), ~P(2,3) v B(2,2) (equivalently, ~B(2,2) implies ~P(1,2) ^ ~P(2,1) ^ ~P(3,2) ^ ~P(2,3)).

Uniform cost

Breadth-first search with a priority queue: a modified breadth-first search that always expands the least-cost node.
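
Sketch with an invented weighted graph; the priority queue keeps the frontier ordered by path cost:

```python
import heapq

def uniform_cost(graph, start, goal):
    frontier = [(0, start, [start])]            # (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # least-cost node
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for successor, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, successor, path + [successor]))
    return None

graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
print(uniform_cost(graph, "A", "C"))            # (2, ['A', 'B', 'C'])
```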

Arc Consistency

A CSP variable is arc-consistent if every element of its domain satisfies the variable's binary constraints. A graph is arc-consistent if every variable is arc-consistent with every other variable.
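
A sketch of the "revise" step used by arc-consistency algorithms such as AC-3; the domains and the not-equal constraint here are illustrative:

```python
def revise(domains, x, y, constraint):
    """Prune values of x that no value of y can satisfy."""
    revised = False
    for vx in list(domains[x]):
        if not any(constraint(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)   # vx has no support in y's domain
            revised = True
    return revised

domains = {"X": [1, 2, 3], "Y": [1]}
revise(domains, "X", "Y", lambda a, b: a != b)
print(domains["X"])                  # [2, 3]
```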

Wumpus World

Cave of rooms connected by passageways. The wumpus eats anyone entering its room. The agent can shoot the wumpus with an arrow, but has just one arrow. Some rooms have bottomless pits. The wumpus is too big to fall into a pit (they can coexist in a room). One room has gold for the taking.

Preference constraint

Constraint based on human preference. For example, if professor A prefers morning classes while professor B prefers afternoon classes, the objective function receives a greater reward for the greater satisfaction it provides each of the professors.

Global Constraints

Constraint placed across all domains of a problem, involving an arbitrary number of variables. For example: if the total number of people allowed on all flights in a day at an airport is 1,000, the capacity of an individual flight is irrelevant once the global limit has been reached.

Alpha-beta pruning

Decreases the number of nodes evaluated by the minimax algorithm by pruning branches that cannot influence the final decision.
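
A sketch over a small hard-coded game tree (the leaf values are illustrative); branches are cut as soon as they cannot change the result:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):             # leaf: utility value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                              # beta cutoff: prune
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                                  # alpha cutoff: prune
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))                        # 3
```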

Depth-Limited Search

Depth-first search with a limit l on the depth. If the goal is deeper than the limit (l < d), the search is incomplete. If the limit exceeds the goal's depth (l > d), the search is non-optimal.
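
Minimal sketch; the chain graph is invented example data (no cycle check, for brevity):

```python
def dls(graph, node, goal, limit):
    """Depth-first search that stops descending at the given limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                        # cutoff: goal may lie deeper
    for successor in graph.get(node, []):
        path = dls(graph, successor, goal, limit - 1)
        if path:
            return [node] + path
    return None

graph = {"A": ["B"], "B": ["C"], "C": []}
print(dls(graph, "A", "C", 1))             # None: goal beyond the limit
print(dls(graph, "A", "C", 2))             # ['A', 'B', 'C']
```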

Inference

Deriving new sentences from old. (like a proof in geometry)

Environmental Dimensions - Deterministic vs. stochastic

Deterministic: next state of the environment is completely determined by the current state and the action executed by the agent. Stochastic: Anything not deterministic; real situations typically too complex to keep track of all unobserved aspects.

Environmental Dimensions - Discrete vs. continuous

Discrete: a limited number of distinct, clearly defined percepts and actions (chess). Continuous: an infinite range of values (taxi).

Environmental Dimensions - Episodic vs Sequential

Episodic: The agent's experience is divided into atomic "episodes", each consisting of the agent perceiving and then performing a single action, and the choice of action in each episode depends only on the episode itself (rejecting parts on an assembly line). Sequential: Current decision could affect all future decisions; agent needs to think ahead (chess, taxi).

Genetic Algorithms

The fittest individuals in a population of candidate states are chosen for reproduction; offspring are formed by crossover and occasional random mutation.
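
A toy sketch that evolves bit strings toward all 1s; the fitness function, rates, and sizes are illustrative assumptions:

```python
import random

def fitness(bits):
    return sum(bits)                      # fitter = more 1s

def evolve(pop_size=20, length=8, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]    # fittest chosen to reproduce
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]     # single-point crossover
            if random.random() < 0.1:     # occasional mutation
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

print(evolve())                            # usually all (or nearly all) 1s
```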

Rational Agent

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Environmental Dimensions - Fully observable vs. partially observable

Fully: An agent's sensors give it access to the complete state of the environment at each point in time. Partially: Inaccurate sensors or missing data. Unobservable: no sensors.

State-Space Landscape

Graph plotting the value of a function against state.
Heuristic cost function: we want the global minimum
Objective function: we want the global maximum
Parts of the graph:
Flat region: shoulder
Highest point: global maximum
Locally highest point: local maximum
Circle: current state

Performance measure

How we evaluate an agent's behavior.

A* Search

Informed search that estimates the cost of the cheapest path through the graph as f(n) = g(n) + h(n): the cost to reach node n plus the estimated cost from n to the goal.
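
Sketch with an invented graph and heuristic table; nodes are expanded in order of f(n) = g(n) + h(n):

```python
import heapq

def astar(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for successor, step in graph.get(node, []):
            g2 = g + step                        # cost so far
            if g2 < best_g.get(successor, float("inf")):
                best_g[successor] = g2
                heapq.heappush(frontier,
                               (g2 + h[successor], g2, successor, path + [successor]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}               # admissible estimates
print(astar(graph, h, "S", "G"))                    # (5, ['S', 'B', 'G'])
```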

Greedy Best-First Search

Informed search using the heuristic alone as its evaluation function, expanding the node that appears closest to the goal.

Problem Components

Initial State: the state the agent starts in; ex: In(Jonesboro)
Actions: descriptions of those available to the agent; ex: Go(Paragould), Go(Harrisburg), Go(Hoxie)
Transition Model: description of action consequences; ex: Result(In(Jonesboro), Go(Paragould)) = In(Paragould)
Goal Test: can be an explicit set of states or abstract (checkmate); ex: In(Memphis)
Path Cost: function assigning numeric costs, reflecting the performance measure (distance); ex: map mileages

Environmental Dimensions - Known vs Unknown

Known environment: the rules of the environment (the outcomes of all actions) are known and understood by the agent. Unknown environment: the agent must learn how the environment works.

How does local beam search differ from k random restarts?

Local beam search performs a search with k random states: it looks at all of their successors and keeps the best k of them, so the parallel searches share information. k random restarts, on the other hand, does not look across successors; each run is completely independent and simply restarts from scratch.

Zebra Puzzle

Logic puzzle solved by deduction: assume assignments subject to difference constraints and evaluate whether each is possible.

Utility-based Agents

Makes decisions depending on how "happy" the outcome will make the agent. Chooses among several goals with uncertain achievability by weighing probability against importance.

Simple reflex agents

Makes decisions based on what it perceives about the environment and a basic set of condition-action rules. Only works if the environment is fully observable.

Vacuum-cleaner world

Percepts = {[A, Dirty], [A, Clean], [B, Dirty], [B, Clean]} Actions = {Left, Right, Suck, NoOp}
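
A sketch of the simple reflex agent from the card above, using these percepts and actions; the rule ordering is one reasonable choice:

```python
def reflex_vacuum_agent(percept):
    """Map each percept directly to an action via condition-action rules."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("A", "Clean")))   # Right
```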

Example - PEAS for Automated Taxi Driver

Performance Measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering wheel, accelerator, brake, signal, horn
Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

Where does AI come from?

Philosophy, Mathematics, Economics, etc. AI comes from a lot of subjects.

Lagrange point

Point in space where gravitational pulls cancel out; e.g., a point between the Earth and the Moon where their gravity is equal. It lies much closer to the Moon than to the Earth because of the difference in their masses.

Unexplored set

Potential nodes that are unexpanded/haven't been explored.

Frontier

Results of current or previous expansions. Frontier separates explored and unexplored regions of state-space graph.

Search Tree

Root (initial state), branch (action), node (state). A search tree that allows returning to a previous state can become infinite.

Hill-Climbing Search

Search algorithm that always moves uphill and terminates at a peak, which may be only a local maximum.
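
Toy sketch on an invented 1-D objective; the search moves to the best neighbor until no neighbor is better:

```python
def objective(x):
    return -(x - 3) ** 2          # single peak at x = 3

def hill_climb(x=0.0, step=0.1):
    while True:
        neighbors = [x - step, x + step]
        best = max(neighbors, key=objective)
        if objective(best) <= objective(x):
            return x               # peak reached: no uphill move left
        x = best

print(round(hill_climb(), 1))      # 3.0
```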

Local Search Algorithms

Search that assumes the path to the goal is not part of the solution; only the final state matters.

Depth-First Search

Search that recursively visits the deepest unexplored node in the tree.
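
Minimal recursive sketch; the graph is invented example data:

```python
def dfs(graph, node, goal, visited=None):
    if visited is None:
        visited = set()
    if node == goal:
        return [node]
    visited.add(node)
    for successor in graph.get(node, []):
        if successor not in visited:   # descend to deepest unexplored node
            path = dfs(graph, successor, goal, visited)
            if path:
                return [node] + path
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dfs(graph, "A", "D"))            # ['A', 'B', 'D']
```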

Informed Search

Search using problem-specific knowledge to speed up the search. Heuristic: the problem-specific knowledge used.

CNF - Conjunctive Normal Form

A conjunction of clauses, each of which is a disjunction of literals: boolean logic sentences built only from and's, or's, and not's, with no nested sub-sentences.

Knowledge Base

Set of sentences, assertions about world.

What are the four agent types?

Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents

Environmental Dimensions - Single agent vs. multiagent

Single agent: operates in the environment alone (crosswords). Multiagent: operates in the same environment as other agents. Competitive multiagent: chess, taxis (competing for parking). Cooperative multiagent: taxis (collision avoidance).

PEAS

Stands for Performance measure, Environment, Actuators, Sensors

Example problem components for Vacuum World

State: agent location & dirt locations
Initial State: any
Actions: Left, Right, Suck
Transition Model: apparent; note that Left in the left square, Right in the right square, and Suck in a clean square have no effect
Goal Test: all squares clean?
Path Cost: # steps in path (since each step costs 1)

Example problem components for 8-Queens

States: arrangements of 0 to 8 queens
Initial State: 0 queens
Actions: add a queen
Transition Model: board with the new queen
Goal Test: 8 queens on the board, none attacked

Example problem components for 8-Puzzle

States: locations of each of the eight numbered tiles
Initial State: any
Actions: L, R, U, or D movement of the blank space
Transition Model: e.g., L(5,b,6) = (b,5,6)
Goal Test: match with the desired state (e.g., ordered numbers)
Path Cost: # steps in path (each step costs 1)

Environmental Dimensions - Static vs Dynamic

Static: The environment is unchanged while an agent is deliberating. (crossword puzzle) Dynamic: not static; agent constantly being asked what to do; not having made a decision yet counts as deciding to do nothing. (taxi) Semidynamic: Environment itself does not change with the passage of time but the agent's performance score does (chess with a clock)

Wumpus World Sensors

Stench: perceived in rooms with the wumpus or adjacent to it
Breeze: perceived in rooms adjacent to a pit
Glitter: perceived in the room with the gold
Bump: perceived when running into a wall
Scream: perceived in all rooms when the wumpus is killed

Belief State

The set of possible states the agent could be in when entering a problem without sensors. (see picture of chapter 4 slide 31)

Environment

The world in which an agent lives

Node components

There are four node components: node's state in state space, node's parent that generated this node, action applied to parent to generate this node, and path cost from initial node.

Model-based reflex agents

To handle partial observability, the agent maintains internal state to keep track of the part of the world it cannot see now, since knowing the current state of the environment is not always enough to decide what to do. The agent program encodes two knowledge types:
- how the world evolves independently of the agent (e.g., other cars)
- how the agent's actions affect the world (effects of actions: drive -> move; turn wheel -> change direction)

Min-Max Game

Game tree for a two-player game: the first player (MAX) wants the largest value possible chosen, while the second player (MIN) wants the smallest. Since MIN will choose the smallest node within whatever branch is reached, MAX should choose the branch with the largest minimum.
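
A plain minimax sketch over a hard-coded tree (the leaf values are illustrative); MAX picks the branch whose minimum is largest:

```python
def minimax(node, maximizing):
    if isinstance(node, (int, float)):    # leaf: utility for MAX
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))                # 3: the best worst-case branch
```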

Bi-Directional Search

Two simultaneous searches: forward from the initial state and backward from the goal state. Goal: intersection of the frontiers.

Learning agents

Uses feedback on how the agent is doing and determines how to modify the performance element to do better in the future.

Actuators

What an agent uses to act upon its environment.

Sensor

What an agent uses to perceive its environment. (e.g., the 5 senses)

Boolean Logic

^ = and
v = or
~ = not
-> = implies
<-> = biconditional: a <-> b = (a -> b) ^ (b -> a)
Useful equivalences:
a -> b = ~a v b (implication elimination)
a -> b = ~b -> ~a (contraposition)
~(a v b) = ~a ^ ~b (De Morgan)
~(a ^ b) = ~a v ~b (De Morgan)
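
A quick truth-table check of these equivalences in Python (an illustration, not course material):

```python
from itertools import product

def implies(a, b):
    return (not a) or b                                   # a -> b = ~a v b

for a, b in product([False, True], repeat=2):
    assert implies(a, b) == implies(not b, not a)         # contraposition
    assert (not (a or b)) == ((not a) and (not b))        # De Morgan
    assert (not (a and b)) == ((not a) or (not b))        # De Morgan
    assert (implies(a, b) and implies(b, a)) == (a == b)  # a <-> b
print("all listed equivalences hold")
```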

Wumpus World Actuators

Move: go forward (moving into a wall results in no move)
Turn: rotate 90 degrees left or right
Grab: pick up the gold in the agent's room
Shoot: fire the arrow in the direction the agent faces; the arrow continues until it hits the wumpus or a wall; only the first Shoot has any effect
Climb: exit the cave from square [1,1]
The agent dies in a room with a live wumpus or a pit.

Search

Seeking a sequence of actions that reaches the goal.

Solution

Set of variable values satisfying all constraints.

How to represent map coloring problem?

Undirected graph for the visual; a 7x7 adjacency matrix for the data structure (a 2-dimensional array with 7 entries per dimension, where the first dimension is the source node and the second is the destination).
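
A sketch of that representation; the seven regions and edges below follow the familiar Australia map-coloring example and are assumptions, not taken from the slides:

```python
regions = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
edges = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
         ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

n = len(regions)
adj = [[0] * n for _ in range(n)]          # adj[source][destination]
index = {r: i for i, r in enumerate(regions)}
for a, b in edges:
    adj[index[a]][index[b]] = 1            # undirected: set both directions
    adj[index[b]][index[a]] = 1

print(adj[index["SA"]])                    # SA touches every mainland region
```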

