TDT4136: Classical Search


Iterative Deepening A*

Uses f-cost as the cutoff limit. Each iteration uses the smallest f-cost that exceeded the previous iteration's limit.

Path cost

A function that assigns a numeric cost to each path. Step cost is the cost of moving from a state by an action to its successor.

Search Tree

A graph with the initial state as the root, actions as branches, and successor states as nodes.

Admissible Heuristic

A heuristic that never overestimates the cost to reach the goal.

Leaf Node

A node with no children

Relaxed Problem

A problem with fewer restrictions on the actions.

Repeated State

A state that is exactly the same as a previously expanded state, reached via a loopy path.

Breadth-First Search

A strategy in which the root node is expanded first, then all the successors of the root node, then their successors, and so on. Complete if the goal is at a finite depth. Optimal if the path cost is a non-decreasing function of depth. Impractical for large problems because of exponential time and space requirements.
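
As an illustration (not from the original card set), here is a minimal Python sketch of breadth-first graph search. The successors(state) function and the hashable states are assumptions for the example: successors is taken to yield (action, next_state) pairs.

    from collections import deque

    def breadth_first_search(initial_state, goal_test, successors):
        """Return a list of actions from initial_state to a goal, or None."""
        if goal_test(initial_state):
            return []
        frontier = deque([(initial_state, [])])   # FIFO queue of (state, path)
        explored = {initial_state}                # avoid repeated states
        while frontier:
            state, path = frontier.popleft()      # expand the shallowest node
            for action, child in successors(state):
                if child not in explored:
                    if goal_test(child):          # goal test at generation time
                        return path + [action]
                    explored.add(child)
                    frontier.append((child, path + [action]))
        return None                               # no solution exists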

Consistency

AKA monotonicity. For every node n and every successor n' generated by an action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n': h(n) ≤ c(n, a, n') + h(n').

Uninformed Search

Algorithms that are given no information about the problem other than its definition.

Informed Search

Algorithms that are given some guidance on where to look for solutions, or that can tell whether one state is more promising than another. They use problem-specific knowledge beyond the definition of the problem itself.

Problem

Consists of five components: 1. Initial state 2. Actions 3. Transition model 4. Goal test 5. Path cost

Generating

Creating a new set of states by performing an action on a given node.

Goal Test

Determines if a given state is a goal state.

Optimality

Does the strategy find the optimal solution?

SMA*

Drops the leaf node with the highest f-value when memory is full. Backs up the forgotten node's f-value to its parent.

Pruned

Eliminating possibilities from consideration without having to examine them

Heuristic Function

Estimated cost of the cheapest path from the state at a node to a goal state.

A* Search

Evaluates nodes by combining the cost to reach the node and the estimated cost to get from the node to the goal, i.e. the estimated cost of the cheapest solution through the node. Complete and optimal. The tree-search version is optimal if the heuristic is admissible; the graph-search version is optimal if the heuristic is consistent.
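
As an illustration (not part of the original card set), a minimal sketch of A* graph search in Python. It assumes a hypothetical successors(state) yielding (action, next_state, step_cost) triples and a heuristic h(state); the goal test is applied at expansion, which is what makes the result optimal when h is consistent.

    import heapq
    from itertools import count

    def a_star_search(start, goal_test, successors, h):
        """Return (path_cost, actions) for a cheapest path, or None if unsolvable."""
        tie = count()                                  # tie-breaker so the heap never compares states
        frontier = [(h(start), next(tie), 0.0, start, [])]   # entries: (f, tie, g, state, actions)
        best_g = {start: 0.0}                          # cheapest known cost to reach each state
        while frontier:
            f, _, g, state, path = heapq.heappop(frontier)
            if g > best_g.get(state, float("inf")):
                continue                               # stale queue entry, skip it
            if goal_test(state):                       # tested at expansion, not generation
                return g, path
            for action, child, cost in successors(state):
                child_g = g + cost
                if child_g < best_g.get(child, float("inf")):
                    best_g[child] = child_g
                    f_child = child_g + h(child)       # f(n) = g(n) + h(n)
                    heapq.heappush(frontier, (f_child, next(tie), child_g, child, path + [action]))
        return None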

Best-first search

Expands nodes based on an evaluation function. Uses a priority queue ordered by estimated cost.

Depth-first search

Expands the deepest node. Implemented using a LIFO queue. Not optimal. Low space complexity.

Uniform Cost Search

Expands the node n with the lowest path cost (if all step costs are equal, this is identical to a breadth-first search)

Greedy Best First Search

Expands the node that appears closest to the goal according to the heuristic. The tree-search version is incomplete even in a finite state space.
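
A minimal sketch (my own illustration, under the same assumed successors/heuristic interface as the A* sketch above) of greedy best-first graph search: the frontier is ordered by h(n) alone. The incompleteness noted in the card refers to the tree-search variant; with an explored set, as here, the search terminates in finite state spaces but is still not optimal.

    import heapq
    from itertools import count

    def greedy_best_first_search(start, goal_test, successors, h):
        """Best-first search ordered by h(n) alone; fast, but not optimal."""
        tie = count()
        frontier = [(h(start), next(tie), start, [])]  # priority = heuristic value only
        explored = set()
        while frontier:
            _, _, state, path = heapq.heappop(frontier)
            if goal_test(state):
                return path
            if state in explored:
                continue
            explored.add(state)
            for action, child, _cost in successors(state):
                if child not in explored:
                    heapq.heappush(frontier, (h(child), next(tie), child, path + [action]))
        return None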

Uniform-cost search

Expands the node n with the lowest path cost g(n), implemented by storing the frontier as a priority queue. The goal test is applied when a node is expanded, not when it is generated. Expands nodes in order of their optimal path cost. Complete as long as every step cost exceeds some small positive constant.
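
For comparison with the A* sketch above, a minimal uniform-cost search sketch (same hypothetical successors interface). It is exactly A* with h(n) = 0: the priority queue is ordered purely by the path cost g(n), and the goal test is applied when a node is popped for expansion.

    import heapq
    from itertools import count

    def uniform_cost_search(start, goal_test, successors):
        """Expand the cheapest node first; the goal test at expansion keeps it optimal."""
        tie = count()
        frontier = [(0.0, next(tie), start, [])]       # priority = path cost g(n)
        best_g = {start: 0.0}
        while frontier:
            g, _, state, path = heapq.heappop(frontier)
            if g > best_g.get(state, float("inf")):
                continue                               # stale entry
            if goal_test(state):                       # tested at expansion, not at generation
                return g, path
            for action, child, cost in successors(state):
                child_g = g + cost
                if child_g < best_g.get(child, float("inf")):
                    best_g[child] = child_g
                    heapq.heappush(frontier, (child_g, next(tie), child, path + [action]))
        return None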

State Space

Initial state, actions, and transition model, represented by a graph. A path through this graph is a sequence of states connected by a sequence of actions.

Completeness

Is the algorithm guaranteed to find a solution when there is one?

Branching Factor

Maximum number of successors to any node

Redundant Paths

More than one way to get from one state to another.

Execution

Performing the action sequence recommended by a solution.

Actions

Possible moves that an agent can make. An action is applicable in a given state if it can be executed in that state.

Subproblem

A problem definition with a more general (less restrictive) goal than the original; the cost of an optimal solution to a subproblem is a lower bound on the cost of the original problem.

Abstraction

Removing details from a representation.

Search Strategy

The search algorithm's strategy for deciding which node to expand next.

Bidirectional Search

Two searches: one forward from the initial state and one backward from the goal. The goal test is replaced with a check for whether the two frontiers intersect. Requires a method of computing predecessors.

Recursive best-first search

Uses an f-limit variable to keep track of the f-value of the best alternative path available from any ancestor of the current node. On unwinding, it replaces a node's f-value with the best f-value of its children.

Touring Problem

Visit every location at least once; the Traveling Salesperson Problem is the special case that asks for the shortest such tour.

Exploration Problems

When the states and actions of the environment are unknown, the agent must act to discover them.

Problem Formulation

Deciding which actions and states to consider, given a goal.

Iterative Deepening Depth First Search

Depth-first search, but when all nodes within the current depth limit have been expanded and no solution has been found, the depth limit is increased.
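
A minimal sketch (an illustration, not from the original set) of iterative deepening built on a recursive depth-limited search; successors(state) yielding (action, next_state) pairs and the max_depth safety cap are assumptions of the example.

    def depth_limited_search(state, goal_test, successors, limit, path=()):
        """Depth-first search down to a fixed depth; returns a list of actions or None."""
        if goal_test(state):
            return list(path)
        if limit == 0:
            return None                                # cutoff reached
        for action, child in successors(state):
            result = depth_limited_search(child, goal_test, successors, limit - 1, path + (action,))
            if result is not None:
                return result
        return None

    def iterative_deepening_search(start, goal_test, successors, max_depth=50):
        """Run depth-limited search with limits 0, 1, 2, ... until a solution is found."""
        for limit in range(max_depth + 1):
            result = depth_limited_search(start, goal_test, successors, limit)
            if result is not None:
                return result
        return None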

IDA*

The difference between IDA* and iterative deepening is that rather than using depth as the cutoff, it uses f(n) = g(n) + h(n).
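
A minimal IDA* sketch (my own illustration, using the same assumed successors/heuristic interface as the A* sketch above): depth-first contour search with f(n) = g(n) + h(n) as the cutoff, where each new cutoff is the smallest f-value that exceeded the previous one.

    def ida_star_search(start, goal_test, successors, h):
        """Iterative deepening on f(n) = g(n) + h(n) instead of depth."""

        def contour_search(state, g, bound, path):
            f = g + h(state)
            if f > bound:
                return f, None                         # report the smallest f over the bound
            if goal_test(state):
                return f, list(path)
            next_bound = float("inf")
            for action, child, cost in successors(state):
                child_bound, solution = contour_search(child, g + cost, bound, path + (action,))
                if solution is not None:
                    return child_bound, solution
                next_bound = min(next_bound, child_bound)
            return next_bound, None

        bound = h(start)                               # first cutoff is the f-cost of the root
        while True:
            bound, solution = contour_search(start, 0.0, bound, ())
            if solution is not None:
                return solution
            if bound == float("inf"):
                return None                            # no solution exists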

A* Search

f(n) = g(n) + h(n) where g(n) is the cost to reach the node and h(n) is the estimated cost to get from the node to the goal

Uninformed

given no information about the problem other than its definition

Admissible Heuristic

h(n) never overestimates the cost to reach the goal

Sensorless Problems

If the agent has no sensors at all, then it could be in one of several possible initial states, and each action might therefore lead to one of several possible successor states.

Contingency Problems

If the environment is partially observable or if actions are uncertain, then the agent's percepts provide new information after each action. Each possible percept defines a contingency that must be planned for.

Recursive best-first search

A simple recursive algorithm that attempts to mimic the operation of standard best-first search, but using only linear space.

Route Finding Problem

is defined in terms of specified locations and transitions along links between them

n-puzzle

The object is to slide tiles on a board to reach a specified goal configuration.
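
As a concrete illustration (an example of my own, not from the original set): the Manhattan-distance heuristic for the 8-puzzle, which is admissible because every tile must move at least that many squares. States are assumed to be 9-tuples in row-major order with 0 as the blank.

    def manhattan_distance(state, goal):
        """Sum of tile distances from their goal squares; the blank (0) is not counted."""
        goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}   # tile -> (row, col)
        total = 0
        for i, tile in enumerate(state):
            if tile != 0:
                row, col = divmod(i, 3)
                goal_row, goal_col = goal_pos[tile]
                total += abs(row - goal_row) + abs(col - goal_col)
        return total

    # Example: a state one slide away from the usual goal ordering has heuristic value 1.
    # manhattan_distance((1, 2, 3, 4, 5, 6, 7, 0, 8), (1, 2, 3, 4, 5, 6, 7, 8, 0)) == 1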

Abstraction

The process of removing detail from a representation.

Breadth-First Search

The root node is expanded first, then all the successors of the root node are expanded next, and so on. Always expands the shallowest unexpanded node.

Depth-Limited Search

Same as depth-first search, but with a limit on the maximum depth allowed (not useful unless the maximum possible depth can be determined).

Informed

search algorithms that have some idea of where to look for solutions

Straight Line Distance

shortest distance between two points.

Triangle Inequality

stipulates that each side of a triangle cannot be longer than the sum of the other two sides

Search Algorithm

takes a problem as input and returns a solution in the form of an action sequence (formulate, search, execute)

Goal Test

test which determines if a given state is the goal state

Initial State

The state that the agent starts in.

Step Cost

The step cost of taking action a to go from state x to state y is denoted by c(x, a, y).

Greedy Best First Search

tries to expand the node that is closest to the goal

Bidirectional Search

Two simultaneous searches: one forward from the initial state and one backward from the goal state.

Goal

A desired outcome that is used to limit what an agent can and will do.

5 components of a problem description

1. Initial state 2. Description of possible actions 3. Transition model 4. Goal test 5. Path cost

Depth-limited search

A depth-first search with a predetermined depth limit. Incomplete if the limit is less than the depth of the shallowest goal; not optimal if the limit is greater than that depth.

Iterative Deepening Search

A depth-limited search that gradually increases the limit. Generates states multiple times.

Backtracking Search

A depth-first search that only generates one successor at a time.

Transition Model

A description of what each action does.

Problem Solving Agent

An agent that uses atomic representation to solve problems.

Planning agent

A goal-based agent that uses factored or structured representations.

Expanding

Applying the legal actions to the current state, thereby generating its successor states.

Goal formulation

Based on the current situation and the agent's performance measure. The first step of problem solving.

Time Complexity

How long does it take to find a solution?

Space Complexity

How much memory is needed to perform the search?

Open-loop

Ignoring the percepts received from an environment because the agent already knows what will happen based on the solution.

SMA*

Simplified memory-bounded A*: expands the best leaf until memory is full, at which point it drops the worst leaf.

Atomic Representation

States of the world are considered as wholes, with no internal structure visible to the problem-solving algorithm.

Pattern Database

Storing the exact solution cost for every possible instance of a subproblem.

disjoint pattern database

The combination of two or more subproblem pattern databases in which each database counts only the moves of its own tiles, so the costs can be added without double-counting while remaining admissible.

State Space

The initial state and the successor function implicitly define the state space (all states reachable from the initial state).

Branching Factor

The maximum number of successors of any node.

Depth

The number of steps along the path from the root to a given node.

Diameter

The maximum number of steps needed to reach any state from any other state.

Solution

The output, in the form of an action sequence, of a search algorithm that takes a problem as input: a path through the state space from the initial state to a goal state.

Total Cost

The path cost plus the search cost, which is the time complexity but can also include the space complexity.

Optimal Solution

The path with the lowest path cost among all solutions.

Parent Node

The predecessor(s) of a given node

Problem Formulation

The process of deciding what actions and states to consider, given a goal.

Search

The process of looking for a sequence of actions that reaches the goal.

Successor

The result states reachable from a given state by performing an action.

Frontier

The set of all leaf nodes available for expansion at a given point. Also known as the open list.

Explored Set

The set of all states that have been expanded. AKA closed list.

Initial State

The state the agent starts in.

Child Node

The successor(s) of a given node

Successor Function

A description of the possible actions available to the agent. Given a state, the successor function returns a set of <action, successor> pairs.

Consistency (Monotonicity)

A heuristic h(n) is consistent if, for every node n and every successor n' of n generated by an action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n': h(n) ≤ c(n, a, n') + h(n').

Pruned

A subtree that can be ignored, or pruned, without being examined, for example because an admissible h(n) guarantees it cannot contain an optimal solution.

Depth First Search

always expands the deepest node until the node has no successor

Path Cost

assigns a numeric cost to each path

Goal Formulation

based on the current situation and the agent's performance measure, is the first step in problem solving.

Measuring Problem Solving Performance

completeness, optimality, time complexity, space complexity

Problem Solving Agent

decides what to do by finding sequences of actions that lead to desirable states

