AI Final Theory

Heuristic search

- Heuristic search is a type of algorithm that is used to find the best solution to a problem by using a heuristic: a function used in search algorithms to assess and prioritize options at each branching step.
- It relies on the information at hand to determine the most promising path to pursue (see the sketch below).
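
Below is a minimal sketch of one simple heuristic search, greedy best-first search, which always expands the option the heuristic rates best. The `graph` and `heuristic` dictionaries are illustrative inputs, not something from the source.

```python
import heapq

def greedy_best_first(graph, heuristic, start, goal):
    """Always expand the frontier node with the lowest heuristic value.

    graph: dict mapping node -> list of neighbour nodes (illustrative format)
    heuristic: dict mapping node -> estimated cost to the goal
    """
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (heuristic[nxt], nxt, path + [nxt]))
    return None

# Toy example
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}
print(greedy_best_first(graph, heuristic, "A", "D"))  # ['A', 'C', 'D']
```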

Beam Search

- A generalization of hill-climbing search: hill climbing is beam search with a beam width of k = 1.
- A heuristic search algorithm that explores a graph by expanding the most promising nodes within a limited set: it orders all partial solutions according to some heuristic, but only a predetermined number of the best partial solutions are kept as candidates (see the sketch below).
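
A minimal beam-search sketch, assuming caller-supplied `successors` and `score` functions (both illustrative); with beam_width = 1 it behaves like hill climbing, as noted above.

```python
def beam_search(start, successors, score, beam_width, steps):
    """Keep only the `beam_width` best partial solutions at each step.

    successors(state) -> iterable of candidate next states (illustrative)
    score(state)      -> heuristic value; higher is better
    """
    beam = [start]
    for _ in range(steps):
        candidates = [nxt for state in beam for nxt in successors(state)]
        if not candidates:
            break
        # Order all partial solutions by the heuristic, keep the top k.
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(beam, key=score)

# Toy example: search for a large number by adding 1 or 2 each step.
best = beam_search(
    start=0,
    successors=lambda x: [x + 1, x + 2],
    score=lambda x: x,
    beam_width=2,
    steps=5,
)
print(best)  # 10
```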

Iterative deepening A* (IDA*) (General)

- A graph traversal and path-finding method that can determine the shortest route in a weighted graph between a defined start node and any one of a group of goal nodes.
- A kind of iterative deepening depth-first search that adopts the A* search algorithm's idea of using a heuristic function to assess the remaining cost to reach the goal.

Bayesian network (BN) (General)

- A probabilistic graphical model for representing knowledge about an uncertain domain, where each node corresponds to a random variable and each edge represents a direct probabilistic dependency between the corresponding random variables.

Markov model

- A stochastic model for randomly changing systems that possess the Markov property: the conditional probability distribution of future states of the process depends only on the present state, not on the sequence of events that preceded it.

Markov Decision Processes (MDPs) (General)

- A type of Markov model used when the system being represented is controlled.
- Full observability: the complete state is known.
- In HMMs we infer the behaviour or state of an agent (from partial observations), whereas in an MDP we want to control that behaviour: selecting actions such that they maximize the reward (or minimize the cost) over time.

Fuzzy Logic (General)

- Defined as a many-valued form of logic in which the truth value of a variable may be any real number between 0 and 1: a concept of partial truth.
- In classical logic values are either true or false, but in fuzzy logic concepts are not always strictly true or false. In this method we consider:
  • Fuzzy concept: close to door (distance)
  • Fuzzy modifiers: very close, quite close, close
  • Fuzzy sets: membership values, in the range [0.0, 1.0]

Markov Chains

- Depict all possible states and, between states, the transition rate: the probability of transitioning from one state to another per unit of time.
- Full observability (see the sketch below).
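
A small sketch of simulating a Markov chain from a transition table; the two weather states and their probabilities are made up for illustration.

```python
import random

# Illustrative transition probabilities P(next | current) for two weather states.
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def simulate(start, n_steps, seed=0):
    """Sample a trajectory: the next state depends only on the current state."""
    random.seed(seed)
    state, trajectory = start, [start]
    for _ in range(n_steps):
        next_states = list(transitions[state])
        weights = [transitions[state][s] for s in next_states]
        state = random.choices(next_states, weights=weights)[0]
        trajectory.append(state)
    return trajectory

print(simulate("sunny", 5))
```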

Beam Search Drawbacks

- Not complete: it might not find a solution.
- Not optimal: the solution it finds might not be the best one.

A* Search algorithm (General)

- Used in path-finding and graph traversal.
- Works by building a lowest-cost path tree from the start node to the target node.
- What makes A* different and better for many searches is that for each node it uses a function f(n) that gives an estimate of the total cost of a path through that node (defined further below).

Hidden Markov Models (HMMs) (General)

- Used to represent systems with some unobservable states.
- Represent observations and observation likelihoods for each state.

Hill Climbing

- Can be applied to a wide range of optimization problems, including those with extensive search spaces and intricate constraints.
- Very efficient in finding local optima, making it a good option for situations where a solution is required in a short time.
- Can be easily modified and extended to include additional heuristics or constraints (see the sketch below).
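
A minimal hill-climbing sketch: keep moving to the best-scoring neighbour until no neighbour improves the score. The `neighbours` and `score` callables are illustrative.

```python
def hill_climbing(start, neighbours, score, max_steps=1000):
    """Repeatedly move to the best neighbour until no neighbour improves the score."""
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=score, default=current)
        if score(best) <= score(current):
            return current  # local optimum reached
        current = best
    return current

# Toy example: maximize f(x) = -(x - 3)^2 over integer steps.
result = hill_climbing(
    start=0,
    neighbours=lambda x: [x - 1, x + 1],
    score=lambda x: -(x - 3) ** 2,
)
print(result)  # 3
```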

Greedy Search

- Takes the highest-probability candidate at each position in the sequence and predicts that in the output sequence.
- Choosing just one candidate at a step might be optimal at the current position, but as we move through the rest of the full sentence it might turn out to be worse than we thought, since we could not see the later predicted positions (see the sketch below).
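
A tiny sketch of greedy decoding: at every position, keep only the single most probable token. The `step_probs` distributions are made-up illustrative values.

```python
# Illustrative per-step probability distributions over a tiny vocabulary.
step_probs = [
    {"the": 0.6, "a": 0.4},
    {"cat": 0.55, "dog": 0.45},
    {"sat": 0.7, "ran": 0.3},
]

def greedy_decode(steps):
    """At each position, keep only the single most probable token."""
    return [max(probs, key=probs.get) for probs in steps]

print(greedy_decode(step_probs))  # ['the', 'cat', 'sat']
```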

Hill Climbing Drawbacks

1. Hill climbing can get stuck in local optima, meaning that it may not find the global optimum of the problem.
2. Hill climbing does not explore the search space very thoroughly, which can limit its ability to find better solutions.
3. It employs a greedy approach: it always moves in the direction in which the cost function is optimized, which causes the algorithm to settle at local maxima or minima.
4. No backtracking: a hill-climbing algorithm only works on the current state and succeeding (future) states; it does not look at previous states.

Hidden Markov Models (HMMs) (Specific)

An HMM model is defined by:
• A set of states S
• A set of observations O
• Prior probability P(S0)
• Transition probabilities P(St+1 | St)
• Observation probabilities P(Ot | St)
State probabilities depend on the previous state, but observations depend on the current state (see the sketch below).
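
A compact sketch of the forward algorithm for an HMM with the components listed above (prior, transition and observation probabilities); the rain/umbrella numbers are purely illustrative.

```python
def forward(observations, states, prior, trans, emit):
    """Compute P(observations) by summing over hidden state sequences.

    prior[s]     = P(S0 = s)
    trans[s][s2] = P(St+1 = s2 | St = s)
    emit[s][o]   = P(Ot = o | St = s)
    """
    # Initialise with the prior weighted by the first observation.
    alpha = {s: prior[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {
            s2: sum(alpha[s] * trans[s][s2] for s in states) * emit[s2][obs]
            for s2 in states
        }
    return sum(alpha.values())

states = ["rain", "dry"]
prior = {"rain": 0.5, "dry": 0.5}
trans = {"rain": {"rain": 0.7, "dry": 0.3}, "dry": {"rain": 0.3, "dry": 0.7}}
emit = {"rain": {"umbrella": 0.9, "none": 0.1}, "dry": {"umbrella": 0.2, "none": 0.8}}
print(forward(["umbrella", "umbrella"], states, prior, trans, emit))
```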

Markov Decision Processes (MDPs) (Specific)

An MDP model is defined as a tuple ⟨S, A, T, R⟩:
• S: states
• A: actions
• T: transition probabilities P(s' | s, a)
• R: reward function (or cost) R(s, a)
Objective: determine which action to execute for each state to maximize the long-term reward (or to minimize the cost).
Solution: a policy, a (complete) mapping from states to actions.
• It is not exactly a plan, since it is not a sequence of actions.
• With a policy, if the execution fails the agent can continue executing, since it provides an action for every state.
• The policy maximizes the expected reward (or minimizes the expected cost).
• There exists an optimal policy for every MDP (see the sketch below).
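
A minimal value-iteration sketch that turns a ⟨S, A, T, R⟩ model into a policy (one action per state); the two-state MDP at the bottom is an illustrative toy, and the data layout for T and R is an assumption.

```python
def value_iteration(S, A, T, R, gamma=0.9, eps=1e-6):
    """Return a policy: for each state, the action maximizing long-term reward.

    T[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
    """
    V = {s: 0.0 for s in S}
    while True:
        V_new = {
            s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a]) for a in A)
            for s in S
        }
        if max(abs(V_new[s] - V[s]) for s in S) < eps:
            break
        V = V_new
    # Extract the policy from the converged values.
    return {
        s: max(A, key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a]))
        for s in S
    }

S, A = ["s0", "s1"], ["stay", "go"]
T = {
    "s0": {"stay": [("s0", 1.0)], "go": [("s1", 1.0)]},
    "s1": {"stay": [("s1", 1.0)], "go": [("s0", 1.0)]},
}
R = {"s0": {"stay": 0.0, "go": 1.0}, "s1": {"stay": 2.0, "go": 0.0}}
print(value_iteration(S, A, T, R))  # {'s0': 'go', 's1': 'stay'}
```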

A* Search algorithm Function (More specific)

A* expands the least expensive paths first by using this function:
f(n) = g(n) + h(n)
where:
• f(n) = total estimated cost of the path through node n
• g(n) = cost so far to reach node n
• h(n) = estimated cost from n to the goal; this is the heuristic part of the cost function
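
A minimal A* sketch that expands nodes in order of f(n) = g(n) + h(n); the `graph` edge-cost dictionary and heuristic table `h` are illustrative inputs.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph[node] = list of (neighbour, edge_cost); h[node] = heuristic to goal."""
    # Frontier ordered by f(n) = g(n) + h(n).
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None, float("inf")

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 2)],
    "D": [],
}
h = {"A": 3, "B": 2, "C": 2, "D": 0}
print(a_star(graph, h, "A", "D"))  # (['A', 'B', 'C', 'D'], 4)
```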

Bayesian network (BN) (Specific)

A Bayesian network is defined as:
• A set of nodes
  - Each node represents a random variable
  - Variables can be either discrete or continuous
• A set of edges
  - An edge from node X to node Y means that X has a direct influence on Y
  - The graph is a Directed Acyclic Graph (DAG)
• Probability distributions
  - Each node X has a Conditional Probability Table (CPT) that defines the effects of its parents: P(Node | Parents(Node))
  - Edges from the parents of node X are the only edges directed into X
  - If a node has no parents, its distribution is the prior probability P(Node) (see the sketch below)
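
A small sketch of how such nodes, parents and CPTs can be stored and queried by enumerating the joint distribution; the rain/sprinkler/wet-grass numbers are a common textbook illustration, not taken from this set.

```python
from itertools import product

# Each node: (parents, CPT). The CPT maps parent-value tuples to P(node = True).
network = {
    "Rain":      ((), {(): 0.2}),
    "Sprinkler": (("Rain",), {(True,): 0.01, (False,): 0.4}),
    "WetGrass":  (("Sprinkler", "Rain"),
                  {(True, True): 0.99, (True, False): 0.9,
                   (False, True): 0.8, (False, False): 0.0}),
}

def joint(assignment):
    """P(assignment) = product over nodes of P(node | Parents(node))."""
    p = 1.0
    for node, (parents, cpt) in network.items():
        p_true = cpt[tuple(assignment[x] for x in parents)]
        p *= p_true if assignment[node] else 1.0 - p_true
    return p

def query(var, evidence):
    """P(var = True | evidence) by enumerating all complete assignments."""
    hidden = [n for n in network if n != var and n not in evidence]
    totals = {True: 0.0, False: 0.0}
    for value in (True, False):
        for combo in product([True, False], repeat=len(hidden)):
            assignment = {var: value, **evidence, **dict(zip(hidden, combo))}
            totals[value] += joint(assignment)
    return totals[True] / (totals[True] + totals[False])

print(query("Rain", {"WetGrass": True}))  # ≈ 0.358
```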

Iterative deepening A* (IDA*) (Specifics)

Main features:
1. It is a depth-first search algorithm, so it uses less memory than the A* algorithm.
2. IDA* is admissible as long as its heuristic never overestimates the cost of reaching the goal.
3. It focuses the search on the most promising nodes, so it does not go to the same depth everywhere (see the sketch below).
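
A compact IDA* sketch: a depth-first search bounded by f = g + h, with the bound raised between iterations; the `graph` and `h` inputs mirror the illustrative A* example above.

```python
def ida_star(graph, h, start, goal):
    """Depth-first search with an f = g + h cost bound that grows each iteration."""
    def search(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return f, None           # exceeded the bound: report the next threshold
        if node == goal:
            return f, path
        minimum = float("inf")
        for nxt, cost in graph.get(node, []):
            if nxt in path:
                continue             # avoid cycles on the current path
            t, found = search(nxt, g + cost, bound, path + [nxt])
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h[start]
    while True:
        bound, found = search(start, 0, bound, [start])
        if found is not None:
            return found
        if bound == float("inf"):
            return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 2)], "D": []}
h = {"A": 3, "B": 2, "C": 2, "D": 0}
print(ida_star(graph, h, "A", "D"))  # ['A', 'B', 'C', 'D']
```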

Fuzzy Logic (Specific)

To compute a fuzzy solution, we need to find out how to:
• Define terms such as near, right, slowly, very, etc.
• Combine terms using AND, OR and other connectives
• Use a single rule
• Combine all rules into one final output
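
A small sketch of the first two steps (defining a fuzzy term and combining terms); the triangular membership function, the "very" modifier as squaring, and min as AND are common conventions assumed here for illustration.

```python
def close_to_door(distance_m):
    """Membership in the fuzzy set 'close': 1.0 at the door, 0.0 beyond 2 m."""
    return max(0.0, min(1.0, 1.0 - distance_m / 2.0))

def very(membership):
    """A common fuzzy modifier: 'very' concentrates the set by squaring."""
    return membership ** 2

def fuzzy_and(a, b):
    """A standard AND connective: take the minimum of the memberships."""
    return min(a, b)

d = 0.5  # metres from the door
mu = close_to_door(d)
print(mu)                  # 0.75   -> partially 'close'
print(very(mu))            # 0.5625 -> less strongly 'very close'
print(fuzzy_and(mu, 0.9))  # 0.75
```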

