Questions
What is the most common way to structure a CSP?
Tree-structured CSPs, in which any two variables are connected by only a single path, can be solved efficiently. A general constraint graph can be reduced to a tree by choosing a cycle cutset S of variables: for each consistent assignment to S, we remove from the remaining variables' domains the values that are inconsistent with that assignment, and if the resulting tree CSP has a solution we return it together with the assignment for S.
What can be done to improve search quality within a non-deterministic search environment?
We can use an AND-OR search tree to gain insight into how the environment will react to the agent's actions.
How can search be done with partial observability?
We can utilize belief states to increase observability. Belief states use action-percept sequences to identify possible states.
What is search inference and how is it commonly done?
When a value is selected for a variable, forward checking makes inferences from that decision: values in other variables' domains that are inconsistent with the current assignment are removed.
What is a frontier and how is it commonly represented?
A frontier contains the leaf nodes currently available for expansion. The frontier is essentially a queue, and is typically represented as one (FIFO, LIFO, or priority, depending on the search).
What must be done before an agent can begin searching for a solution?
A goal must be established and a problem must be defined.
T/F: DFS is faster than BFS?
False
What is the iterative deepening algorithm?
Iterative deepening takes the best of breadth-first and depth-first search while including a dynamic depth limit. It runs a depth-limited depth-first search, expanding nodes until the limit is reached; if the goal has not been found, the limit is deepened (increased) and the search is repeated.
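A minimal Python sketch of iterative deepening (not from the original notes), assuming a hypothetical problem object with initial, is_goal, and successors:
def depth_limited(node, problem, limit):
    # Returns the goal node, 'cutoff' if the limit was hit, or None on failure.
    if problem.is_goal(node):
        return node
    if limit == 0:
        return 'cutoff'
    cutoff_occurred = False
    for child in problem.successors(node):
        result = depth_limited(child, problem, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None

def iterative_deepening(problem, max_depth=50):
    # Repeat depth-limited DFS with an ever-increasing limit.
    for limit in range(max_depth):
        result = depth_limited(problem.initial, problem, limit)
        if result != 'cutoff':
            return result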
What is local beam search?
Local beam search starts with k randomly generated states and generates the children of those k states. Once the children have been generated, the algorithm selects the k best states from all current states and continues.
What is local search?
Local search is common when path cost is not a focus and only obtaining a goal state matters. With local search we start from a single state rather than searching the entire search space.
What is the benefit of using local search?
Local search is useful when memory usage should be minimized and the path to the goal is not relevant.
What is the minimax algorithm?
Minimax is a tree-search algorithm commonly used for adversarial AI games. There are generally two players: MAX, who is trying to maximize the final utility, and MIN, who is trying to minimize MAX's future payoff.
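A minimal Python sketch of minimax (not from the original notes), assuming a hypothetical game object with is_terminal, utility, and successors:
def minimax(state, game, maximizing=True):
    # Returns the game-theoretic value of state; MAX picks the largest child value,
    # MIN picks the smallest.
    if game.is_terminal(state):
        return game.utility(state)
    values = [minimax(s, game, not maximizing) for s in game.successors(state)]
    return max(values) if maximizing else min(values)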
What must a computer be capable of in order to pass turing test?
NLP + Knowledge Representation + Automated Reasoning + Machine Learning
What is pruning?
Pruning is the process of discarding irrelevant nodes and the subtrees beneath them.
What is rationality in terms of AI?
Rationality is the basis for judging whether an action is right. An action is rational if it maximizes the expected value of the performance measure, given what the agent knows.
Episodic vs Sequential?
Sequential environments require memory of past actions to determine the next best action.
What are the drawbacks of graph search?
It is time (and memory) consuming due to the maintenance of the explored set.
T/F: A* is an admissible heuristic?
True (A* is optimal when its heuristic is admissible).
What types of constraints are there?
Unary, Binary, Global
How can an admissible heuristic be created?
1) Create and solve relaxed versions of the problem and derive heuristics from those solutions. 2) Identify the cost of subproblems of the original problem and derive a heuristic from that. 3) Learn from experience.
What is AI and why should it be studied?
AI is the scientific study of the underlying mechanisms needed for machines to exhibit thought and intelligent behavior. AI can be used virtually anywhere, from home assistants to medical equipment.
What is the alpha-beta pruning algorithm?
Alpha-beta pruning is a search algorithm generally used for adversarial AI games. It uses pruning to reduce the number of nodes that are examined/expanded, which lowers complexity. At a MAX node, the algorithm stops exploring and prunes when it identifies a value greater than or equal to beta. At a MIN node, it stops exploring and prunes when it identifies a value less than or equal to alpha.
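A minimal Python sketch of alpha-beta pruning (not from the original notes), using the same hypothetical game object as the minimax sketch above:
def alphabeta(state, game, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    # Minimax with pruning: MAX cuts off when a value >= beta is found,
    # MIN cuts off when a value <= alpha is found.
    if game.is_terminal(state):
        return game.utility(state)
    if maximizing:
        value = float('-inf')
        for s in game.successors(state):
            value = max(value, alphabeta(s, game, alpha, beta, False))
            if value >= beta:
                return value           # prune: MIN above will never allow this branch
            alpha = max(alpha, value)
        return value
    else:
        value = float('inf')
        for s in game.successors(state):
            value = min(value, alphabeta(s, game, alpha, beta, True))
            if value <= alpha:
                return value           # prune: MAX above will never allow this branch
            beta = min(beta, value)
        return value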
Known vs Unknown?
An environment is considered known if the agent understands the laws that govern the environment's behavior.
What is used to prevent infinite loops and redundancy of uninformed search algorithms?
An explored set can be implemented. The explored set contains information about already visited nodes. However, it will likely increase complexity due to the maintenance and checking of the set.
What is a performance measure?
Analysis of the agent's actions to identify whether they were desirable. Desirability depends on the environment; therefore, it is important that a performance measure be created based on the desired state of the environment, not on the desired actions of the agent.
What are the basic steps to solving a CSP?
Assign values to some or all variables; if the assignment does not violate any constraint, it is consistent. If it is not consistent, keep searching for an assignment that is.
Why use asymptotic analysis?
Asymptotic analysis allows us to understand how complexity grows as the input size n grows, which tells us how well our algorithms will run on various problems. Asymptotic analysis uses order terms; the highest-order term is used to estimate the growth (e.g., n^4 + n^2 + 2 is treated as approximately n^4).
What is backtracking and what algorithms are used for backtracking?
Backtracking uses depth-first search to choose values one variable at a time and backtracks when a variable has no legal values left to assign. Heuristics commonly used with backtracking are: minimum remaining values, degree heuristic, and least constraining value.
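A minimal Python sketch of backtracking search (not from the original notes), assuming a hypothetical csp object with variables, domains, and a consistent(var, value, assignment) check:
def backtracking_search(csp, assignment=None):
    # Assign one variable at a time with DFS; undo the assignment and back up
    # when a variable has no legal values left.
    assignment = assignment or {}
    if len(assignment) == len(csp.variables):
        return assignment
    var = next(v for v in csp.variables if v not in assignment)  # MRV could be used here
    for value in csp.domains[var]:                               # LCV could order these
        if csp.consistent(var, value, assignment):
            assignment[var] = value
            result = backtracking_search(csp, assignment)
            if result is not None:
                return result
            del assignment[var]        # backtrack
    return None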
How does the environment impact agent design?
Because there are many different variables associated with an environment, there are also many different techniques for implementing an agent so that it performs well in a specific environment.
What is a turing test?
A behavioral intelligence test of a machine's ability to exhibit intelligent behavior.
What is the complexity of bidirectional search?
Bidirectional search is complete so long as the branching factor is finite and the step cost is a positive constant. It is optimal assuming the step costs are identical and both searches use breadth-first search. The time and space requirements are O(b^(d/2)).
What is the complexity of Breadth First Search?
Breadth first is a complete algorithm so long as the branching factor is finite. It is also an optimal algorithm. The space and time reqs are O(b^d).
What is Breadth First Search?
Breadth first search starts with the children of the initial state and checks each child to see whether it is the goal state. Once all current children have been checked, breadth first search expands the first child in the queue, analyzes that node, and continues on. Therefore, it uses a First In First Out (FIFO) queue.
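A minimal Python sketch of breadth-first search with a FIFO frontier and an explored set (not from the original notes), assuming a hypothetical problem object with initial, is_goal, and successors:
from collections import deque

def breadth_first_search(problem):
    frontier = deque([problem.initial])    # FIFO queue
    explored = set()
    while frontier:
        node = frontier.popleft()          # the oldest (shallowest) node exits first
        if problem.is_goal(node):
            return node
        explored.add(node)
        for child in problem.successors(node):
            if child not in explored and child not in frontier:
                frontier.append(child)
    return None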
What is the difference between classical search and beyond classical search?
Classical search is done in well-defined environments, where information is known and actions are deterministic, compared to real-world scenarios which go beyond classical search.
What is the complexity for A* search?
Complete and optimal (with an admissible heuristic); time and space are exponential in the worst case.
What is the complexity for Memory Bounded search?
Complete and optimal if reachable.
What is the complexity for Greedy best first search?
Not optimal, and incomplete in general (complete only with graph search in a finite state space). Time and space are O(b^m).
What does it mean that a search algorithm is complete or optimal?
Complete, means that a search algorithm always finds a solution if one exists. Optimality, means that the search algorithm always finds the least cost path to the solution.
What algorithm did the Deep Blue chess champion use?
Heuristic game tree search algorithm
What is an improvement to hill climbing?
Hill climbing can be improved with random restart hill climbing or stochastic hill climbing.
What is hill climbing search?
Hill climbing search starts with a single state and repeatedly moves in the direction of steepest ascent; it continues making the steepest move until no better options are left and a peak has been reached.
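A minimal Python sketch of steepest-ascent hill climbing (not from the original notes), assuming a hypothetical problem object with initial, neighbors, and value:
def hill_climbing(problem):
    # Keep only the current state and always move to the best neighbor;
    # stop when no neighbor is better (a peak, possibly only a local one).
    current = problem.initial
    while True:
        neighbors = problem.neighbors(current)
        if not neighbors:
            return current
        best = max(neighbors, key=problem.value)
        if problem.value(best) <= problem.value(current):
            return current
        current = best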
What are the pro's and con's of hill climbing?
Hill climbing uses a small, constant amount of memory; however, it runs into local maxima issues, since the moment it hits its first peak it stops, even if that peak is not the global maximum.
What are the local search techniques?
Hill climbing, gradient ascent or descent, simulated annealing, local beam and stochastic beam search, genetic algorithms.
What types of local search are there?
Hill climbing, simulated annealing, local beam search or genetic algorithms.
Why does a search heuristic need to be admissible?
If a search heuristic is not admissible and the cost to the goal is overestimated, the search may prune the optimal path and return a suboptimal solution.
Single vs Multiple Agent?
If there is at least one other agent in the environment, it is a multi-agent environment.
How are inferences made for CSP's?
Inference for CSPs is called constraint propagation. Constraint propagation is done through consistency checks, the most common of which is arc consistency.
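A minimal Python sketch of the AC-3 arc-consistency algorithm (not from the original notes), assuming a hypothetical csp object with domains, neighbors, and a satisfies(x, vx, y, vy) binary-constraint check:
from collections import deque

def ac3(csp):
    queue = deque((x, y) for x in csp.domains for y in csp.neighbors[x])
    while queue:
        x, y = queue.popleft()
        # Remove values of x that have no supporting value in y's domain.
        revised = [vx for vx in csp.domains[x]
                   if not any(csp.satisfies(x, vx, y, vy) for vy in csp.domains[y])]
        if revised:
            for vx in revised:
                csp.domains[x].remove(vx)
            if not csp.domains[x]:
                return False               # a domain was emptied: the CSP is inconsistent
            for z in csp.neighbors[x]:
                if z != y:
                    queue.append((z, x))   # re-check arcs pointing at x
    return True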
What are the five parts of problem definition?
Initial state. Actions available. Transition model. Goal Check. Path Cost.
What searches were developed as improvements for Greedy Best search, A* search, and memory bounded?
Iterative deepening A*, recursive best first search, simplified memory bounded search.
What is the complexity of iterative deepening algorithm?
Iterative deepening is a complete algorithm so long as the branching factor is finite. The algorithm is optimal. The time req is O(b^d) and Space is O(b*d).
What is the least constraining value backtracking algorithm?
Least constraining value orders a variable's values by how many constraints they impose on neighboring variables, choosing the value that rules out the fewest values for the neighbors.
What is the difference between local and global search?
Local search does not systematically search the entire search space; rather, it starts from a single state and updates it.
What is the minimum-remaining value backtracking algorithm?
The minimum-remaining-values heuristic chooses the variable that has the fewest legal values remaining. In doing so it detects failure early, before other assignments leave this variable with no consistent values.
What is a partially observable game?
One where information provided is imperfect (i.e. card games)
What is the difference between informed and uninformed search?
The only information provided to an uninformed search is the problem definition. An informed search has additional information that assists in searching for the goal.
What characteristics must be understood when developing an agent?
Performance measure, Environment, Actuators, and Sensors (PEAS).
What is the difference between informed and uninformed search?
Uninformed search is only provided problem definition information, while informed search has additional information related to performing search.
What are the core activities for problem solving?
Use percepts to identify the initial state. Use the initial state to develop a goal. Use the initial state and goal to define a problem. Search for a sequence of steps that works toward or achieves the goal. Pop the next step and return it.
What are the characteristics of adversarial search and why is meta-reasoning important for adversarial search?
With adversarial search we are in an environment in which we can't systematically search the entire search space, actions are non-deterministic, the state is partially observable, and our solution can't be chosen before execution. Meta-reasoning is important for efficient AI because it is the process of reasoning about which computations are relevant.
What is backjumping?
With backjumping we maintain a conflict set and when we run into a failed search we go back to the most recent assignment within our conflict set.
What is stochastic hill climbing?
Rather than picking the steepest option, stochastic hill climbing selects randomly from all uphill moves.
How can search be done within a continuous space?
Within a continuous state space it is common to use gradient ascent or descent, as these operate on vector spaces.
What types of asymptotic analysis can be performed for complexity?
Worst case (upper bound), best case (lower bound) and average case.
When analyzing searches what are the function g(n) and h(n)?
g(n) is cost to get to that node, while h(n) is the heuristic function that estimates cost to reach goal.
What does a genetic algorithm do?
Genetic algorithms combine two states to generate a new state. They use crossover and mutation to establish new candidate solutions.
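A minimal Python sketch of the two genetic operators (not from the original notes), assuming states are represented as lists drawn from a hypothetical gene_pool:
import random

def crossover(parent_a, parent_b):
    # Splice the two parent states at a random crossover point.
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(state, gene_pool, rate=0.1):
    # Randomly replace elements to keep diversity in the population.
    return [random.choice(gene_pool) if random.random() < rate else g for g in state]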
What must be true for a CSP to be solved?
A CSP must be consistent and complete. This means that all variables must be assigned and values must not go against constraints.
Discrete vs Continuous?
A discrete environment has fixed locations or time intervals.
Full or partial?
A fully observable environment is one that the agent has access to all information in the environment relevant to its task.
What is a stochastic game?
A game that introduces non-deterministic elements.
What is a zero-sum game?
A game in which the total payoff to all players is the same for every outcome; with two players, one player's gain is exactly the other's loss (max vs. min).
For an agent what is the performance measure based on?
A performance measure is an outline of what is expected of the agent in the environment it is in. It is not based on desired actions.
Why does a search heuristic need to be admissible?
If a search heuristic is admissible, meaning it never overestimates the cost to reach the goal, then the search will be optimal; therefore, a search heuristic must be admissible.
Why worry about complexity of search algorithms?
Complexity and constraints are important to understand when analyzing algorithms, because even if a search algorithm completes correctly it may not do so in an optimal manner, or even in a way that is feasible. If the search algorithm requires too much space or time, it is infeasible to use.
What is benefit of using constraint satisfaction problems?
Constraint satisfaction problems can solve problems efficiently because they can remove possible values that violate constraints.
What are the seven key dimensions regarding an environment?
D-SOAKED: Deterministicness (Deterministic vs Stochastic), Staticness (Dynamic vs Static), Observability (full vs partial), Agents (Single vs Multiple), Knowledge (Known or unknown), Episodicness (Episodic vs Sequential), Discreteness (Discrete vs Continuous).
What are the advantages and disadvantages to depth first search?
DFS is not optimal or complete; however, DFS performs well in regards to space requirements for performing the search.
What is the degree heuristic backtracking algorithm?
The degree heuristic selects the variable involved in the largest number of constraints on other unassigned variables. In doing so it reduces the branching factor for future choices.
What types of uninformed searches are used for AI Agents?
Depth First Search, Breadth First Search, Iterative Deepening Search, Depth Limited Search, Uniform Cost Search, Bidirectional Search.
What is depth first search?
Depth first search creates a frontier from the child nodes of the initial state. It then expands the most recently added node and keeps going deeper, backing up only when a branch is exhausted. Therefore, it uses a Last In First Out (LIFO) queue, i.e. a stack.
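A minimal Python sketch of depth-first search with a LIFO frontier (not from the original notes), assuming the same hypothetical problem object as the breadth-first sketch above:
def depth_first_search(problem):
    frontier = [problem.initial]           # a plain list used as a stack
    while frontier:
        node = frontier.pop()              # last in, first out: the deepest node exits first
        if problem.is_goal(node):
            return node
        frontier.extend(problem.successors(node))
    return None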
What is the complexity for depth first search?
Depth first search is a non-complete algorithm that is not optimal. The time complexity for a DFS search is O(b^m) while space is O(b*m).
What is the depth-limited search?
Depth limited search is DFS with a limit variable included. The limit specifies the maximum depth; once the search reaches the depth given by the limit, it stops expanding deeper.
What is the complexity of depth-limited search?
Depth limited search is not complete and not optimal. The time requirement is O(b^l) and space is O(b*l), where l is the depth limit.
Deterministic vs stochastic?
Deterministic if next state is perfectly predictable given knowledge of previous state and agent's action.
What types of CSP's are there?
Discrete and finite; Discrete and infinite; Continuous
With Iterative Deepening A* search, what is used to cutoff search?
The f-cost = g(n) + h(n) (i.e., the estimated total path cost through the node).
T/F: The turing test defines the conditions under which a machine is said to be intelligent?
False
What are the pruning bounds for alpha-beta?
For max, if result is >= beta stop. For min, if result is <= alpha stop.
What is random restart hill climbing?
Random restart hill climbing performs hill climbing but when it hits a peak, it restarts at a random location.
What are the five common agent programs?
Simple reflex, model-based reflex, goal-based, utility-based, learning.
What is simulated annealing?
Simulated annealing selects random moves; if a move makes the state hotter (closer to the goal) it is accepted, and if it makes the state cooler (farther from the goal) it is accepted only with some probability, otherwise a new random move is selected.
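A minimal Python sketch of simulated annealing (not from the original notes), assuming a hypothetical problem object with initial, value, and random_neighbor, plus a schedule(t) function that returns the temperature at step t:
import math, random

def simulated_annealing(problem, schedule):
    current = problem.initial
    for t in range(1, 100000):
        temperature = schedule(t)
        if temperature <= 0:
            return current
        candidate = problem.random_neighbor(current)
        delta = problem.value(candidate) - problem.value(current)
        # Always take improving moves; take worsening moves with a probability
        # that shrinks as the temperature cools and as the move gets worse.
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
    return current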
What is simulated annealing?
Simulated annealing uses a temperature control variable to decide whether to accept a randomly generated move: improving (hot) moves are always accepted, while worsening (cold) moves are accepted with a probability that shrinks as the temperature cools, after which the algorithm continues working.
Static vs Dynamic?
Static environments do not change while the agent deliberates.
What is stochastic beam search?
Stochastic beam search performs the same process as local beam search; however, when the k successors are chosen, it selects them randomly (with probability proportional to their value).
What are the uniform cost search complexity?
The uniform cost search algorithm is complete so long as the branching factor is finite and step costs are positive. It is also optimal. The time and space requirements are O(b^(1 + C*/e)), where C* is the cost of the optimal solution and e is the minimum cost per action.
What is arc consistency?
A variable X is arc consistent with another variable Y if, for every value in X's domain, there is some value in Y's domain that satisfies the binary constraint between X and Y.
What is the Uniform Cost search algorithm?
Uniform cost search is a search algorithm that uses a priority queue, prioritizing nodes by their path cost. The node with the smallest path cost in the queue is expanded next.
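A minimal Python sketch of uniform cost search with a priority queue (not from the original notes), assuming a hypothetical problem object whose successors(node) yields (child, step_cost) pairs:
import heapq, itertools

def uniform_cost_search(problem):
    tie = itertools.count()                            # tie-breaker for equal costs
    frontier = [(0, next(tie), problem.initial)]       # ordered by path cost g(n)
    best_cost = {problem.initial: 0}
    while frontier:
        cost, _, node = heapq.heappop(frontier)        # cheapest path so far exits first
        if problem.is_goal(node):
            return node, cost
        for child, step_cost in problem.successors(node):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(child, float('inf')):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, next(tie), child))
    return None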
What scenarios must be true in order for bidirectional search to be optimal?
The simultaneous search must have the same step costs and both be done using breadth first search.
What is the relationship between an agent and its task environment?
The task environment is the world in which the agent operates.
What is the difference between classical search and beyond classical search?
Beyond-classical environments are nondeterministic and partially observable, the agent can't systematically search the entire search space, and the solution can't be selected before execution.
What are the ways a system can 'be' like to be considered AI?
Think or Act Humanly. Think or Act Rationally.