CSE 3521 Midterm 1
Open set for DFS
Stack
Open set for DLS
Stack where nodes at the limiting depth have no successors
Open set for IDS
Stack where nodes at the limiting depth have no successors
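The three cards above can be collapsed into one sketch: DLS is just stack-based DFS where nodes at the limiting depth get no successors, and IDS reruns DLS with a growing limit. This is a minimal illustration (function and variable names are mine, not from the course):

```python
def depth_limited_search(start, is_goal, successors, limit):
    """DLS: DFS with a stack, where nodes at the depth limit expand to nothing."""
    stack = [(start, [start])]  # (node, path so far); the stack is the open set
    while stack:
        node, path = stack.pop()
        if is_goal(node):
            return path
        if len(path) - 1 < limit:  # at the limiting depth: no successors pushed
            for child in successors(node):
                if child not in path:  # avoid cycles along the current path
                    stack.append((child, path + [child]))
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    """IDS: run DLS with limit 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, is_goal, successors, limit)
        if result is not None:
            return result
    return None
```

For example, on the graph `{'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}`, a limit of 1 cannot reach `D` (depth 2), while IDS finds `['A', 'B', 'D']` once the limit reaches 2.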
An inference procedure is complete if we can...
find a proof for any sentence that is entailed
If entailment is knowing that the needle is in the haystack, then inference is....
finding it
Agent = Architecture + Program
Architecture is the computing device + Program implements the agent function mapping of percepts to actions
Minimax algorithm
Determines the best move for agent MAX, taking into account that it's playing against agent MIN, which tries to minimize MAX's score
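A minimal sketch of the minimax recursion over a generic game tree (the helper names and the toy tree are mine, not from the course):

```python
def minimax(state, is_terminal, utility, successors, maximizing):
    """Return the minimax value of `state` from MAX's point of view."""
    if is_terminal(state):
        return utility(state)
    values = [minimax(s, is_terminal, utility, successors, not maximizing)
              for s in successors(state)]
    # MAX picks the largest child value; MIN picks the smallest
    return max(values) if maximizing else min(values)

# Toy two-ply tree: MAX at the root, MIN one level down
tree = {'root': ('L', 'R'), 'L': ('L1', 'L2'), 'R': ('R1', 'R2')}
leaves = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
v = minimax('root', lambda s: s in leaves, leaves.get,
            lambda s: tree[s], True)
# v == 3: MIN holds MAX to min(3,5)=3 on the left and min(2,9)=2 on the right,
# so MAX's best guaranteed value is max(3, 2) = 3
```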
Sensors
Devices that collect input from the environment and provide information that the CPU can respond to.
Four categories of AI:
(1) Thinking humanly, (2) Thinking rationally, (3) Acting humanly, (4) Acting rationally
Two central AI concepts
(1) how we represent knowledge and (2) how we reason with it
A problem is defined by four items...
(IsGAP) - Initial state, goal test, actions, path cost
PEAS description
- (P)erformance Measure - (E)nvironment - (A)ctuators - (S)ensors
A simple knowledge based agent needs to know what?
- current state of the world - how to infer things from percepts - how world evolves over time - what its goal is - what actions to take in various circumstances
Rationality is concerned with...
...expected success given what has been perceived
Define rationality
A "rational" system "does the right thing" whereas humans make mistakes
Admissible Heuristic
A heuristic that never overestimates the cost to reach the goal.
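As a concrete check of admissibility (my example, not from the course): Manhattan distance on a 4-connected grid with unit step costs can never overestimate, since each move changes one coordinate by exactly 1.

```python
def manhattan(a, b):
    """Manhattan distance between grid cells a and b.
    Admissible on a 4-connected grid with unit step costs: reaching b
    requires at least |dx| + |dy| moves, so this never overestimates."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# At least 5 unit moves separate (0, 0) from (2, 3)
estimate = manhattan((0, 0), (2, 3))  # 5
```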
Problem-solving agent
A kind of goal-based agent that decides what to do by finding sequences of actions that lead to desirable states
Proof
A record of inference procedure operations
A solution is...
A sequence of operators leading from the initial state to a goal state
Turing Test
A test proposed by Alan Turing in which a machine would be judged "intelligent" if the software could use conversation to fool a human into thinking it was talking with a person instead of a machine.
Performance measure
A way to evaluate the agent's success
Effectors
Devices through which the agent acts on its environment
Percept
Agent's perceptual inputs at any given instant
Agent/Robot
An Agent is anything that perceives its environment through sensors and acts upon that environment through effectors.
Rational agent
An agent which does the right thing
Skolem constant
An arbitrary constant substituted for an existentially quantified (there-exists) statement's variable
Game
Any multi-agent environment (Life is a game)
Time Complexity ordering across algorithms
BFS < IDS < DLS < Minimax w/ pruning, DFS = Greedy; A* depends heavily on the heuristic
Optimality across algorithms
BFS: yes; UCS: yes (given positive step costs); DFS: no; DLS: no; IDS: yes; Greedy: no; Minimax w/o pruning: yes; Minimax w/ pruning: optimal against an optimal opponent, and can do even better against a non-optimal opponent; A*: yes (with an admissible heuristic)
Completeness across algorithms
BFS: yes; UCS: yes; DFS: no; DLS: yes if the depth limit is big enough, no otherwise; IDS: yes; Greedy: no; Minimax w/o pruning: yes; Minimax w/ pruning: yes; A*: yes
Autonomous behavior
Behavior is determined by its own experience
Strong AI view
Build a mind in a computer
Knowledge
Codified experience of agents
Thinking humanly
Cognitive science. Requires understanding of the human brain (which can either be top-down = from the behavior to the rationale or bottom-up = from the neurological data to the behavior)
Percept "sequence"
Complete history of everything an agent has perceived
Simple reflex agents
Condition-action rules on current percept
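A sketch of condition-action rules on the current percept only, using the classic two-square vacuum world as the example (the world and names are my assumption, not necessarily the one from lecture):

```python
def simple_reflex_vacuum(percept):
    """Simple reflex agent: condition-action rules on the current percept
    alone, with no percept history. percept = (location, status) in a
    two-square vacuum world with squares 'A' and 'B'."""
    location, status = percept
    if status == 'Dirty':      # rule 1: dirty square -> clean it
        return 'Suck'
    if location == 'A':        # rule 2: clean at A -> move right
        return 'Right'
    return 'Left'              # rule 3: clean at B -> move left
```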
What are the four stages of a problem solving agent?
Decide on goal, formulate problem, search, execute solution
"Society of mind"
Each mind is made of many smaller agents
Dynamic
Environment can change while agent is thinking
Static
Environment doesn't change while agent is thinking
Agent environments
Environments that are partially observable, stochastic, sequential, dynamic, continuous, and multi-agent are HARDEST to model
Episodic
Everything is its own atomic moment and the state of one episode doesn't affect the state of the next
Ideal Rational Agent
For each possible percept sequence, do whatever action is expected to maximize its performance measure, using evidence provided by the percept sequence and any built-in knowledge
Causal rule
From causes to effects
Diagnostic rule
From effects to causes
Uninformed search
Given no information about problem (other than its definition)
Informed search
Given some idea of where to look for solutions (outside intuition)
An optimal solution...
Has lowest path cost
Space complexity ordering across algorithms
IDS < DLS < DFS < BFS < UCS
Monotonic
If new sentences are added to KB, all sentences entailed by original KB are still entailed by new KB
Non-autonomous behavior
If no use of percepts then system has no autonomy
Agent program
Implements the agent function for an agent
Learning agents
Improves over time
Weak AI view
Not a mind, but good intelligent process
What are the two things that make up a state space?
Initial state + actions available
Why is AI difficult?
Intelligence is not very well defined or understood
What is intelligence?
Intelligence is the computational part of the ability to achieve goals in the world
What is Artificial Intelligence?
It is the science and engineering of making intelligent machines, especially intelligent computer programs.
Stochastic
Not deterministic
Thinking rationally
Logic. What are correct reasoning processes?
Utility-based agents
Maximizes agent's happiness
KB ⊢i a
Means that sentence a is derived from KB by inference procedure i
Multi-agent
Multiple agents to worry about
Expert
Narrow specialization and substantial competence
For all x there exists y
Needs a Skolem function SK1(x) to produce the y, since the y may be different for each x
Are all admissible heuristics consistent?
No
Omniscience
Omniscient agent knows actual outcome of its actions and can act accordingly
Entailment
One fact follows logically from another
Single agent
Only one agent to worry about
α ≥ β
The alpha-beta prune condition. Pruning on α ≥ β (rather than strict α > β) can discard equally good moves, so it is only safe for Minimax strategies that break ties toward the earliest move
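A minimal alpha-beta pruning sketch over a generic game tree (names and the toy tree are mine, not from the course); expansion stops as soon as alpha meets or exceeds beta:

```python
def alphabeta(state, is_terminal, utility, successors,
              maximizing, alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning: stop expanding once alpha >= beta."""
    if is_terminal(state):
        return utility(state)
    if maximizing:
        value = float('-inf')
        for s in successors(state):
            value = max(value, alphabeta(s, is_terminal, utility, successors,
                                         False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # MIN above will never let play reach here
                break
        return value
    value = float('inf')
    for s in successors(state):
        value = min(value, alphabeta(s, is_terminal, utility, successors,
                                     True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:       # MAX above will never let play reach here
            break
    return value

tree = {'root': ('L', 'R'), 'L': ('L1', 'L2'), 'R': ('R1', 'R2')}
leaves = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
best = alphabeta('root', lambda s: s in leaves, leaves.get,
                 lambda s: tree[s], True)
# best == 3, the same value plain minimax returns; here R2 is pruned
# because R1 = 2 already makes the right subtree worse than alpha = 3
```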
Discrete
Operations are distinct and separate
Continuous
Options are not distinct and separate (moving through real values)
AI has so many ****ing foundations
Philosophy, mathematics, economics, neuroscience, psychology, computer science, control theory and cybernetics, computational linguistics
Logic
Precisely defined syntax and semantics for a knowledge representation language
Open set for UCS
Priority queue
Open set for BFS
Queue
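The open-set cards map directly onto standard Python containers; a small sketch of the BFS and UCS frontiers (node names are placeholders):

```python
from collections import deque
import heapq

# BFS open set: FIFO queue -- the shallowest node comes out first
frontier = deque(['start'])
frontier.append('child')
first_out = frontier.popleft()      # 'start' leaves before 'child'

# UCS open set: priority queue ordered by path cost g(n)
pq = []
heapq.heappush(pq, (5, 'expensive'))
heapq.heappush(pq, (2, 'cheap'))
cost, node = heapq.heappop(pq)      # lowest-cost entry (2, 'cheap') first
```

Swapping the container is the whole difference between the strategies: a stack gives DFS, a FIFO queue gives BFS, and a cost-ordered priority queue gives UCS.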
Acting rationally
Rational behavior = doing the right thing. Doesn't necessarily involve thinking (i.e., hot stove reflexes)
What two things do we need to make inferences?
Representation and reasoning. The point of our AI is to make inferences, so it needs the representation of the knowledge and the reasoning processes acting on the knowledge
For expressing knowledge as a language, we need what two things?
Semantics and syntax, just like any language
Partially observable
Sensors do not give complete state of environment at each time point
Fully observable
Sensors give complete state of environment at each time point
Why have a knowledge base?
So we can tell it things which we can ask it later (kind of like your mom)
Agent function
Specifies which action to take in response to any given percept sequence
Sequential
States bleed over and affect each other
Consistent Heuristic
h(n) ≤ c(n, n') + h(n') for every successor n' of n: the drop in heuristic value between connected states never exceeds the actual step cost
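The consistency condition can be checked edge by edge; a small sketch (function, edge list, and heuristic tables are my own example):

```python
def is_consistent(h, edges):
    """Check h(n) <= c(n, n') + h(n') for every edge (n, n', cost).
    A consistent heuristic with h(goal) == 0 is also admissible."""
    return all(h[n] <= cost + h[m] for n, m, cost in edges)

# Unit-cost chain A -> B -> G (G is the goal)
edges = [('A', 'B', 1), ('B', 'G', 1)]
h_good = {'A': 2, 'B': 1, 'G': 0}  # drops by exactly the step cost
h_bad = {'A': 3, 'B': 1, 'G': 0}   # drops by 2 across a cost-1 edge
```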
Deterministic
The next state of the environment is completely determined by current state + action of the agent
Categorization
Treat different things as equivalent - respond in terms of class membership rather than individually. Reduces complexity.
The Knowledge Hypothesis
To achieve a high level of problem solving competency, a symbol system must use a great deal of domain, task and case specific knowledge
Goal-based agents
Uses goals and planning to make a decision
Acting humanly
Turing Test. Making a computer imitate a human
Who is involved in building knowledge systems?
Users, experts, and knowledge engineers
Model-based reflex agents
Maintains an internal model of how the world evolves over time and combines it with the current percept to decide what to do
What is the "right thing/rational action"?
Whichever action that will cause the agent to be most successful (we need a way to measure success)
Are all consistent heuristics admissible?
Yes
Is knowledge definite for logical agents?
Yes
Does AI aim at human-level intelligence?
Yes!
Steps of a problem-solving agent
[Step 1] Formulate goal [Step 2] Formulate problem [Step 3] Search [Step 4] Execution phase [Step 5] Find a new goal (repeat)
A sentence is valid if...
it's true in every model (a tautology)
A sentence is satisfiable if...
it's true in some model
To move a quantifier outward through a negation we can use...
negation (quantifier duality: ¬∀x P ≡ ∃x ¬P and ¬∃x P ≡ ∀x ¬P)