Artificial Intelligence - Week 1
Convert input into variable
inp = input('Europe floor?')
usf = int(inp) + 1
print('US floor', usf)
immutable
(adj.) not subject to change, constant
To design a rational agent, we must specify the task environment including
- Performance measure - Environment - Actuators - Sensors (PEAS)
structure
Agent's __________________ can be viewed as Agent = Architecture + Agent Program
Percept
An agent's perceptual inputs at a given instant
actuators and sensors
Agents interact with environments through __________ and __________
Continuous
If all the percepts and actions cannot be defined beforehand, the environment is called continuous. ◼ Examples: taxi driving and an automated car driving system, where there could be a route from anywhere to anywhere else
Single agent
An agent operating by itself in an environment. ◼ Single agent: Tetris in single-player mode
Fully observable
An agent's sensors give it access to the complete state of the environment at each point in time, along with all information needed to complete its target tasks. Image recognition operates in fully observable domains. Other examples include the 8-puzzle, the blocks-world problem, and Sudoku puzzles
Discrete
An environment is said to be _______ if there are a finite number of actions that can be performed within it. Example: a game of chess or checkers, where there is a fixed set of possible moves
environments
PEAS descriptions define task ____________
Modulo
The % symbol in Python is called the Modulo Operator. It returns the remainder of dividing the left-hand operand by the right-hand operand, i.e., the remainder of a division problem. Examples: 7 % 2 = 1, 3 % 4 = 3
Episodic
The agent's experience is divided into atomic/discrete "episodes" ◼ each episode consists of the agent perceiving and then performing a single action, and the choice of action in each episode depends only on the episode itself. ◼ No link between the performances of an agent in different scenarios. ◼ Environments are simpler from the agent developer's perspective because the agent can decide what action to perform based only on the current episode ◼ Examples: ◼ An AI that looks at radiology images to determine if there is a sickness is an example of an episodic environment. One image has nothing to do with the next. ◼ Mail sorting system
Static
The environment remains unchanged while an agent is deliberating, considering, or performing some task. ◼ Example: ◼ Vacuum cleaner environment, chess without a clock
Tuple
_______ are used to store multiple items in a single variable. A ________ is a collection that is ordered and unchangeable. The key difference between tuples and lists is that tuples are immutable objects while lists are mutable: tuples cannot be changed, but lists can be modified. ________ are more memory efficient than lists. ________ are written with round brackets. _________ items are indexed; the first item has index [0], the second item has index [1], etc.
thist = ("apple", "banana", "cherry")
print(thist)
Lists
_________ are used to store multiple items in a single variable. A Python ____ may contain different types!
my = ["apple", "banana", "cherry", 1]
Job
_________ of AI is to design an agent program that implements the agent function
Agent Program
___________ _____________ is an implementation of the agent function.
Agent
_____________ in AI ◼ An _________ is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
Multiagent
_________________ is a computerized system composed of multiple interacting agents within an environment. ◼ It can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. ◼ Example: ◼ Multiagent: Tetris in two-player mode and automated driving
Modules
are files that can be imported into your Python program. ◼ Example is the math module ◼ we import a module to use its contents ◼ We precede all operations of math with math.xxx ◼ math.pi, for example, is pi. ◼ math.pow(x,y) raises x to the yth power.
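A minimal, runnable sketch of the math module usage described above:

import math   # import the module to use its contents

print(math.pi)          # the constant pi
print(math.pow(2, 3))   # raises 2 to the 3rd power -> 8.0
print(math.sqrt(16))    # square root -> 4.0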
class syntax
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age
function syntax
def myfunc(self):
    print("Hello my name is " + self.name)
Environments are categorized along several dimensions:
observable? deterministic? episodic? static? discrete? single-agent?
Interpreted languages, in contrast, must be
parsed, interpreted, and executed each time the program is run
print on same line
print("Hello World", end =" ") print(Hello World", end = "@") print("Hello World") Hello World Hello World@Hello World
matplotlib:
Python 2D plotting library that produces publication-quality figures in a variety of hardcopy formats; provides a set of functionality similar to that of MATLAB
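A minimal sketch of a matplotlib line plot (assumes matplotlib is installed; the data values are made up for illustration):

import matplotlib.pyplot as plt

x = [1, 2, 3, 4]            # sample x values (illustrative)
y = [1, 4, 9, 16]           # sample y values (illustrative)
plt.plot(x, y)              # draw a line plot, MATLAB-style interface
plt.xlabel('x')
plt.ylabel('y')
plt.title('Sample plot')
plt.savefig('sample.png')   # write the figure to a hardcopy format (PNG)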
Multi-line commenting
triple single or double quotes: ''' ''' or """ """
Adversarial Search
◼ Convention: ◼ the first player is called MAX, the second player is called MIN ◼ MAX moves first and they take turns until the game is over ◼ Winner gets a reward, loser gets a penalty ◼ Utility values are stated from MAX's perspective ◼ The initial state and legal moves define the game tree ◼ A game tree is a tree where the nodes are the game states and the edges are the moves by players. The game tree involves the initial state, an Actions function, and a Result function. ◼ MAX uses the game tree to determine its next move
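A minimal minimax sketch following the MAX/MIN convention above. The game interface (actions, result, terminal_test, utility) is a hypothetical placeholder matching the game-tree components listed here, not a specific library:

def minimax(state, game, maximizing):
    # Utility values are stated from MAX's perspective
    if game.terminal_test(state):
        return game.utility(state)
    values = [minimax(game.result(state, a), game, not maximizing)
              for a in game.actions(state)]
    # MAX picks the largest achievable value, MIN the smallest
    return max(values) if maximizing else min(values)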
Properties of Task Environment
◼ Fully observable vs. Partially observable ◼ Deterministic vs. Stochastic ◼ Episodic vs. Sequential ◼ Static vs. Dynamic ◼ Discrete vs. Continuous ◼ Single agent vs. Multiagent
Sequential
◼ In a ____________ environment, as the name suggests, previous decisions can affect later decisions ◼ the next action of the agent depends on what actions have been taken previously and what actions are expected in the future ◼ Examples: ◼ On Facebook, a posted message and the messages that follow it ◼ In automatic car driving, initiating a brake requires pressing the clutch ◼ In chess and checkers, a previous move can affect all later moves
Alternatively, you can run instructions one at a time using interactive mode.
◼ It allows quick 'test programs' to be written. ◼ Interactive mode allows you to write python statements directly in the console window
◼ Agent Function
◼ It is a map from the percept sequence to an action.
◼ Percept
◼ It is an agent's perceptual inputs at a given instant.
Behavior of Agent
◼ It is the action that agent performs after any given sequence of percepts.
Performance Measure of Agent
◼ It is the criterion that determines how successful an agent is.
Percept Sequence
◼ It is the history of everything the agent has perceived to date.
The source code editor can help programming by:
◼ Listing line numbers of code ◼ Coloring lines of code (comments, text...) ◼ Auto-indenting source code
Acts of Intelligence
◼ Perception ◼ Language ◼ Knowledge ◼ Reasoning ◼ Learning ◼ Robotics
Agent: Medical Diagnosis system
◼ Performance measure: Healthy patient, minimize costs ◼ Environment: Patient, hospital, staff ◼ Actuators: Screen display (questions, tests, diagnoses, treatments, referrals) ◼ Sensors: Keyboard (entry of symptoms, findings, patient's answers)
Agent types:
◼ Simple reflex agents ◼ Model-based reflex agents ◼ Goal-based agents ◼ Utility-based agents ◼ Learning agents
Problem Formulation - problem space
◼ State space - set of all states reachable from the initial state by a sequence of actions. ◼ Initial state - starting state. ◼ Actions - all possible actions available to the agent. ◼ Given a state s, Actions(s) returns the set of actions that can be executed in s ◼ Transition model - description of what each action does. A successor is any state reachable from a given state by applying a single action. ◼ Path - a sequence of actions causing you to move from one state to another. ◼ Path cost - some paths are more costly than others. We have some function that assigns cost to a path ◼ Goal test - how do we know we are in a goal state? Apply function to given state to see if it is goal state
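A rough sketch of how these problem-space components could look in code; the class and method names are illustrative, not from a specific library:

class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state      # starting state
        self.goal_state = goal_state

    def actions(self, s):
        # Actions(s): the set of actions that can be executed in state s
        raise NotImplementedError

    def result(self, s, a):
        # transition model: the successor state reached by applying action a in s
        raise NotImplementedError

    def goal_test(self, s):
        # apply a function to a given state to see if it is a goal state
        return s == self.goal_state

    def path_cost(self, cost_so_far, s, a, s_next):
        # assign a cost to a path; here every step costs 1
        return cost_so_far + 1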
Games as Search
◼ States: ◼ Initial state: ◼ Successor function: ◼ Terminal test: ◼ Utility function:
Problem Formation - 8 Queens
◼ States: any arrangement of 0-8 queens on the board is a state ◼ Initial state: no queens on the board ◼ Actions: add a queen to any empty square ◼ Path cost: Time ◼ Goal test: 8 queens are on the board, none attacked
Components:
◼ States: board configurations ◼ Initial state: the board position and which player will move ◼ Action function: returns possible moves ◼ Successor function: returns list of (move, state) pairs, each indicating a legal move and the resulting state ◼ Terminal test: determines when the game is over ◼ Utility function: gives a numeric value in terminal states (e.g., -1, 0, +1 in chess for loss, tie, win)
Model-based reflex agents
◼ They use a model of the world to choose their actions and maintain an internal state. ◼ Model − knowledge about "how things happen in the world". ◼ Internal state − a representation of unobserved aspects of the current state, based on percept history. ◼ Updating the state requires information about − ◼ how the world evolves ◼ what the agent's actions do / how they affect the world.
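A rough sketch of the model-based reflex idea: the internal state is updated from the model, the last action, and the new percept before a condition-action rule is chosen. All names here are illustrative placeholders:

class ModelBasedReflexAgent:
    def __init__(self, model, rules):
        self.model = model        # knowledge of how the world evolves
        self.rules = rules        # list of (condition, action) pairs
        self.state = None         # internal state: unobserved aspects of the world
        self.last_action = None

    def program(self, percept):
        # update the internal state using the model, the last action, and the percept
        self.state = self.model.update(self.state, self.last_action, percept)
        for condition, action in self.rules:
            if condition(self.state):     # first matching condition-action rule
                self.last_action = action
                return action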
Learning agents
◼ Learning allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow ◼ A __________ agent can be divided into four conceptual components: ◼ the learning element, which is responsible for making improvements ◼ the performance element, which is responsible for selecting external actions ◼ the critic, which gives feedback on how the agent is doing and determines how the performance element should be modified to do better in the future ◼ the problem generator, which is responsible for suggesting actions that will lead to new and informative experiences.
Stochastic means
◼ The environment can change while the agent is taking an action, hence the next state of the world does not depend merely on the current state and the agent's action. ◼ Most real-world AI environments are not deterministic (they are stochastic). ◼ Self-driving vehicles are a classic example of a stochastic AI process, as the agent cannot control the traffic conditions on the road. ◼ Other examples: the physical world, a robot on Mars
rational agent
A _________ ___________ should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful
Strong AI
A machine with strong A.I. is able to think and act just like a human. It is able to learn from experiences.
rational
A perfectly _______________ agent maximizes expected performance
programs
Agent _______________ implement (some) agent functions
!, @, #, $, %
An identifier CANNOT start with a digit or use special symbols like
Performance measure
An objective criterion for success of an agent's behavior
Printing a List of Items
Comma-separated: print("the answer is", 6 * 7)
Not
Correct or Not 123_ABC
Correct
Correct or Not Ab_123
Definition of Rational Agent
Definition of _________ ____________: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Integrated Development Environment (IDE)
Eclipse, PyCharm, Anaconda, Komodo, Wing, PyScripter, IDLE, text editor
agent function
Internally, the ___________ ______________ for an artificial agent will be implemented by an agent program.
Vacuum Actions
Left, Right, Suck
Weak AI
Machines with weak Artificial Intelligence are made to respond to specific situations, but cannot think for themselves
Actuators/Effectors
Moveable parts of Agent
Vacuum PEAS
Performance measure: clean, fast ◼ Environment: two places and one vacuum ◼ Actuators: actions ◼ Sensors: cameras and infrared range finders
interpreted
Python is an _____________ language, which means the source code of a Python program is converted into bytecode that is then executed by the Python virtual machine. Python is different from major compiled languages, such as C and C++, as Python code is not required to be built and linked like code for these languages.
Hence, rational ≠ successful
Rational -> exploration, learning, autonomy
Sensors
Something that detects events and changes in the environment
immutable
Strings are an _______________ sequence of characters
percept table
The __________ ______________ is an external characterization of the agent
agent program
The ___________ _______________ runs on some sort of computing device with physical sensors and actuators (called the architecture) to produce f [f: P* → A]. Agent = architecture + program
function
The agent _____________ describes what the agent does in all circumstances
partially observable
The entire state of the environment is not fully visible to the agent's sensors. Examples: a card game (some cards are facing down), a self-driving vehicle
Deterministic
The next state of the environment is completely determined by the current state and the action executed by the agent. ◼ In other words, deterministic environments ignore uncertainty. ◼ Example: Tic-Tac-Toe game, chess ◼ Note: If the environment is deterministic except for the actions of other agents, then the environment is strategic.
success
The performance measure describes the ___________ of agents and their actions
Utility-based agents
They choose actions based on a preference for each state. Goals are inadequate when − ◼ there are conflicting goals, of which only a few can be achieved ◼ goals have some uncertainty of being achieved and you need to weigh the likelihood of success against the importance of a goal.
Perception
Way in which something is understood or interpreted
insert a single quote
\'
Sets
_____ are used to store multiple items in a single variable.
this = {"apple", "banana", "cherry", "apple"}
print(this)
{'banana', 'cherry', 'apple'}
this1 = {"abc", 34, True, 40, "male"}
print(this1)
{True, 34, 40, 'male', 'abc'}
Rational ≠ omniscient
action outcomes may not be as expected
Pandas:
adds data structures and tools designed to work with table-like data
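A small sketch of pandas' table-like data structures (assumes pandas is installed; the data is made up):

import pandas as pd

# a DataFrame is a table-like structure with labeled columns
df = pd.DataFrame({'name': ['apple', 'banana', 'cherry'],
                   'price': [1.2, 0.5, 3.0]})
print(df.head())            # first rows of the table
print(df['price'].mean())   # simple column statistic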
Compile-time Errors
aka Syntax Errors ◼ Spelling, capitalization, punctuation ◼ Ordering of statements, matching of parenthesis, quotes... ◼ No executable program is created by the compiler ◼ Correct first error listed, then compile again. ◼ Repeat until all errors are fixed
SciPy
collection of algorithms for linear algebra, differential equations, numerical integration, optimization, statistics and more
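One example of the kind of routine SciPy provides, here numerical integration (assumes SciPy is installed):

from scipy.integrate import quad

# numerically integrate x**2 from 0 to 1 (exact answer is 1/3)
value, error_estimate = quad(lambda x: x**2, 0, 1)
print(value)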
An identifier
is a name given to entities like class, functions, variables, etc.
Vacuum Percepts
location and contents
agent function
maps from percept histories to actions
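A tiny sketch of an agent function realized as a table lookup from percept histories to actions, using the vacuum-world percepts and actions from this section; the table covers only length-one histories for brevity:

# agent function as a table: percept history (a tuple of percepts) -> action
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('B', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
}

percepts = []   # the percept history so far

def table_driven_agent(percept):
    percepts.append(percept)            # remember everything perceived to date
    return table.get(tuple(percepts))   # look up the action for that history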
Rational ≠ clairvoyant
percepts may not supply all relevant information
Several basic agent architectures exist:
reflex, model-based, goal-based, utility-based, learning
f-string
score = 92.65
print(f"The score is {score:4.2f}")
The score is 92.65
Dynamic
The environment changes while an agent is deliberating ◼ Example: ◼ driving ◼ The environment is semi-dynamic if the environment itself does not change with the passing of time, but the agent's performance score does ◼ Example: ◼ chess with a clock
Architecture
the machinery that an agent executes on.
NumPy
vectorization of mathematical operations on arrays and matrices, which significantly improves performance
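A small vectorization example with NumPy (assumes NumPy is installed):

import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([10, 20, 30, 40])
print(a * b)       # element-wise multiplication without an explicit Python loop
print(a.mean())    # operations apply to the whole array at once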
Ignoring a Line Break in a Long String
◼ Can split a long string (or a long statement) over several lines by using the \ continuation character as the last character on a line to ignore the line break.
print('this is a longer string, so we \
split it over two lines')
this is a longer string, so we split it over two lines
To use, or call, a function in Python you need to specify:
◼ The name of the function that you want to use (in the previous example the name was print) ◼ Any values (arguments) needed by the function to carry out its task (in this case, "Hello World!") ◼ Arguments are enclosed in parentheses and multiple arguments are separated with commas. ◼ A sequence of characters enclosed in quotation marks is called a string ◼ E.g., "Hello World"
What is rational at any given time depends on:
◼ The performance measure that defines the criterion of success. ◼ The agent's prior knowledge of the environment. ◼ The actions that the agent can perform. ◼ The agent's percept sequence to date
Simple reflex agents
◼ They choose actions based only on the current percept. ◼ They are rational only if a correct decision can be made on the basis of the current percept alone. ◼ Their environment is completely observable. ◼ Condition-action rule − a rule that maps a state (condition) to an action, defined by the percept.
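A minimal condition-action sketch for the vacuum world used in this section (percepts: location and contents; actions: Left, Right, Suck):

def simple_reflex_vacuum_agent(percept):
    location, status = percept     # percept = (location, contents)
    # condition-action rules based only on the current percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

print(simple_reflex_vacuum_agent(('A', 'Dirty')))   # -> Suck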
Goal-based agents
◼ They choose their actions in order to achieve goals. ◼ The goal-based approach is more flexible than a reflex agent since the knowledge supporting a decision is explicitly modeled, thereby allowing for modifications. Goal − a description of desirable situations.
Problem Formulation
◼ What actions and states to consider given the goal ◼ state the problem in such a way that we can make efficient progress toward a goal state.
Run-time Errors
◼ aka Logic Errors ◼ The program runs, but produces unintended results ◼ The program may 'crash'