AI Quiz 1
Stochastic
a model of the environment is stochastic if it explicitly deals with probabilities (e.g., "there's a 25% chance of rain tomorrow")
Percept
content an agent's sensors are perceiving
Sensors
Are used by agents to perceive the environment
Task environment
"problems" to which rational agents are the "solutions."
Utility-based agents
A general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent, based on its goals. Because "happy" does not sound very scientific, economists and computer scientists use the term utility instead. Utility provides a way in which the likelihood of success can be weighed against the importance of the goals.
Autonomy
A rational agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge
K-means clustering
Algorithm used in unsupervised learning. Clustering helps us understand our data by grouping similar things together into clusters.
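A minimal from-scratch sketch of the k-means loop, assuming 2-D points and a fixed number of iterations; the data values below are purely illustrative:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    recompute each centroid as the mean of its cluster, and repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: group points by nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = tuple(sum(dim) / len(c) for dim in zip(*c))
    return centroids, clusters

# Two well-separated 2-D blobs; centroids end up at the blob means
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(data, k=2)
```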
Partially observable
An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data—for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking.
Bayes' theorem
Bayes' theorem provides a principled way to calculate a conditional probability. The calculation itself is deceptively simple, yet it can be used to compute conditional probabilities in cases where intuition often fails.
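The theorem can be made concrete with a worked example. The numbers below (disease prevalence, test sensitivity, false-positive rate) are purely illustrative:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# Hypothetical scenario: a disease affects 1% of a population, a test
# detects it 90% of the time, and gives a false positive 5% of the time.
p_disease = 0.01
p_pos_given_disease = 0.90
p_pos_given_healthy = 0.05

# Law of total probability: overall chance of a positive test
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: probability of disease given a positive test
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
# About 0.15, far lower than most people guess: intuition fails here.
```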
Define "Acting Rationally" (the four quadrants)
Computational Intelligence is the study of the design of intelligent agents. AI... is concerned with intelligent behaviour in artifacts.
Information gathering
Doing actions in order to modify future percepts; this is sometimes called information gathering.
Environment class
Experiments are often carried out not for a single environment but for many environments drawn from an environment class. For example, to evaluate a taxi driver in simulated traffic, we would want to run many simulations with different traffic, lighting, and weather conditions. We are then interested in the agent's average performance over the environment class.
External/internal characterization of the agent
Externally, the agent's behavior can be characterized by a table that maps each percept sequence to an action. Internally, the agent is characterized by the agent program that implements that mapping.
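The external (table) characterization can be sketched as a table-driven agent; the percept values and table entries below are hypothetical:

```python
def table_driven_agent_factory(table):
    """Sketch of the external characterization: an agent fully specified
    by a lookup table from percept sequences to actions."""
    percepts = []  # internal record of everything perceived so far

    def agent(percept):
        percepts.append(percept)
        # Look up the action for the whole percept sequence to date
        return table.get(tuple(percepts))

    return agent

# A tiny hypothetical table for a two-step vacuum-world episode
table = {(("A", "Dirty"),): "Suck",
         (("A", "Dirty"), ("A", "Clean")): "Right"}
agent = table_driven_agent_factory(table)
```

Tabulating the mapping like this quickly becomes infeasible (the table grows with every possible percept sequence), which is why agents are implemented by agent programs instead.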
Rational Agent
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Multi-agent
For example, an agent playing chess is in a two-agent environment
Single-agent
For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment
Unknown "laws of physics" of the environment
For example, an unknown environment can be fully observable—in a new video game, the screen may show the entire game state but I still don't know what the buttons do until I try them.
Cooperative
For example, in a taxi-driving environment, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multiagent environment.
Competitive
For example, in chess, the opponent is trying to maximize its performance measure, which, by the rules of chess, minimizes the agent's performance measure. Thus, chess is a competitive multiagent environment.
Continuous state of environment
For example, taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi and of the other vehicles sweep through a range of continuous values and do so smoothly over time.
Discrete state of environment
For example, the chess environment has a finite number of distinct states (excluding the clock). Chess also has a discrete set of percepts and actions.
hierarchical clustering
Hierarchical clustering starts by treating each observation as a separate cluster. Then, it repeatedly executes the following two steps: (1) identify the two clusters that are closest together, and (2) merge the two most similar clusters. This iterative process continues until all the clusters are merged together. This is known as agglomerative hierarchical clustering.
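The two-step merge loop above can be sketched as follows, using single-linkage distance on illustrative 1-D points and stopping at a target number of clusters rather than merging all the way to one:

```python
def agglomerative(points, target_k):
    """Agglomerative hierarchical clustering (single linkage):
    start with one cluster per point, then repeatedly merge the
    two closest clusters until target_k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > target_k:
        # Step 1: identify the two closest clusters
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        # Step 2: merge the two most similar clusters
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters

groups = agglomerative([1.0, 1.2, 9.0, 9.1], target_k=2)
```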
Unobservable
If the agent has no sensors at all then the environment is unobservable
Fully observable environment
If an agent's sensors give it access to the complete state of the environment at each point in time, then we say the environment is fully observable
Dynamic environment
If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent
Static environment
If the environment cannot change while an agent is deliberating, then we say the environment is static for that agent
Semi-dynamic environment
If the environment itself does not change with the passage of time but the agent's performance score does, then we say the environment is semidynamic
Deterministic
If the next state of the environment is completely determined by the current state and the action executed by the agent(s), then we say the environment is deterministic
Nondeterministic
If the next state of the environment is not fully determined by the current state and the action executed by the agent(s), then we say the environment is nondeterministic
Minimax algorithm
Minimax is a kind of backtracking algorithm used in decision-making and game theory to find the optimal move for a player, assuming that the opponent also plays optimally.
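A minimal sketch of minimax on a hand-built two-ply game tree; the state names and payoffs are made up for illustration:

```python
def minimax(state, maximizing, utility, moves):
    """Minimax: at a leaf, return its utility; otherwise the maximizing
    player takes the highest-valued child and the minimizing player the
    lowest, assuming both sides play optimally."""
    children = moves(state)
    if not children:
        return utility(state)
    values = [minimax(c, not maximizing, utility, moves) for c in children]
    return max(values) if maximizing else min(values)

# Toy game tree: MAX moves at the root, MIN replies, leaves hold payoffs
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

value = minimax("root", True,
                utility=lambda s: leaves[s],
                moves=lambda s: tree.get(s, []))
# MAX compares min(3, 5) = 3 against min(2, 9) = 2 and picks 3
```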
Expected performance / Actual performance
Rationality maximizes expected performance, while perfection maximizes actual performance
Definiera "Acting Humanly" (fyra kvadranterna)
The art of creating machines that perform functions that require intelligence when performed by people. The study of how to make computers do things at which, at the moment, people are better.
Decision tree Algorithm
The decision tree algorithm belongs to the family of supervised machine learning algorithms and can be used for both classification and regression problems. The goal is to create a model that predicts the value of a target variable using a tree representation, in which each leaf node corresponds to a class label and each internal node tests an attribute.
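The attribute-selection step at the heart of tree building can be sketched with entropy and information gain. This shows only how the root split would be chosen, not full tree construction, and the weather data is a toy example:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attrs):
    """Pick the attribute whose split yields the largest information gain."""
    def gain(a):
        g = entropy(labels)
        for v in set(r[a] for r in rows):
            subset = [lab for r, lab in zip(rows, labels) if r[a] == v]
            g -= len(subset) / len(labels) * entropy(subset)
        return g
    return max(attrs, key=gain)

# Toy data: should we play outside, given two weather attributes?
rows = [{"outlook": "sunny", "windy": False},
        {"outlook": "sunny", "windy": True},
        {"outlook": "rainy", "windy": False},
        {"outlook": "rainy", "windy": True}]
labels = ["yes", "yes", "no", "no"]
root = best_attribute(rows, labels, ["outlook", "windy"])
```

Here "outlook" splits the labels perfectly (gain 1 bit) while "windy" tells us nothing (gain 0), so the tree would test "outlook" at the root.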
Define "Thinking Humanly" (the four quadrants)
The exciting new effort to make computers think... machines with minds, in the full and literal sense. [The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning.
kNN Algorithm
The k-nearest neighbors (kNN) algorithm is a simple, easy-to-implement supervised machine learning algorithm that can be used to solve both classification and regression problems.
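A minimal from-scratch kNN classifier sketch, using Euclidean distance and majority vote; the training data below is illustrative:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k training points
    nearest to it (squared Euclidean distance)."""
    neighbors = sorted(
        train,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query))
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy training set: (point, class label) pairs in two clusters
train = [((0, 0), "blue"), ((0, 1), "blue"), ((1, 0), "blue"),
         ((5, 5), "red"), ((5, 6), "red"), ((6, 5), "red")]
```

A query near one cluster, e.g. `knn_predict(train, (0.5, 0.5))`, is labeled by that cluster's majority class.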
Simple reflex agents
The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history
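The classic vacuum-world agent from the AIMA textbook illustrates this: the chosen action depends only on the current percept, with no memory of earlier percepts.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex vacuum agent: act on the current percept only.
    The percept is a (location, status) pair for a two-square world."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Clean square: move to the other square
    return "Right" if location == "A" else "Left"
```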
Define "Thinking Rationally" (the four quadrants)
The study of mental faculties through the use of computational models. The study of the computations that make it possible to perceive, reason and act.
What are the four quadrants (from lecture 1b)?
Thinking Humanly, Thinking Rationally, Acting Humanly, Acting Rationally.
Performance measure
When an agent is plunked down in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well. This notion of desirability is captured by a performance measure that evaluates any given sequence of environment states.
Agent program
a concrete implementation, running within some physical system
Agent
An agent is anything that perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance through learning or by using knowledge.
Environment
everything outside the agent itself: the part of the world the agent perceives through its sensors and acts upon through its actuators
Tabulating
arrange (data) in tabular form
Learning
As the agent gains experience, its prior knowledge may be modified and augmented. There are extreme cases in which the environment is completely known a priori and completely predictable. In such cases, the agent need not perceive or learn; it simply acts correctly.
Percept sequence
complete history of everything the agent has ever perceived
Exploration
An example of information gathering is the exploration that must be undertaken by a vacuum-cleaning agent in an initially unknown environment.
Software agent
Example: a software agent that trades on auction and reselling Web sites deals with millions of other users and billions of objects, many with real images.
A priori
formed or conceived beforehand
Model-based reflex agents
keep track of the part of the world it can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
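A sketch of how internal state changes the vacuum agent, assuming a two-square world; the belief model and the stopping rule below are illustrative choices, not a fixed standard:

```python
class ModelBasedVacuumAgent:
    """Model-based reflex agent sketch: it maintains an internal model
    of which squares it believes are clean, updated from each percept,
    so it can stop once it believes the whole world is clean."""

    def __init__(self):
        # Believed status of each square; None means "never observed"
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status  # update internal state from percept
        if status == "Dirty":
            self.model[location] = "Clean"  # it will be clean after sucking
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"  # the model says every square is clean
        return "Right" if location == "A" else "Left"
```

Unlike the simple reflex agent, this one can decide to stop, because its model records facts about squares it cannot currently see.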
Omniscience
omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality
Goal-based agents
the agent needs some sort of goal information that describes situations that are desirable; for example, for a taxi, being at the passenger's destination.
Episodic task environment
the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action
Actuators
The components an agent uses to act upon the environment; in a machine, the parts that enable movement
Sequential task environment
the current decision could affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences.
Known "laws of physics" of the environment
The outcomes (or outcome probabilities) for all actions are given. It is quite possible for a known environment to be partially observable; for example, in solitaire card games, I know the rules but am still unable to see the cards that have not yet been turned over.
PEAS (Performance, Environment, Actuators, Sensors)
used to specify the setting for an intelligent agent design
Consequentialism
we evaluate an agent's behavior by its consequences