Performance Measures

"The performance measure of a rational agent that defines the criterion of success ." (Hur bra en robot är på att utföra det den ska göra.) When a rational agent is plunked down in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well. This notion of desirability is captured by a performance measure that evaluates any given sequence of environment states. A performance measure evaluates any given sequence of environment states. A performance measure is a choice by the designer! A rational agent is an agent that, for each possible perceptselects an action that is expected to maximize its performance measure.

INS

(INERTIAL NAVIGATION SYSTEMS): Combine 3 accelerometers with 3 gyroscopes. Relative pose sensor: measures changes in the pose (x, y, z, roll, pitch, yaw). Extensively used in submarines, cruise missiles and robots.

PEAS

(Performance, Environment, Actuators, Sensors). PEAS can describe the task environment. 1. An agent perceives the environment through sensors, and acts on it through actuators. 2. The changes in the environment are evaluated with a performance measure.

gene

- A piece of information that encodes some properties of the agent - Transferred to the agent's children - E.g. "blue eyes"

learning agents

When you think of AI, you probably think of a learning agent. A learning agent in AI is the type of agent that can learn from its past experiences, i.e. it has learning capabilities and can be thought to act rationally. It starts out acting with basic knowledge and is then able to act and adapt automatically through learning. A learning agent has four main conceptual components: 1. Learning element: responsible for making improvements by learning from the environment. 2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard. 3. Performance element: responsible for selecting external actions. 4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

feedforward

Data enters at the inputs and passes through the network, layer by layer, until it arrives at the outputs. During normal operation, that is when it acts as a classifier, there is no feedback between layers. This is why they are called feedforward neural networks. The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction—forward—from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.
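As a minimal illustration (not from the course material), here is a sketch of a feedforward pass in Python/NumPy; the layer sizes and random weights are made-up assumptions.

```python
import numpy as np

def sigmoid(z):
    # Squash values into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical network: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    # Information flows one way: input -> hidden -> output, no loops
    h = sigmoid(W1 @ x + b1)   # hidden layer activations
    y = sigmoid(W2 @ h + b2)   # output layer activations
    return y

print(forward(np.array([0.5, -1.0, 2.0])))
```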

test loss and training loss

The loss function is one of the most important components of neural networks. Loss is simply the prediction error of the neural net, and the method used to calculate it is called the loss function. In simple terms, the loss is used to calculate the gradients, and the gradients are used to update the weights of the neural net (backpropagation). (Loss is a number indicating how bad the model's prediction was on a single example. If the model's prediction is perfect, the loss is zero; otherwise, the loss is greater. The goal of training a model is to find a set of weights and biases that have low loss, on average, across all examples.) During training, the ANN has an answer key to test its images against. During validation, it gets a new set of images where it has to apply what it learned during training. - If validation loss > training loss, the model is overfitted. - If validation loss < training loss, the model is underfitted. - Try to get the validation loss as close to 0 as possible.

accuracy

The accuracy of a machine learning classification algorithm is one way to measure how often the algorithm classifies a data point correctly.
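A tiny sketch of how accuracy is computed, with made-up labels:

```python
# Hypothetical predictions vs. true labels
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)   # fraction of points classified correctly
print(accuracy)                    # 5/6 ≈ 0.83
```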

bias

The bias value allows the activation function to be shifted to the left or right, to better fit the data. This occurs within the hidden layers. The bias provides every node with a trainable constant value (in addition to the normal inputs that the node receives).

data features

Features in a neural network are the variables or attributes in your data set. You usually pick a subset of variables that can be used as good predictors by your model. So in a neural network, the features would be the input layer, not the hidden layer nodes. E.g. If you were using a neural network to classify people as either men or women, the features would be things like height, weight, hair length etc. Each of these would have an initial value in meters, kilograms and so on, and would then be normalized and centered at zero (within-feature) prior to presentation to the system.

mutation

Like heredity, there are many types of mutation in a GA - Very dependent on the genome representation - Usually applied after the heredity process. Example (binary array): change a random number of bits in the array, as in the sketch below.
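A minimal sketch of bit-flip mutation in Python; the genome and mutation rate are made-up assumptions:

```python
import random

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability (bit-flip mutation)."""
    return [1 - bit if random.random() < rate else bit for bit in genome]

genome = [0, 1, 1, 0, 1, 0, 0, 1]
print(mutate(genome))
```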

sensors

"Enables perception of the surrounding world." Sensors include cameras, IR, sonar, ultrasound, radar and lidar. A combination of various sensors allows an AI robot to determine size, identify an object and determine its distance.

Accelerometer

Used for estimating change in position (x, y, z). - Common in airplanes, missiles and submarines since the 50's - Now common in game controllers and phones - How they work: 1. The force F is measured with some device 2. F = m*a ➔ a = F/m 3. Distance is computed by integrating the acceleration a twice - One accelerometer per axis gives (x, y, z) - Often implemented in silicon
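A rough sketch of the double integration in Python, assuming a made-up constant acceleration and simple Euler integration:

```python
# Dead-reckoning along one axis: integrate acceleration twice.
# a = F / m, then v += a*dt, x += v*dt (Euler integration).
dt = 0.01                      # sample period in seconds
accel_samples = [0.2] * 100    # hypothetical constant acceleration (m/s^2) for 1 s

v, x = 0.0, 0.0
for a in accel_samples:
    v += a * dt    # first integration: velocity
    x += v * dt    # second integration: position
print(v, x)        # ~0.2 m/s and ~0.1 m after 1 second
```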

embodied cognition

Views intelligence as an emergent property of the interaction between the world and the agent. Argues that intelligence cannot be separated from its physical implementation, meaning that there are alternatives to the Sense-Think-Act metaphor.

continuous problem

● "cannot be divided into separate states. The environment may change continuously (weather conditions),the agent may be receiving percepts continuously (infrared sensor) and/or acting in a continuous manner. E.g Taxi driving. There could be a route from to anywhere to anywhere else. Self Driving cars operate in a continuous AI environment, continuous updates are therefore required."

discrete problem

● "have a limited [although arbitrarily large] set of possibilities/moves that can drive the final outcome of the task. E.g. A sudoku grid has a limited number of possible states; and a game of chess. "

Cooperative behaviors

"Behavioral fusion" The representation of behaviors must allow fusion: - Vector addition such as in potential - fields and schema based approaches - Vector addition with dynamic gains - Fuzzy logic

percept

"Exakt vad ser agenten med sin sensor i den stunden". Percept refers to the agent's perceptual inputs at a given time instant; Ideally, an intelligent agent´s actions depend on what it has perceived.

advanced structures representation

"Om detta, så detta - logik" Even more internal structure, exactly what depends on the problem but they're often relations either of components of the model to itself, or components of the model to components of the environment. Relationships between the objects of a state can be explicitly expressed. AI algorithms: first-order logic, knowledge-based learning, natural language understanding. If dogge eat all food -> dogge happy & human sad If dogge not eat all food -> dogge sad & human happy= natural language and logic Each state can be described in a model, where we can represent objects and relationships between them (e.g like with natural language, or with first order logic). This representation is more detailed than assigning values to attributes. We do not need to describe every state with the same attributes, only with the same language."Detta beskriver alltså relationen mellan två olika states."

medium factored representation

"jag ska städa både din och min lägenhet. Have more internal structure, although exactly what will depend on the problem. Båda är smutsiga och för att få båda rena måste jag först rengöra min ELLER din." In a sliding tile puzzle, for instance, this might be a simple heuristic; like the number of tiles out of place. States are described by a fixed set of variables or attributes, each of which can have a value. Easier to see how to transition from one state to another (i.e. which attributes need to be changed in order to reach the new state). För att uppnå D, behöver man antingen gå från A till B eller från A till C och vidare. We can represent uncertainty by leaving an attribute blank.

reproduction

- Asexual reproduction - Copies one genotype to the child - Sexual reproduction - Combines two parent genomes in the child - Both asexual and sexual reproduction include random changes (mutations) - Two important factors: Heredity - information transfer from parents - do what we know works; Mutation - explore new possibilities. There is always a trade-off: - Too much inheritance will make the population very sensitive to changes and unlikely to find new solutions to the problem - Mutation will slow down evolution, and too much mutation will make it converge to random search. Compare with the definition of adaptation.
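A minimal sketch of sexual reproduction (single-point crossover) followed by mutation; the binary genomes and rates are made-up assumptions:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: the child inherits a prefix from one parent
    and the rest from the other (heredity)."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome, rate=0.05):
    # Small random changes keep exploring new possibilities
    return [1 - g if random.random() < rate else g for g in genome]

child = mutate(crossover([0, 0, 0, 0, 1, 1], [1, 1, 1, 1, 0, 0]))
print(child)
```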

selection

- Selection for reproduction - Usually implemented as a fitness function - A phenotype that is fit to the environment is more likely to reproduce - Calculated on the phenotype, not the genotype! - In natural evolution, this process is called natural selection - From an evolutionary perspective, whether you are strong, healthy and skilled is only important if it helps your genes to reproduce.
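A small sketch of fitness-proportionate (roulette-wheel) selection in Python; the population and fitness function are made-up assumptions:

```python
import random

def select(population, fitness_fn):
    """Fitness-proportionate (roulette-wheel) selection:
    fitter phenotypes are more likely to reproduce."""
    scores = [fitness_fn(p) for p in population]
    return random.choices(population, weights=scores, k=1)[0]

# Hypothetical fitness: number of ones in the genome
population = [[0, 1, 1], [1, 1, 1], [0, 0, 1]]
parent = select(population, fitness_fn=sum)
print(parent)
```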

Phenotype

- The complete set of observable traits connected to a specific genotype - The actual frog, "the whole frog". Phenotype (from Greek pheno- 'showing', and type 'type') is the term used in genetics for the composite observable characteristics or traits of an organism. The term covers the organism's morphology or physical form and structure, its developmental processes, its biochemical and physiological properties, its behavior, and the products of behavior. The fitness function is calculated on the phenotype.

genotype

- The set of all genes in the agent - The frog's genes, "the frog's properties/attributes". A genotype is an organism's set of heritable genes that can be passed down from parents to offspring. The genes take part in determining the characteristics that are observable (the phenotype) in an organism, such as hair color, height, etc. An example of a characteristic determined by a genotype is the petal color of a pea plant.

action oriented fission

: For competing sensors. The difference compared to fusion is that each percept generates its own behavior/action (fission), which are then combined into one action, whereas fusion produces a single behavior.

key performance criteria

A Key Performance Indicator (KPI) is a measurable value that demonstrates how effectively a company is achieving key business objectives. Organizations use KPIs at multiple levels to evaluate their success at reaching targets. In projects, a KPI is a quantitative measure used to evaluate project performance against expected results; KPIs confirm that the project has achieved its objectives. KPIs are measures that can be used to demonstrate how effectively an organization is achieving its strategic and operational goals.

knowledgebase

A Knowledge Base (KB) is a set of sentences (formula) that are all connected with conjunction (∧), describing a situation in which the world is in.

sigmoid function

A Sigmoid function is a mathematical function which has a characteristic S-shaped curve. All sigmoid functions have the property that they map the entire number line into a small range such as between 0 and 1, or -1 and 1, so one use of a sigmoid function is to convert a real value into one that can be interpreted as a probability. Sigmoid functions have become popular in deep learning because they can be used as an activation function in an artificial neural network. They were inspired by the activation potential in biological neural networks. The main reason why we use sigmoid function is because it exists between (0 to 1). Therefore, it is especially used for models where we have to predict the probability as an output. Since probability of anything exists only between the range of 0 and 1, sigmoid is the right choice. The function is differentiable.
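A short sketch of the sigmoid and its derivative in Python:

```python
import math

def sigmoid(z):
    """S-shaped curve mapping the whole number line into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    # Differentiable everywhere, which is what backpropagation needs
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid(-5), sigmoid(0), sigmoid(5))   # ≈ 0.007, 0.5, 0.993
```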

Deep NN

A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers. We make the network deeper by increasing the number of hidden layers. For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed.

multilayer

A multilayer perceptron (MLP) is a perceptron that teams up with additional perceptrons, stacked in several layers, to solve complex problems. Consider an MLP with three layers: each perceptron in the first layer on the left (the input layer) sends outputs to all the perceptrons in the second layer (the hidden layer), and all perceptrons in the second layer send outputs to the final layer on the right (the output layer).

action oriented (fusion)

Action-oriented sensor fusion decomposes a robot's intended actions into motor behaviors. A motor behavior typically requires the perception of some object, act, or event.

Searle's Chinese Room

An argument against Artificial Intelligence involving a room, pieces of paper with questions in Chinese, and a Chinese lexicon containing all possible answers in Chinese.

activation function

An activation function is a function that is added into an artificial neural network in order to help the network learn complex patterns in the data. When comparing with a neuron-based model such as the one in our brains, the activation function is what ultimately decides what is to be fired to the next neuron. An activation function outputs a small value for small inputs, and a larger value if its inputs exceed a threshold. If the inputs are large enough, the activation function "fires", otherwise it does nothing. In other words, an activation function is like a gate that checks that an incoming value is greater than a critical number.

embodied agents

An embodied agent is an agent that exists in a real physical environment.

artificial neurons

Artificial neurons are also called units or nodes. They are the computational units of an Artificial Neural Network (ANN). These neurons are connected through links (edges), and these links have a strength. Artificial neurons compute an output as a function of all inputs from surrounding nodes.

Backpropagation

Back-propagation is the essence of neural net training. It is the practice of fine-tuning the weights of a neural net based on the error rate (i.e. loss) obtained in the previous iteration. Proper tuning of the weights ensures lower error rates, making the model reliable by increasing its generalization.
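A hedged sketch of the idea for a single sigmoid neuron with squared-error loss; all values are made up, and real backpropagation applies the same chain rule layer by layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid neuron, one training example (all values made up)
x, target = np.array([1.0, 2.0]), 1.0
w, b, lr = np.array([0.1, -0.2]), 0.0, 0.5

for step in range(3):
    y = sigmoid(w @ x + b)                 # forward pass
    loss = 0.5 * (y - target) ** 2
    # Backward pass: chain rule gives dLoss/dw and dLoss/db
    delta = (y - target) * y * (1.0 - y)
    w -= lr * delta * x                    # fine-tune the weights
    b -= lr * delta
    print(step, round(float(loss), 4))     # loss shrinks each iteration
```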

bias

Biases represent how far off the predictions are from their intended value. Biases make up the difference between the function's output and its intended output. - A high bias suggests that the network is making more assumptions about the form of the output - A low bias value makes fewer assumptions about the form of the output.

disadvantage of teleoperation

Cognitive tiredness - Caused by delay between issued command and perceived response - Simulator sickness - Similar to motion sickness, but operator not moving - Long (expensive) training - Delays

Sensor Fusion

Combining information from multiple readouts into a single percept. Can be both multiple readouts from the same sensor or multiple readouts from different sensors.

behaviors within subsumption

Competitive coordination with two mechanisms: - Inhibition. Prevents a signal from being transmitted (I) - Suppression. Blocks and replaces a signal (S)

relation between sensors

Complementary sensors: - All sensors are needed for the task - Provide disjoint types of information - E.g.: an infrared sensor and an ordinary eye to create a percept that can distinguish a mouse from a frog. Redundant (competing) sensors: - The task can be solved with any one of them - Physically redundant: return the same signals - Logically redundant: return the same percept but with different modalities, e.g. IR reflex sensors and an ultrasonic sensor. Coordinated sensors: - Used for the same percept but in different situations

wheel odometry

Compute relative motion by estimating motion from shaft encoders, speedometer, steering angle, ... Example: "We have moved for 1 second at 35 cm/sec while turning 0.6 rad/sec. This results in a pose change of ..." - Wheel encoders output "ticks" as the wheel rotates. The number of ticks and the wheel diameter give the distance moved. Problems: - The wheels move but not the robot (spinning) - The robot moves but not the wheels (slipping, sliding) - The ground is not flat - All this causes drift when accumulating pose changes (the same problem occurs with all kinds of dead reckoning: it is hard to account for the robot drifting or losing traction).
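A rough sketch of accumulating pose changes from speed and turn rate (the example figures above); the time step and simple Euler update are assumptions:

```python
import math

def odometry_step(x, y, theta, v, omega, dt):
    """Accumulate a relative pose change from wheel measurements.
    v = forward speed (m/s), omega = turn rate (rad/s), dt = time step (s)."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# "Moved for 1 second at 35 cm/s while turning 0.6 rad/s"
pose = (0.0, 0.0, 0.0)
for _ in range(10):                       # 10 steps of 0.1 s
    pose = odometry_step(*pose, v=0.35, omega=0.6, dt=0.1)
print(pose)   # in reality drift accumulates (slipping, uneven ground)
```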

property of the environment

Environmental properties include those physical properties which relate to the environment. The environment in AI is everything that surrounds the agent but not the agent itself. The environment in which an AI functions can be considered as the entity that the agent uses to make sense of things around it, in order to eventually act upon them and effectively solve a problem. E.g. Proximity sensors: direction and distance of objects around the robot; Position sensors: (x, y, z) for the robot; Attitude sensors: (roll, pitch, yaw) for the robot; Speed: change in pose; Ambient light level; Temperature; Identity of objects

action oriented (fashion)

For coordinated sensors. If you have three sensors that give three percepts each, then you select the one that fits best for the particular situation. First the lion uses its smell, then its eyes, and then its mouth. It selects the best function for that situation.

Artificial General Intelligence

General AI would be able to function in a humanlike way. However, AGI does not exist yet. There are no agents/AI that can behave appropriately in every situation. The AI should be able to sense, plan and act appropriately in the current situation. It should also be able to adapt to new situations and learn what is expected there. AGI looks for a universal algorithm for learning and acting in any environment.

gradient descent

Gradient Descent is an optimization algorithm for finding a local minimum of a differentiable function. Gradient descent is simply used to find the values of a function's parameters (coefficients) that minimize a cost function as far as possible.
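A minimal sketch of gradient descent on the made-up cost f(x) = (x - 3)^2:

```python
def grad(x):
    # Derivative of the cost f(x) = (x - 3)^2, which has its minimum at x = 3
    return 2.0 * (x - 3.0)

x, lr = 0.0, 0.1
for _ in range(50):
    x -= lr * grad(x)     # step downhill along the negative gradient
print(round(x, 4))        # ≈ 3.0, a (local) minimum
```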

the evolutionary process

In computational intelligence (CI), an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection.

cross talk

In electronics, cross talk is any phenomenon by which a signal transmitted on one circuit or channel of a transmission system creates an undesired effect in another circuit or channel. Cross talk is usually caused by undesired capacitive, inductive, or conductive coupling from one circuit or channel to another.

blackbox, transparency

In science, computing, and engineering, a black box is a device, system or object which can be viewed in terms of its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings (and that could be an issue). - wiki This is generally an issue because it is difficult to diagnose errors or biases with no interpretability, forcing users to blindly trust the algorithm.

types of classification

In the terminology of machine learning, classification is considered an instance of supervised learning, i.e., learning where a training set of correctly identified observations is available. An algorithm that implements classification, especially in a concrete implementation, is known as a classifier.
▪ Binary classification: classification tasks that have two class labels. Examples include email spam detection (spam or not) and churn prediction (churn or not).
▪ Multiclass classification: in machine learning, multiclass or multinomial classification is the problem of classifying instances into one of three or more classes.
▪ Regression: regression models are used to predict a continuous value. Predicting the price of a house given its features (size, location, etc.) is one of the common examples of regression. It is a supervised technique.
▪ Ranking:
▪ Structured prediction: structured prediction or structured (output) learning is an umbrella term for supervised machine learning techniques that involve predicting structured objects. For example, the problem of translating a natural language sentence into a syntactic representation such as a parse tree can be seen as a structured prediction problem in which the structured output domain is the set of all possible parse trees. Structured prediction is used in a wide variety of application domains including bioinformatics, natural language processing, speech recognition, and computer vision.

agent

In artificial intelligence, an intelligent agent is an autonomous entity that observes its environment through sensors, acts upon it using actuators, and directs its activity toward achieving goals. Intelligent agents may also learn or use knowledge to achieve their goals.

Breadth First Search

It explores all the nodes at the present depth before moving on to the nodes at the next depth level.
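A short sketch of breadth-first search in Python on a made-up graph:

```python
from collections import deque

def bfs(graph, start, goal):
    """Explore all nodes at the current depth before going deeper."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()         # FIFO queue = breadth-first
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))   # ['A', 'B', 'D']
```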

potential fields disadvantages

Jerky motion (depends on update rates) - Robot and obstacles treated as points (=> risk of collision) - The robot is expected to change velocity and direction instantaneously - Sensitive to local minima. Three approaches: - Injecting randomness - Adding an "avoid-past" schema - Navigation templates - Cyclic behavior. One approach: - Adding an "avoid-past" schema that generates a repulsive force from recently visited areas

LSTM

Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. It can not only process single data points (such as images), but also entire sequences of data (such as speech or video). For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition and anomaly detection in network traffic or IDSs (intrusion detection systems)

Definition of Machine Learning

Machine learning is a field within artificial intelligence, and thus within computer science. It is about methods for using data to "train" computers to discover and "learn" rules for solving a task, without the computers having been programmed with rules for that specific task. Machine learning is needed for tasks that are too complex for humans to code directly! Machine learning is a highly iterative process that: - improves behaviour (performance, which can be measured by accuracy) - on a given task - by experience. The machine learning process usually requires lots of computational resources (GPUs, RAM, time). It can also be difficult to know how to improve its performance.

gyroscopes

Measure rotation with one, two or three DOF. Mechanical gyroscopes: - The orientation remains fixed, regardless of any motion of the platform - Based on the physical property of conservation of angular momentum. Fiber-optic gyroscopes: - Estimate angular velocity - Based on the Sagnac effect - Often implemented in silicon - One gyro per axis gives (roll, pitch, yaw)

convolutional NN (CNN)

Networks including convolutional layers are called convolutional neural networks (CNNs). Their key property is that they can detect image features such as bright or dark (or specific color) spots, edges in various orientations, patterns, and so on. These form the basis for detecting more abstract features such as a cat's ears, a dog's snout, a person's eye, or the octagonal shape of a stop sign. It would normally be hard to train a neural network to detect such features based on the pixels of the input image, because the features can appear in different positions, different orientations, and in different sizes in the image: moving the object or the camera angle will change the pixel values dramatically even if the object itself looks just the same to us. In order to learn to detect a stop sign in all these different conditions would require vast amounts of training data because the network would only detect the sign in conditions where it has appeared in the training data. So, for example, a stop sign in the top right corner of the image would be detected only if the training data included an image with the stop sign in the top right corner. CNNs can recognize the object anywhere in the image no matter where it has been observed in the training images.
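As an illustration of the sliding-filter idea, here is a sketch of a plain 2D convolution in NumPy; the image and kernel are made up, and in a real CNN the kernel values are learned during training:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over the image; the same feature detector is
    applied at every position, which is why a CNN can spot the same
    pattern anywhere in the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
vertical_edge = np.array([[1.0, -1.0]])   # responds to left-to-right changes
print(convolve2d(image, vertical_edge))
```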

Turing Test

One method of determining the strength of artificial intelligence, in which a human tries to decide if the intelligence at the other end of a text chat is human.

overfitting

Overtraining occurs if the neural network is too powerful for the current problem. It then does not "recognize" the underlying trend in the data, but learns the data by heart (including the noise in the data). This results in poor generalization and too good a fit to the training data. - The robot becomes too specialized, extremely narrow, and therefore loses its applicability to any other dataset.

pattern recognition

Pattern recognition is the automated recognition of patterns and regularities in data. It has applications in statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Pattern recognition has its origins in statistics and engineering; some modern approaches to pattern recognition include the use of machine learning, due to the increased availability of big data and a new abundance of processing power. However, these activities can be viewed as two facets of the same field of application, and together they have undergone substantial development over the past few decades.

perceptron

Perceptron is a single-layer neural network; a multi-layer perceptron is called a neural network. "Perceptrons are simply computational models of a single neuron."
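A minimal sketch of a single perceptron trained with the perceptron learning rule on the AND function; the learning rate and epoch count are assumptions:

```python
# Minimal single-neuron perceptron trained on the AND function
def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):
    for x, target in data:
        error = target - predict(weights, bias, x)   # perceptron learning rule
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([predict(weights, bias, x) for x, _ in data])   # [0, 0, 0, 1]
```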

RNN (recurrent neural networks)

Recurrent Neural Networks (RNNs) are a type of neural network where the output from the previous step is fed as input to the current step. RNNs are mainly used for: sequence classification - sentiment classification & video classification; sequence labelling - part-of-speech tagging & named entity recognition. A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable-length sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.

robustness

Robustness is the capacity of a method to remain unaffected by small, deliberate variations in method parameters; a measure of the reliability of a method. Robustness should be evaluated in late development, or early in the method validation process. Robustness can be defined as "the ability of a system to resist change without adapting its initial stable configuration". E.g. For a machine learning algorithm to be considered robust, either the testing error has to be consistent with the training error, or the performance is stable after adding some noise to the dataset.

satisfiability

Satisfiable and unsatisfiable formula. If there is no interpretation that can satisfy the formula, then the formula is not satisfiable. Satisfiability refers to the existence of a combination of values to make the expression true. So in short, a proposition is satisfiable if there is at least one true result in its truth table, valid if all values it returns in the truth table are true.

neuron

The computational unit of the brain. Activity in one neuron can spread to other neurons through synapses and action potentials. An action potential is a qualitative change that is the result of a continuous change of the system variables. - The relation between neuron activity is non-linear. Neurons can be seen as agents: - They have a body, sensors (the dendrites) and actuators (synapses that connect to other neurons)

DFF

The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction—forward—from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.

KNN

The k-nearest neighbors (KNN) algorithm is a simple, easy-to-implement supervised machine learning algorithm that can be used to solve both classification and regression problems. The KNN algorithm assumes that similar things exist in close proximity. In other words, similar things are near to each other. The most significant difference between regression and classification is that while regression helps predict a continuous quantity, classification predicts discrete class labels. There are also some overlaps between the two types of machine learning algorithms.
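A small sketch of KNN classification by majority vote; the training points and k are made-up assumptions:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a query point by majority vote among its k nearest neighbours."""
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    labels = [label for _, label in by_distance[:k]]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical 2D points labelled "a" or "b"
train = [((1, 1), "a"), ((1, 2), "a"), ((2, 1), "a"),
         ((6, 6), "b"), ((7, 6), "b"), ((6, 7), "b")]
print(knn_predict(train, (2, 2)))   # "a"
print(knn_predict(train, (6, 5)))   # "b"
```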

effectors

The parts the robot/AI uses to manipulate its surroundings (e.g. tools, gripping claw, magnets) An effector is any device that affects the environment. A robot's effector is under the control of the robot. Effectors can range from legs and wheels to arms and fingers.

control unit

The robot's "brain". Control systems allow for the movement and function of various parts of the robot, as well as execute a specific set of motions and forces in the presence of unforeseen errors

embodied AI

The study of intelligence as a natural phenomenon. Understanding mechanisms behind human and animal intelligence. "Why should we study theoretical problems when we want to know how the brain works"

outcome qualification (underfitting)

Underfitting can occur when we have too little training data. Underfitting refers to a model that can neither model the training data nor generalize to new data. An underfit machine learning model is not a suitable model and will be obvious as it will have poor performance on the training data.

deliberative hierarchical paradigm

This is a classical AI-approach, AI inspired. 1. The robot operates in a top-down fashion, heavy on planning. 2. The robot senses the world, plans the next action, acts; at each step the robot explicitly plans the next move. All the sensing data tends to be gathered into one global world model.

Declarative Programming

This is a programming style where the programmer specifies what to do, not how the calculation should be done. No execution flow is described. In principle, the instructions can be given in any order.

Known vs. Unknown

This is a property of the agent, rather than of the environment. ● In a known environment, the laws of the environment are part of the knowledge of the agent. The outcomes or outcome probabilities (how an action will modify the environment) are known. A known environment doesn't mean that the environment is fully observable. E.g. a card game where you know the rules. "The agent knows what the consequence of an action will be." ● In an unknown environment, the agent does not know the outcome of its actions. E.g. a game for which the agent does not know the rules. "The agent does not know what the consequence of an action will be."

weight

Weights determine the strength and sign of the connection between links. Weight is the parameter within a neural network that transforms input data within the network's hidden layers. The higher the weight, the greater the activation of a neuron in the next layer.

foreshortening

When an artist foreshortens, she makes an object appear closer or a distance shorter than it is, to create a sense of depth in a painting or drawing. To foreshorten is to create a kind of optical illusion simply by making lines shorter or angling the perspective in a certain way.

Diffuse and specular reflection

diffuse: Diffuse reflection is the reflection of light or other waves or particles from a surface such that a ray incident on the surface is scattered at many angles rather than at just one angle as in the case of specular reflection. - Diffuse is ok. High probability that an echo will return. specular: Specular reflection is a type of surface reflectance often described as a mirror-like reflection of light from the surface. In specular reflection, the incident light is reflected into a single outgoing direction. - Specular is no good. The sound may return later or not return at all. The former is harder to deal with.

types of learning

o Classification/supervised learning: In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. Examples of supervised learning algorithms are decision trees, linear regression and logistic regression. An algorithm for supervised learning: 1. Collect a (large) set of examples 2. Divide them randomly into a training set and a test set 3. Generate a hypothesis using the training set 4. Measure the performance on examples from the test set (e.g. number of correctly classified examples) 5. Repeat until satisfactory performance is achieved.
o Clustering/unsupervised learning: An unsupervised model, in contrast, is given unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own. In other words, we show a lot of data to our algorithm and ask it to find patterns in the data by itself.
o Reinforcement learning: Reinforcement learning is the training of machine learning models to make a sequence of decisions. The agent learns to achieve a goal in an uncertain, potentially complex environment. In reinforcement learning, an artificial intelligence faces a game-like situation. The computer employs trial and error to come up with a solution to the problem. To get the machine to do what the programmer wants, the artificial intelligence gets either rewards or penalties for the actions it performs. Its goal is to maximize the total reward. Think of the monkey and its juice.

costs of NN

o Databases/data farms: A data farm is essential for large companies that need to gather information, process that information, and distribute the largest number of solutions in an efficient manner. Data farms are essential for extracting raw data, decoding that raw data, and computing various outputs based on the inputs. Data farming is the process of using designed computational experiments to "grow" data, which can then be analyzed using statistical and visualization techniques to obtain insight into complex systems. These methods can be applied to any computational model.
o Computation time: Neural nets can prove difficult to train and require a lot of computing power, as they perform heavy operations on large amounts of data and may perform poorly if designed poorly. They may consume vast amounts of memory, storage and time. Making use of more powerful computational hardware like GPUs cuts training time; reducing model complexity as much as possible also helps. - You cannot have an arbitrarily large neural network. This puts demands on the problem specification for companies, and priorities must be set based on available resources. Time is money.
o Energy and hardware costs of data collection: Among the many challenges in machine learning, data collection is becoming one of the critical bottlenecks. It is known that the majority of the time for running machine learning end-to-end is spent on preparing the data, which includes collecting, cleaning, analyzing, visualizing, and feature engineering. While all of these steps are time consuming, data collection has recently become a challenge for the following reasons: first, as machine learning is becoming more widely used, we are seeing new applications that do not necessarily have enough labeled data, and deep learning may require larger amounts of training data to perform well. Second, unlike traditional machine learning, deep learning techniques automatically generate features, which saves feature engineering costs, but in return may require larger amounts of labeled data. (Not sure whether this is correct; pasted from a report on data collection.)

layers

o Input/output layers: The input layer is the first layer in the neural network. It takes the input signals (values) and passes them on to the next layer. It doesn't apply any operations to the input signals (values) and has no weights or bias values associated with it. The output layer is the last layer of neurons and produces the given outputs for the program. The output is the weighted sum.
o Hidden layers: Hidden layers apply transformations to the input data. It is within the nodes of the hidden layers that the weights are applied. For example, a single node may take the input data and multiply it by an assigned weight value, then add a bias before passing the data to the next layer. - This is where most of the computation happens (compare with a black box): all the layers in the middle, input -> hidden layer -> output. Neurons specialize in increasingly complex components; the earlier in the process, the smaller the components a neuron specializes in. In a later layer, a neuron can be activated by e.g. four neurons in the previous layer.

supervised learning process

o Learning Process ▪ Training: Training a Neural Network means finding the appropriate weights of the Neural Connections thanks to a feedback loop called backward propagation. A training dataset is a dataset of examples used during the learning process and is used to fit the parameters (e.g., weights) of, for example, a classifier. Most approaches that search through training data for empirical relationships tend to overfit the data, meaning that they can identify and exploit apparent relationships in the training data that do not hold in general. ▪ Validation: In machine learning, model validation is referred to as the process where a trained model is evaluated with a testing data set. The testing data set is a separate portion of the same data set from which the training set is derived. The main purpose of using the testing data set is to test the generalization ability of a trained model. Validation set is different from the test set. Validation set actually can be regarded as a part of training set, because it is used to build your model, neural networks or others. It is usually used for parameter selection and to avoid overfitting. ... Test set is used for performance evaluation. ▪ Testing: Testing phase is when your previously trained network is now classifying new unseen data. Test Dataset: The sample of data used to provide an unbiased evaluation of a final model fit on the training dataset.
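A hedged sketch of splitting a dataset into training, validation and test sets; the fractions are common but arbitrary choices:

```python
import random

def split(dataset, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle once, then divide into training, validation, and test sets."""
    data = dataset[:]
    random.Random(seed).shuffle(data)
    n_train = int(len(data) * train_frac)
    n_val = int(len(data) * val_frac)
    return (data[:n_train],                     # fit the weights
            data[n_train:n_train + n_val],      # tune parameters / detect overfitting
            data[n_train + n_val:])             # final unbiased evaluation

examples = list(range(100))
train, val, test = split(examples)
print(len(train), len(val), len(test))   # 70 15 15
```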

pose sensors

relative: In computer vision and robotics, a typical task is to identify specific objects in an image and to determine each object's position and orientation relative to some coordinate system. Relative pose sensors can give an absolute pose by "dead reckoning": accumulating relative motion. Examples of sensors using dead reckoning: - Wheel odometry - Accelerometers and gyroscopes. absolute: An absolute position sensor gives information on its position within a given scale or range without the need for a reference point. E.g. GPS: - Radio signals are transmitted from the satellites to the receiver on the ground - The time-of-flight is estimated - Trilateration gives the position (x, y, z) - Multiple receiver antennas can provide the full pose (x, y, z, roll, pitch, yaw)

Percept Sequence

"Allt som vår agenten någonsin har percepterat i hela sin livstid"If we know which action the agent will take for any possible percept sequence, we know everything about the agent.

partially observable problem

"Otherwise, it is partially observable. Eg. An autonomous vehicle (cannot see objects behind a corner

competitive behavior

"Winner takes all", "Arbitration" Fixed priority methods: - Subsumption - "Subsumption light" Dynamic priority methods: - Activity based; Select the most "active" behavior. - Voting based; each behavior suggests MANY responses

mind body environment niche relation

- A robot always requires some kind of energy source. It must be equipped with sensors and effectors in order to perform its task in a particular environment, or more precisely, in a particular ecological niche. For example, if a robot has to work at night, it may be better to equip it with IR devices rather than with vision sensors. There is no such thing as a universal robot, because the robot must perform in the real world, which consists of many varied environments to which a particular robot may or may not be suited.

Potential fields advantages

- Easy to implement and visualize. Robot behavior easy to predict by designer - Support for parallelism. Each field is independent of the others and may be implemented as general software, or even hardware, modules - They can be easily parameterized and configured, also during design or in real-time - The combination mechanism is flexible and can be tweaked with gains to reflect varying importance of sub-behaviors

scheduling

- In computing, scheduling is the method by which work is assigned to resources that complete the work. The work may be virtual computation elements such as threads, processes or data flows, which are in turn scheduled onto hardware resources such as processors, network links or expansion cards. - Automated planning and scheduling, sometimes denoted as simply AI planning, is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory.

monotonic reasoning

- Reasoning can be monotonic (first-order logic) or non-monotonic. Monotonic = the truth of a proposition does not change when new information is added. Monotonic reasoning is the definite clause logic in which anything that could be concluded before a clause is added can still be concluded after it is added. E.g. All humans are mortal AND Socrates is a human --> Socrates is mortal. Monotonic reasoning will adapt to the rule.

search

- Visual search is a type of perceptual task requiring attention that typically involves an active scan of the visual environment for a particular object or feature (the target) among other objects or features (the distractors). - Search in AI is the process of navigating from a starting state to a goal state by transitioning through intermediate states. Almost any AI problem can be defined in these terms. State — A potential outcome of a problem. Transition — The act of moving between states.

weak AI

- Weak/Narrow AI is very specialized in one thing only. This is all AI today. Weak AI is limited to solving only certain specified problems. It might be really good at this particular thing (chess, for example) but cannot apply this knowledge to an unknown or partially new situation. Classical AI (a part of narrow AI): the study of intelligence on theoretical grounds - why should we limit ourselves to human intelligence? Symbolic AI (or Classical AI) is the branch of artificial intelligence research that concerns itself with attempting to explicitly represent human knowledge in a declarative form (i.e. facts and rules). It basically refers to a kind of intelligence that is strongly symbolic and oriented to logic and language processing. One basic point is the duality of body vs. mind. It is in this period that the mind starts to be compared with computer software.

partly vs fully observable

1. Partly vs. fully observable: ● A task environment is fully observable if the agent's sensors detect all aspects that are relevant to the choice of action. E.g. a thermostat has one task and always observes the temperature in the room. - "Sees everything relevant at any given moment." ● Otherwise, it is partially observable. E.g. an autonomous vehicle (cannot see objects behind a corner). - "Does not always see everything relevant." ● If there are no sensors, the task environment is unobservable.

rationality of agents depends on....

1. Possible actions 2. Performance measure 3. Percept sequence = past + current percepts 4. Built-in knowledge (knowledge base, describing a situation in which the world is in.)

Static vs. Dynamic

1. Static vs. dynamic ● If the environment can change while the agent is deliberating (deciding on an action), it is dynamic. E.g. the bomb example and a cooking robot, where the food can burn while the robot decides whether to switch off the heat or not. - "If the environment can change while the robot is deciding on its next action." ● If the world does not change, but the outcome does, the environment is semi-dynamic. E.g. when the performance measure depends on the time it takes to complete an action. ● If the agent can perceive and produce a response without considering the time it takes to produce that response, the environment is static and will not change. E.g. a sudoku-solving agent. "The environment is time-independent."

agent function

: "Agent function tillåter oss att räkna ut hur agenten kommer att agera, utifrån vår percept sequence". We can describe the agent with an agent function. An agent function is a map from the percept sequence(history of all that an agent has perceived till date) to an action u = f(y)[action equals percepts sequence]

basic atomic representation

: "jag vill från mitt rum till ett annat rum" No internal structures. Possible states of the environment, with arrows that show how you can go from one environment to another. Atomic models have no internal structure; the state either does or does not match what you're looking for. In a sliding tile puzzle, for instance, you either have the correct alignment of tiles or you do not. Each state is a discrete single entity, we know when it is possible to transition between two states. There are no sub-entities describing each state, it is just a point in a graph.

complete agents

: A robot with artificial intelligence capable of completing difficult tasks is: Self-sufficient: Ability to sustain themselves over extended periods of time. The ability to survive. Embodied: Realized as physical systems capable of acting in the real world Situated: Situated agents sense and act upon the environment from their own perspective Autonomous: Function without supervision, intervention or instruction from another agent

fully autonomous

: Fully autonomous systems arguably not here yet, depending on your definition. E.g. does it work everywhere or only in limited areas? - Many examples of efforts, e.g. by Boston Dynamics - For cars there is a scale with six levels (0-5) between fully manual and fully autonomous -Standard by Society of Automotive Engineers (SAE) 2014 - Tesla Autopilot at level 2 - mostly autonomous but need driver to take over immediately when the system fails

situated agent

A situated agent is an agent that interacts with the environment through its own sensors and actuators. - Has the ability to sense the environment - Has the ability to manipulate the environment

subsumption

A subsumption architecture is a control architecture that achieves rapid real-time responses in a layered collection of preprogrammed, concurrent condition-action rules, where lower layers can function independently of the higher ones, and higher layers utilize the outputs of the lower ones and can subsume (inhibit or suppress) them. Each layer has a separate goal.

A* search

A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). This function is a combination of the cost of getting to the node and the heuristic evaluation of the node: f(n) = g(n) + h(n). The value f(n) is the estimated cost of the cheapest solution that goes through node n. Note: A* works for any search problem, but the heuristic function h is problem specific.
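A compact sketch of A* with f(n) = g(n) + h(n) on a made-up graph and heuristic (assumed admissible):

```python
import heapq

def a_star(graph, h, start, goal):
    """Best-first search ordered by f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float("inf")

# Toy graph with made-up edge costs and a made-up admissible heuristic h
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 3)], "G": []}
h = {"S": 4, "A": 3, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))   # (['S', 'A', 'B', 'G'], 6)
```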

Actuators

Actuators are something that control or move things around in a system (effectors move things around in the environment). Actuators thus enable the agent to use its effector, e.g. the arm that carries the gripper claw.

Reactive Agents

An agent that only uses responses to stimuli. Reactive agents select actions on the basis of the current percept, ignoring the rest of the percept sequence. E.g. a vacuum cleaner just needs to know if its current square is clean or dirty in order to choose its action. They do not need memory. They can be described by if-then rules, but in practice there are more sophisticated implementations. Pros: simple, transparent, effective in dynamic and known environments. Reactive agents do not maintain internal state, unlike deliberative agents. The difference between reactive agents and deliberative agents can be indistinct, though; it can simply be said that an agent that has no internal state is a reactive agent.

artificial intelligence

Artificial intelligence is "intelligence" demonstrated by machines. AI programs may mimic or simulate cognitive behaviours or traits associated with human intelligence such as reasoning, problem solving and learning. It can be thought of as when autonomous agents are able to learn from interacting with their environment and exhibit emergent behaviors as a result, or when autonomous agents can sense, reason, and act in response to stimuli as well as prior knowledge acquired through previous experience - this applies only to learning agents; not all agents can do this, as we have learned. I would define an AI as something that can perform tasks that, if performed by humans, would be considered intelligent. These tasks involve things such as problem solving, intelligent behaviors and decision making. This requires abilities such as reasoning, understanding and processing of natural language, memory and perception.

hybrid paradigm

A combination of both of the above. Given a big task, the robot plans and then divides the task into smaller subtasks. Then it plans which behaviour is appropriate. It plans, carries out its action/behaviour (each time it carries out an action it works like the reactive paradigm), and updates itself as it goes along. Somewhere in between it can plan again.

most important difference between declarative and imperative programming

The most important difference between declarative and imperative programming is that declarative programming focuses on what the program should accomplish, while imperative programming focuses on how the program should achieve the result.

rational agent

Doing the right things. A rational agent knows what the right action at the right time is; the agent checks whether it changes the environment in the way that it wants it to change. A rational agent could be anything that makes decisions: a person, firm, machine, or software. It carries out the action with the best outcome after considering past and current percepts (the agent's perceptual inputs at a given instance). An AI system is composed of an agent and its environment.

Domain Adaptability

Domain adaptation is the ability to apply an algorithm trained in one or more "source domains" to a different (but related) "target domain". Domain adaptation is a subcategory of transfer learning.

advantages with reactive paradigm

Fast reaction time - Requires less memory and CPU power - Easy to implement and expand - Behaviors are independent - No modelling necessary - Makes no Closed World Assumption - Works in dynamic environments that are difficult to characterize and contain a lot of uncertainty

first order logic

First-order logic is another way of knowledge representation in artificial intelligence, representing natural language statements in a concise way. It is an extension of propositional logic. It is a formal language for representing information so that conclusions can be drawn. First-order logic is a powerful language that describes information about objects in an easier way and can also express the relationships between those objects. First-order logic (like natural language) does not only assume that the world contains facts, like propositional logic, but also assumes the following things in the world: - Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus... - Relations: a unary relation such as red, round, is adjacent, or an n-ary relation such as the sister of, brother of, has color, comes between - Functions: father of, best friend, third inning of, end of, ... First-order logic also has two main parts: o Syntax: defines the sentences in the language/grammar o Semantics: defines the meaning of sentences, or the truth of a sentence in a world

semi autonomous

General idea: teleoperation for hard tasks, autonomy for simple tasks. The human operator: - Delegates a task - Monitors the process - Interrupts for hard sub-tasks, or if anything goes wrong. E.g. the Mars rover Sojourner: - DRIVE TOWARD THAT STONE - Obstacle avoidance when driving towards the stone. We can mitigate the issues of teleoperation with two approaches: Tele-presence - improving HRI with e.g. virtual/augmented reality - Head-mounted displays to aid in operating advanced military vehicles. Semi-autonomy - A more intelligent system => less demanding for the operator - Not fully autonomous; we still keep a human in the loop.

goal-based agent

Goal based agents choose their action in order to achieve a specific goal. Creates a model of the world to help it understand it. Acts reactively based on the model (internal representation). These kinds of agents make decisions based on how far they are currently from their goal. They can choose and complete multiple goals, one after another. You can break down your task into smaller goals. Pros: Adaptation to the environment, no need to anticipate all the scenarios, good for dynamic environments. The research areas related to choosing the right actions to execute a goal are those of search and planning.

non - monotonic reasoning

Non-monotonic = the truth of a proposition can change when new information is added. A logic is non-monotonic if some conclusions can be invalidated by adding more knowledge. E.g. Birds typically fly AND Tweety is a bird --> Tweety presumably flies? (but penguins don't fly? a contradictory situation) Non-monotonic reasoning would find an error and crash, because we cannot say that the bird can fly and cannot fly at the same time.

validity

If all interpretations are true then the formula is valid. So in short, a proposition is valid if all values it returns in the truth table are true. In general, to determine validity, go through every row of the truth-table to find a row where ALL the premises are true AND the conclusion is false. Can you find such a row? If not, the argument is valid. If there is one or more rows, then the argument is not valid. ( if a formula is valid, it is also satisfiable!)
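A small sketch that checks satisfiability and validity by brute-forcing the truth table; the example formula is made up:

```python
from itertools import product

def truth_table(formula, variables):
    """Evaluate a propositional formula for every interpretation."""
    return [formula(*values) for values in product([True, False], repeat=len(variables))]

# Hypothetical formula: (P and Q) -> P, written as (not (P and Q)) or P
formula = lambda p, q: (not (p and q)) or p

rows = truth_table(formula, ["P", "Q"])
print("satisfiable:", any(rows))   # at least one row is true
print("valid:", all(rows))         # every row is true (so also satisfiable)
```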

frame problem

In artificial intelligence, the frame problem describes an issue with using first-order logic (FOL) to express facts about a robot in the world. Representing the state of a robot with traditional FOL requires the use of many axioms that simply imply that things in the environment do not change arbitrarily. For example, Hayes describes a "block world" with rules about stacking blocks together. In a FOL system, additional axioms are required to make inferences about the environment (for example, that a block cannot change position unless it is physically moved). The frame problem is the problem of finding adequate collections of axioms for a viable description of a robot environment. The frame problem describes the difficulty of knowledge representation for AI and explores how to model change. It assumes a model of logical propositions that can change with any given action executed at any given point in time, and explores system-environment interactions. That is, how to know when actions make changes in the environment while everything else within the "frame" is implicitly assumed to remain the same. Reactive AI and the frame problem: if the robot does not represent the world, the frame problem disappears.

Utility-based agents

In most cases, there are more or less good ways to reach a goal. Some ways are quicker, safer, more reliable or cheaper than others. Sometimes we cannot capture all of the behaviours we want from the agent in one goal: an autonomous vehicle might want to balance safety, passenger satisfaction, speed to destination, etc. Utility-based agents choose the action or sequence of actions that is expected to maximise a utility function. Utility functions assign a numerical value to each state of the environment, so that a state that satisfies more preferences has a higher value. E.g. a GPS has several possible routes, but which one is best? When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough; we may look for a quicker, safer, cheaper trip to reach a destination. Agent happiness should be taken into consideration, and utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number which describes the associated degree of happiness. It can be hard to build a utility function.
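A minimal sketch of the "maximize expected utility" idea, assuming a hypothetical outcome model and utility function (the route names, probabilities and weights are invented for the example):

```python
# Pick the action with the highest expected utility over uncertain outcomes.
def expected_utility(action, outcome_model, utility):
    # outcome_model(action) -> list of (probability, resulting_state)
    return sum(p * utility(state) for p, state in outcome_model(action))

def choose_action(actions, outcome_model, utility):
    return max(actions, key=lambda a: expected_utility(a, outcome_model, utility))

# Hypothetical route choice: utility trades off travel time against risk.
def route_outcomes(action):
    return {
        "highway":   [(0.9, {"time": 30, "risk": 0.02}), (0.1, {"time": 90, "risk": 0.02})],
        "back_road": [(1.0, {"time": 50, "risk": 0.01})],
    }[action]

utility = lambda s: -s["time"] - 1000 * s["risk"]
print(choose_action(["highway", "back_road"], route_outcomes, utility))  # "highway"
```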

robotic paradigm

In robotics, a robotic paradigm is a mental model of how a robot operates. A robotic paradigm can be described by the relationship between the three basic elements of robotics: Sensing, Planning, and Acting. It can also be described by how sensory data is processed and distributed through the system, and where decisions are made. ("A way of thinking.")

disadvantages with reactive paradigm

Interacting behaviors can be unpredictable: "fast, cheap and out of control..." (Rodney Brooks). - Low-level intelligence. - No memory: can't handle sequences of behaviors in a smart way that depends on what previously worked and what didn't; can't learn anything. - No world representations: doesn't know where it is. - Often more art than science.

Logical thinking

Logical thinking is the act of analyzing a situation and coming up with a sensible solution. Similar to critical thinking, logical thinking requires the use of reasoning skills to study a problem objectively, which will allow you to make a rational conclusion about how to proceed.

physical structure of robots

Robot anatomy deals with the study of different joints and links and other aspects of the manipulator's physical construction.

reactive paradigm

Sensor data is immediately transformed into actions. Several behaviors are coordinated and active at the same time. Each behavior... o is implemented as reactive rules, mapping stimuli to responses without thinking or computing o has no (or little) memory o computes responses continuously. A built-in behavior that does NOT involve planning. This paradigm is based on biology (e.g. insects); essentially reflexive behavior. E.g. a robot that performs an action and then crashes into a wall: act first, sense later.
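An illustrative sketch (my example, not from the card) of behavior-based reactive control: each behavior maps stimuli directly to a response, with no planning and no memory between calls; the sensor fields and priority ordering are assumptions.

```python
# Each behavior returns a response for the current percept, or None if not triggered.
def avoid_obstacle(percept):
    if percept["distance_ahead"] < 0.3:        # metres; hypothetical range reading
        return {"turn": 1.0, "forward": 0.0}   # reflex: turn away
    return None

def wander(percept):
    return {"turn": 0.0, "forward": 0.5}       # default: drive forward

BEHAVIORS = [avoid_obstacle, wander]           # earlier behaviors take priority (subsumption-like)

def reactive_step(percept):
    for behavior in BEHAVIORS:
        response = behavior(percept)
        if response is not None:
            return response

print(reactive_step({"distance_ahead": 0.2}))  # {'turn': 1.0, 'forward': 0.0}
print(reactive_step({"distance_ahead": 2.0}))  # {'turn': 0.0, 'forward': 0.5}
```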

simple vs model based agents

Simple vs. model-based agents: "simple" and "model-based" are broad categories; e.g. a reactive agent is a simple agent. - Simple reflex agents: these agents only succeed in a fully observable environment. A simple reflex agent does not consider any part of the percept history in its decision and action process; it works on condition-action rules, meaning it maps the current state to an action. Great for fully observable scenarios. Negative aspects of simple agents: - Very limited intelligence. - No knowledge of non-perceptual parts of the state. - The rule set is usually too big to generate and store. - If any change occurs in the environment, the collection of rules needs to be updated. - Model-based agents: keep an internal representation of the state of the world, and use it together with the percept to determine the correct action. Example: a robot vacuum cleaner. ● To determine the state of the world, they are equipped with a model: information about how the environment evolves around the agent + information about how the agent's actions modify the world. Great for known, partially observable environments.
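A small contrast sketch using the two-square vacuum-cleaner example (illustrative; the percept format and action names are assumptions, not from the card):

```python
# Simple reflex agent: condition-action rules only, no memory of past percepts.
def simple_reflex_vacuum(percept):
    location, dirty = percept                  # e.g. ("A", True)
    if dirty:
        return "suck"
    return "move_right" if location == "A" else "move_left"

# Model-based agent: keeps an internal model of which squares it believes are clean.
class ModelBasedVacuum:
    def __init__(self):
        self.believed_clean = set()
    def act(self, percept):
        location, dirty = percept
        if dirty:
            return "suck"
        self.believed_clean.add(location)      # update the internal world model
        if {"A", "B"} <= self.believed_clean:
            return "noop"                      # model says everything is clean
        return "move_right" if location == "A" else "move_left"
```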

schema theory

Simply put, schema theory states that all knowledge is organized into units. Within these units of knowledge, or schemata, information is stored. A schema, then, is a generalized description or a conceptual system for understanding knowledge: how knowledge is represented and how it is used. According to this theory, schemata represent knowledge about concepts: objects and the relationships they have with other objects, situations, events, sequences of events, actions and sequences of actions. A simple example is to think of your schema for a dog. Within that schema you most likely have knowledge about dogs in general (bark, four legs, teeth, hair, tails) and probably information about specific dogs, such as collies (long hair, large, Lassie). You may also think of dogs within the greater context of animals and other living things; that is, dogs breathe, need food, and reproduce. Depending upon your personal experience, the knowledge of a dog as a pet (domesticated and loyal) or as an animal to fear (likely to bite or attack) may be a part of your schema. And so it goes with the development of a schema: each new experience incorporates more information into one's schema.

spatial reasoning

Spatial reasoning is a category of reasoning skills that refers to the capacity to think about objects in three dimensions and to draw conclusions about those objects from limited information. Someone with good spatial abilities might also be good at thinking about how an object will look when rotated.

teleoperation

Teleoperation means operating a machine at a distance, with sensory feedback to the operator. Examples: - Military drone strikes - Medical applications (e.g. endoscopic surgery) - Space robots (e.g. the Lunokhod 1 rover) - Some internal closed loop (e.g. height and path)

the decision problem

The decision problem is a complex one involving many tradeoffs and careful reading of guidebooks. Now, suppose the agent has a nonrefundable ticket to fly out of Bucharest the following day. In that case, it makes sense for the agent to adopt the goal of getting to Bucharest. Courses of action that don't reach Bucharest on time can be rejected without further consideration and the agent's decision problem is greatly simplified. Goals help organize behavior by limiting the objectives that the agent is trying to achieve and hence the actions it needs to consider.

classic problems - homunculus problem

The homunculus argument is a fallacy whereby a concept is explained in terms of the concept itself, recursively, without first defining or explaining the original concept. This fallacy arises most commonly in the theory of vision. "It is an unexplained mechanism that solves the problem."

actuators

The part the robot uses to interact with its surroundings (e.g. wheels, legs, arms). An actuator is the actual mechanism that enables the effector to execute an action. Actuators are the devices that actually move the robot's joints, and there are a number of different types of actuators in common use in robotics. A good definition of an actuator is a device that causes motion, either linear or rotary.

sensor

The part the robot uses to sense its surroundings (e.g. camera, infrared sensors, position sensors, gyroscope). Robotic sensors are used to estimate a robot's condition and environment. These signals are passed to a controller to enable appropriate behavior. Sensors in robots are modelled on the functions of human sensory organs. Robots require extensive information about their environment in order to function effectively.

symbol grounding problem

The symbol grounding problem explores semantics and how symbols relate to the real world. With no human-interpreter within a system, the internal use of symbols cannot be mapped to the appropriate relations in the outside world. How can we make that meaning intrinsic to the system?

innate releasing mechanisms

The term "innate releasing mechanism" belongs to a classical concept of ethology. It refers to a neural sensorimotor interface that mediates between a key stimulus and the adequate action pattern. An IRM thus has stimulus recognition and localization properties at its input side and behavior-releasing properties at its output side. The IRM should allow an animal to recognize and respond to a behaviorally relevant object that the animal had never encountered before. The term "innate," however, is controversial since it may have different meanings, such as present at birth, a behavioral difference caused by a genetic difference, adapted over the course of evolution, unchanging throughout development, shared by all members of a species, present before the behavior serves any function, not learned.

diversity compliance trade off

What you already know vs. doing something new. Almost all notions of intelligence introduce a trade-off between doing what you know and doing something new. This trade-off has many names, and we will run into several of them during this course. The underlying common theme involves "coming up with something new" and the "ability to adapt oneself adequately to new situations in life" [Pinter]. The key point is generating diversity while complying with the givens. We call this the diversity-compliance trade-off.

remote control

We have visual contact, but no sensory feedback. - Control commands are sent via wire or radio. - Not only toys, also heavy industrial machines

agent program

The agent program is the concrete implementation that tells the agent to actually do what the agent function specifies.

depth perception

Depth perception is the ability to perceive the world in three dimensions and to judge the distance to objects. In robots and computer vision it is typically obtained from cues such as stereo disparity between two cameras, motion, or dedicated range sensors.

fully observable problem

● "A task environment is fully observable if the agents' sensors detect all aspects that are relevant to the choice of action. E.g. A thermostat has one task and does always observe the temperature in the room. "

sequential problem

● "In a sequential environment, a decision can affect future decisions (Domino effect).The agent may still ignore previous states in its decision (reactive agent). Can be both continuous (driving) and discrete time (chess)"

mapping

● A cognitive map is a spatial representation of the outside world that is kept within the mind. AI-based mapping not only accurately maps assorted data sources to target fields, but also maintains data integrity to improve decision-making. AI mapping makes data mapping fast and accurate.

Deterministic vs. Stochastic

● A deterministic environment can be predicted with 100% certainty: the next state is completely determined by the previous state and the agent's action. E.g. a sudoku-solving agent. (The next state can be predicted with 100% certainty from the previous states and actions.) ● Stochastic environments can't be perfectly predicted, but can still be modeled using models that include uncertainty. Note that stochastic is not the same as completely random: a stochastic process depends both on the previous state and on a random element. E.g. an autonomous vehicle (pedestrians modify the environment unpredictably). (The next state cannot be predicted, because unexpected things can happen in the environment.)
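A brief illustrative sketch (my example) contrasting a deterministic and a stochastic transition model; the "slip probability" is an invented stand-in for the random element:

```python
import random

def deterministic_step(position, action):
    # Same state + same action always gives the same next state.
    return position + (1 if action == "forward" else -1)

def stochastic_step(position, action, slip_probability=0.1):
    # Depends on the state and action *and* a random element (e.g. wheel slip).
    if random.random() < slip_probability:
        return position                    # the wheels slipped; nothing changed
    return deterministic_step(position, action)
```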

episodic vs sequential

● An episodic task environment is a series of independent tasks. What happens in one episode does not affect the next ones. In each episode, the agent receives a percept and performs an action in response. Each episode could be discrete or continuous. E.g. an agent that looks at X-rays to detect possible sickness: one image has nothing to do with the next. (The agent's actions are independent of each other.) ● In a sequential environment, a decision can affect future decisions (domino effect). The agent may still ignore previous states in its decision (reactive agent). Can be both continuous (driving) and discrete time (chess). (The agent's actions affect each other; one thing leads to another.) Therefore: episodic environments are much simpler than sequential environments, because the agent does not need to think ahead.

Classification

● Classification is a process related to categorization, the process in which ideas and objects are recognized, differentiated and understood.

planning

● Cognitive planning is one of the executive functions. It encompasses the neurological processes involved in the formulation, evaluation and selection of a sequence of thoughts and actions to achieve a desired goal.

computer vision

● Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to understand and automate tasks that the human visual system can do.

deterministic problem

● Deterministic problem: In which all the information required to solve the problem is available and, therefore, the effect of any variable can be computed with certainty.

Discrete vs. Continuous

● Discrete environments have a limited (although arbitrarily large) set of possibilities/moves that can drive the final outcome of the task. E.g. a sudoku grid has a limited number of possible states, as does a game of chess. (There is a limited number of possible actions; a bounded environment.) ● Continuous environments cannot be divided into separate states. The environment may change continuously (weather conditions), the agent may be receiving percepts continuously (infrared sensor) and/or acting in a continuous manner. E.g. taxi driving: there could be a route from anywhere to anywhere else. Self-driving cars operate in a continuous AI environment, so continuous updates are required. (There are infinitely many possible actions, since the environment is constantly changing.)

single agent vs multi-agent

● In a single-agent environment, there is a single agent. (One agent that only considers itself; no other agents.) ● In a multi-agent environment, there are several agents. Multiple agents depend on each other's actions: do not vacuum the same spot twice. E.g. a card game or a fleet of cleaning robots. (Several agents are involved and depend on each other's actions.) Types of multi-agent environments: ○ Cooperative (actions can increase all agents' performance) ○ Competitive/selfish (all performances cannot be increased at the same time) ○ or a combination of both.

definition of intelligence

● Intelligence: 1. The ability to carry out abstract thinking 2. The capacity to learn or profit from experience 3. The ability to adapt oneself adequately (enough) to relatively new situations in life Intelligence has been defined in many ways: the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. More generally, it can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

Machine learning

● Machine learning is a subfield of artificial intelligence. It concerns methods for using data to "train" computers to discover and "learn" rules for solving a task, without the computers having been explicitly programmed with rules for that specific task.

one shot problem

● One-shot learning is a classification task where one, or a few, examples are used to classify many new examples in the future. This characterizes tasks seen in the field of face recognition, such as face identification and face verification, where people must be classified correctly across different facial expressions, lighting conditions, accessories, and hairstyles given one or a few template photos. (One-shot learning is when young children observe what you do just a few times and then start practicing it by themselves immediately. This is a programmer's greatest dream: to make a computer do this without training it on millions of examples first.)

sense reason act loop

● SENSE: The robot needs the ability to sense important things about its environment, like the presence of obstacles or navigation aids. What information does your robot need about its surroundings, and how will it gather that information? ● PLAN/REASON: The robot needs to take the sensed data and figure out how to respond appropriately to it, based on a pre-existing strategy. Do you have a strategy? Does your program determine the appropriate response, based on that strategy and the sensed data? ● ACT: Finally, the robot must actually act to carry out the actions that the plan calls for. Have you built your robot so that it can do what it needs to, physically? Does it actually do it when told?
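A minimal sketch of the sense-plan-act loop described above, using a hypothetical robot interface (`read_sensors`, `execute`) and a dummy strategy:

```python
def control_loop(robot, strategy, steps=100):
    for _ in range(steps):
        percept = robot.read_sensors()    # SENSE: gather data about the surroundings
        action = strategy(percept)        # PLAN/REASON: decide a response from a strategy
        robot.execute(action)             # ACT: physically carry out the chosen action

class DummyRobot:
    def read_sensors(self):
        return {"distance_ahead": 1.5}    # pretend range reading, in metres
    def execute(self, action):
        print("executing:", action)

# Stop if something is close ahead, otherwise keep driving forward.
control_loop(DummyRobot(),
             lambda p: "stop" if p["distance_ahead"] < 0.5 else "forward",
             steps=3)
```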

stochastic problem

● A stochastic problem can't be perfectly predicted, but can still be modeled using models that include uncertainty. Note that stochastic is not the same as completely random: a stochastic process depends both on the previous state and on a random element. E.g. an autonomous vehicle (pedestrians modify the environment unpredictably).

