Artificial Intelligence Old

computational learning theory

a branch of theoretical computer science that is the mathematical analysis of machine learning algorithms and their performance

Neural networks or Artificial neural network (ANNs)

a family of models inspired by biological neural networks (the central nervous systems of animals, in particular the brain) which are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown; these are essentially simple mathematical models defining a function f : X → Y, or a distribution over X, or both X and Y, but sometimes models are also intimately associated with a particular learning algorithm or learning rule

"intelligent" machine

a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at an arbitrary goal

a training example

a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal)

backpropagation

an abbreviation for "backward propagation of errors"; a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of a loss function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the loss function. Backpropagation requires a known, desired output for each input value in order to calculate the loss function gradient, so it is usually considered a supervised learning method, although it is also used in some unsupervised networks such as autoencoders. It is a generalization of the delta rule to multi-layered feedforward networks, made possible by using the chain rule to iteratively compute gradients for each layer. Backpropagation requires that the activation function used by the artificial neurons (or "nodes") be differentiable.
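
A minimal sketch of the idea in Python with NumPy (the two-layer network, sigmoid activation, and squared-error loss are illustrative assumptions, not part of the definition above): the chain rule pushes the error gradient back from the output layer to the hidden layer, and gradient descent uses those gradients to update the weights.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative two-layer network: x -> hidden -> prediction
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input (3) -> hidden (4) weights
W2 = rng.normal(size=(4, 1))   # hidden (4) -> output (1) weights

x = np.array([[0.5, -1.2, 0.3]])   # one input vector
y = np.array([[1.0]])              # known, desired output (supervisory signal)

lr = 0.1
for _ in range(100):
    # forward pass
    h = sigmoid(x @ W1)            # hidden activations
    y_hat = sigmoid(h @ W2)        # prediction
    # backward pass: the chain rule gives the gradient of the squared-error
    # loss with respect to every weight in the network
    d_out = (y_hat - y) * y_hat * (1 - y_hat)
    grad_W2 = h.T @ d_out
    d_hidden = (d_out @ W2.T) * h * (1 - h)
    grad_W1 = x.T @ d_hidden
    # gradient descent uses the gradients to update the weights
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1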

supervised learning algorithm

analyzes the training data and produces an inferred function, which can be used for mapping new examples

neural net research

attempts to simulate the structures inside the brain that give rise to sub-symbolic skills such as fast, intuitive judgement

knowledge representation and knowledge engineering

central to AI; represented by ontologies

training data

consist of a set of training examples

natural language processing

gives machines the ability to read and understand the languages that humans speak

reinforcement learning

machine learning area inspired by behaviorist psychology, concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
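
A minimal sketch of the reward-driven idea in Python, assuming a tiny made-up environment and a tabular Q-learning rule (one common way of turning a sequence of rewards and punishments into a strategy; the card above does not prescribe a specific algorithm):

import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]   # the agent's strategy-in-progress
alpha, gamma = 0.5, 0.9                            # learning rate, discount factor

def step(state, action):
    """Hypothetical environment: action 1 moves right, action 0 moves left.
    Reaching the last state is rewarded; every other move is mildly punished."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else -0.1
    return next_state, reward

state = 0
for _ in range(500):
    action = random.randrange(n_actions)           # act (here: at random, for simplicity)
    next_state, reward = step(state, action)
    # the reward (or punishment) updates the value of the action just taken
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    state = 0 if next_state == n_states - 1 else next_state

# after training, acting greedily on Q is the learned strategy
policy = [max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states)]
print(policy)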

opponents of the symbolic approach

roboticists such as Rodney Brooks, who aims to produce autonomous robots without symbolic representation (or with only minimal representation) and computational intelligence researchers, who apply techniques such as neural networks and optimization to solve problems in machine learning and control engineering.

machine perception

the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others more exotic) to deduce aspects of the world

combinatorial explosion

the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size
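
A small worked illustration in Python (the route-ordering framing is an assumed example): the number of ways to order n items grows as n!, so exhaustive search quickly becomes infeasible.

import math

# Number of possible orderings (e.g., routes visiting n cities) as n grows
for n in (5, 10, 15, 20):
    print(n, math.factorial(n))
# 5 -> 120, 10 -> 3,628,800, 20 -> roughly 2.4e18 orderings:
# memory and time for exhaustive search become astronomical beyond a modest size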

regression

the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change
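
A minimal least-squares sketch in NumPy (the toy data and the choice of a straight-line model are assumptions): it produces a function relating inputs to outputs and can predict the output for a new input.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])      # inputs
y = np.array([2.1, 3.9, 6.2, 8.1])      # observed outputs

# Fit y ≈ a*x + b by ordinary least squares
A = np.vstack([x, np.ones_like(x)]).T
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

predict = lambda new_x: a * new_x + b   # the inferred function
print(predict(5.0))                     # predicted output for an unseen input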

Symbolic artificial intelligence

the collective name for all methods in artificial intelligence research that are based on high-level "symbolic" (human-readable) representations of problems, logic and search. Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the late 1980s

network in ANN

the inter-connections between the neurons in the different layers of each system. An example system has three layers. The first layer has input neurons which send data via synapses to the second layer of neurons, and then via more synapses to the third layer of output neurons. More complex systems have more layers of neurons, including additional layers between the input and output neurons. The synapses store parameters called "weights" that manipulate the data in the calculations.
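
A minimal forward-pass sketch of the three-layer example in NumPy (the layer sizes and the sigmoid activation are illustrative assumptions): the weight matrices play the role of the synapses, storing the parameters that manipulate the data as it flows from the input neurons, through the second layer, to the output neurons.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W_in_hidden = rng.normal(size=(4, 6))    # "synapses" from 4 input to 6 hidden neurons
W_hidden_out = rng.normal(size=(6, 2))   # "synapses" from 6 hidden to 2 output neurons

x = rng.normal(size=(1, 4))              # data presented to the input layer
hidden = sigmoid(x @ W_in_hidden)        # second layer of neurons
output = sigmoid(hidden @ W_hidden_out)  # third layer: the output neurons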

Supervised learning

the machine learning task of inferring a function from labeled training data; includes classification and numerical regression

Unsupervised learning

the machine learning task of inferring a function to describe hidden structure from unlabeled data; the ability to find patterns in a stream of input
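
A minimal sketch of finding structure in unlabeled data, using a few iterations of k-means in NumPy (the two-cluster toy data and k-means itself are assumptions; the card above does not name a specific algorithm):

import numpy as np

rng = np.random.default_rng(2)
# Unlabeled data: two hidden groups that the algorithm is never told about
data = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
                  rng.normal(3.0, 0.3, size=(20, 2))])

centers = data[rng.choice(len(data), size=2, replace=False)]
for _ in range(10):
    # assign each point to its nearest center, then move each center
    # to the mean of the points assigned to it
    labels = np.argmin(np.linalg.norm(data[:, None] - centers[None], axis=2), axis=1)
    centers = np.array([data[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(2)])
# 'centers' and 'labels' now describe the hidden two-cluster structure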

semantic indexing

A common method of processing and extracting meaning from natural language

ontology

A representation of "what exists"; the set of objects, properties, categories, relations, concepts and so on that the machine knows about
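
A minimal sketch of how such a representation might be held in code (all of the names below are hypothetical and purely for illustration): objects are assigned to categories, given properties, and connected by relations.

# Hypothetical micro-ontology: what the machine "knows about"
categories = {"Animal": {"Dog", "Cat"}, "Person": {"Alice"}}
properties = {"Dog": {"has_fur": True, "legs": 4}}
relations = [("Alice", "owns", "Dog")]   # (subject, relation, object) triples

def is_a(obj, category):
    return obj in categories.get(category, set())

print(is_a("Dog", "Animal"))   # True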

3 types of parameters of ANNs

An ANN is typically defined by three types of parameters: (1) the interconnection pattern between the different layers of neurons; (2) the learning process for updating the weights of the interconnections; and (3) the activation function that converts a neuron's weighted input to its output activation.

specific sub-problems of AI

Deduction, reasoning, problem solving; Knowledge representation; Planning; Learning; Natural language processing (communication); Perception; Motion and manipulation

upper ontologies

The most general ontologies; the ontologies that attempt to provide a foundation for all other knowledge

training of neural networks

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[155][156] and was introduced to neural networks by Paul Werbos

computer vision

ability to analyze visual input

main categories of networks

acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events)
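
A minimal sketch contrasting the two in NumPy (the sizes and the tanh activation are assumptions): the feedforward step uses only the current input, while the recurrent step also feeds back a hidden state that acts as a short-term memory of previous inputs.

import numpy as np

rng = np.random.default_rng(3)
W_x = rng.normal(size=(4, 8))    # input -> hidden weights
W_h = rng.normal(size=(8, 8))    # hidden -> hidden feedback weights (recurrent only)

def feedforward_step(x):
    # the signal passes in only one direction: input -> hidden
    return np.tanh(x @ W_x)

def recurrent_step(x, h_prev):
    # feedback: the previous hidden state is mixed back in,
    # giving a short-term memory of earlier inputs
    return np.tanh(x @ W_x + h_prev @ W_h)

h = np.zeros((1, 8))
for x in rng.normal(size=(5, 1, 4)):     # a short sequence of inputs
    h = recurrent_step(x, h)
y_ff = feedforward_step(x)               # same input, but no memory of earlier steps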

how reinforcement learning differs from standard supervised learning

correct input/output pairs are never presented, nor are sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
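
A minimal sketch of that exploration/exploitation balance, using an epsilon-greedy rule in Python (epsilon-greedy is an assumed example of one such balance, not something the card prescribes):

import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon explore a random action (uncharted territory);
    otherwise exploit the action currently believed to be best."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# e.g., current value estimates for three actions; no correct answer is ever given
print(epsilon_greedy([0.2, 0.5, 0.1]))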

stochastic

describes events or systems that are unpredictable due to the influence of a random variable

embodied agent approaches

emphasize the importance of sensorimotor skills to higher reasoning

most successful form of symbolic AI

expert systems, which use a network of production rules. Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols.
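
A minimal sketch of production rules in Python (the rules and facts are made up for illustration): each rule is an If-Then connection between human-readable symbols, and the system chains them to make deductions and to spot which facts it still needs to ask about.

# Hypothetical production rules: (if these symbols hold) -> (then conclude this symbol)
rules = [
    ({"has_fever", "has_rash"}, "measles_suspected"),
    ({"measles_suspected"}, "refer_to_doctor"),
]

facts = {"has_fever"}

# Forward chaining: repeatedly fire any rule whose conditions are all known facts
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

# A crude stand-in for "what questions to ask": rule conditions not yet known
needed = {c for conditions, _ in rules for c in conditions} - facts
print(facts, needed)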

"sub-symbolic" problem solving

fast, intuitive judgements that humans use to solve most of their problems rather than the conscious, step-by-step deduction that early AI research was able to model

statistical approaches to AI

mimic the probabilistic nature of the human ability to guess

Among the most popular feedforward networks

perceptrons, multi-layer perceptrons and radial basis networks

central problems (or goals) of AI

reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects

long-term goals in AI

social intelligence, creativity, and general intelligence

few selected subproblems of computer vision

speech recognition, facial recognition, and object recognition

Machine learning

the study of computer algorithms that improve automatically through experience

AI

the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

classification

used to determine what category something belongs in, after seeing a number of examples of things from several categories
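
A minimal nearest-neighbour sketch in Python (the toy examples and the 1-nearest-neighbour rule are assumptions): having seen a few labelled examples from each category, it assigns a new item to the category of its closest example.

import math

# A few examples from two categories, as (input vector, category) pairs
examples = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
            ((5.0, 5.0), "dog"), ((4.8, 5.3), "dog")]

def classify(x):
    # pick the category of the closest previously seen example
    return min(examples, key=lambda e: math.dist(x, e[0]))[1]

print(classify((1.1, 0.9)))   # "cat"
print(classify((5.1, 4.9)))   # "dog"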

