Artificial Intelligence


Turing Complete (TC)

A system that can support a Universal Turing Machine. In computability theory, a system of data-manipulation rules is said to be TC, or computationally universal, if it can be used to simulate any single-tape Turing machine. Every TC system can emulate any other TC system. It turns out to be easy to demonstrate that neural networks are Turing Complete, so they can be used to do computation.

Logical Neurons

Also known as "artificial neurons": a mathematical function conceived as a model of biological neurons. Artificial neurons are the constitutive units of an artificial neural network. The idea comes from McCulloch and Pitts's 1943 paper "A logical calculus of the ideas immanent in nervous activity", a careful and well-argued selection of simplifications of the behavior of real neurons. The artificial neuron receives one or more inputs (representing dendrites) and sums them to produce an output, or activation (representing a neuron's axon).
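
A minimal Python sketch of such a unit (the function name and the particular weights and threshold are illustrative assumptions, not from the 1943 paper):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: sum the weighted inputs (dendrites)
    and fire 1 (active axon) if the sum reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights [1, 1] and threshold 2, the unit acts as an AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron([a, b], [1, 1], 2))
```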

Computer Science

Became an academic field in its own right in the 1950s and rapidly chewed through many of its new questions. Alan Turing proposed his now-famous "Imitation Game," which has evolved into the Turing Test for artificial intelligence. The term "Artificial Intelligence" was coined in 1956. "Machines will be capable, within twenty years, of doing any work a man can do." - Herbert Simon, 1965.

Universal Turing Machine

First mathematical breakthrough for the disciplines of Computer Science and Artificial Intelligence (1936). A theoretical construct proposed by Alan Turing that explained how every computer is like every other computer: each follows a list of rules. The UTM is not a physical machine but a thought experiment about computation, i.e. a paper tape of infinite length. Given a long enough tape, enough memory, and enough read/write heads, it can perform any computation, from special movie effects to (eventually) the operations of a human brain. It paved the way for talking about the behavior of algorithms without worrying about the hardware they are implemented on.
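
As a rough illustration of "a list of rules" driving a tape, here is a toy single-tape machine simulator in Python (the program encoding and names are our assumptions for the sketch):

```python
def run_turing_machine(program, tape, state="start", max_steps=1000):
    """Simulate a single-tape Turing machine. `program` maps
    (state, symbol) -> (symbol_to_write, move L/R, next_state)."""
    cells = dict(enumerate(tape))  # sparse tape; '_' is the blank symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = program[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A tiny program: flip every bit, halt at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "10110"))  # -> 01001_
```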

Artificial Neural Network

One of the tools developed during AI's heyday by McCulloch and Pitts, made of logical neurons. An artificial neural network is an interconnected group of nodes, akin to the vast network of neurons in a brain. In the slide diagram, each circular node represents an artificial neuron and an arrow represents a connection from the output of one neuron to the input of another, e.g. "A, not B" (see the sketch below). It turns out to be easy to demonstrate that neural networks are Turing Complete, so they can be used to do computation. Modern computational neuroscience imposes various limits on itself to construct "plausible" network models of brain circuits.
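
A hedged sketch of wiring two threshold units together so one neuron's output becomes another's input, computing "A and not B" (the names, weights, and thresholds are illustrative):

```python
def neuron(inputs, weights, threshold):
    """Threshold unit: fires (1) when the weighted input sum reaches the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def a_and_not_b(a, b):
    """Two connected neurons: the first computes NOT B via an inhibitory
    (negative) weight, and its output feeds the second, which ANDs it with A."""
    not_b = neuron([b], [-1], 0)          # fires only when b is 0
    return neuron([a, not_b], [1, 1], 2)  # fires only when both inputs fire

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", a_and_not_b(a, b))  # fires only for a=1, b=0
```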

Parallel Processing

Although signals flow from neuron to neuron, neurons throughout the brain fire continuously. This approach is called parallel processing: "Do everything at once and put things together on the fly." (In the spirit of the UTM, each kind of processing can, in principle, emulate the behavior of the other.) Advantages of parallel processing: 1) Far better at taking information from lots of inputs at once. 2) Robust against the failure of individual processors. 3) Multitasking comes for free (which keeps your heart beating). 4) Fast for most problems due to the lack of bottlenecks. Serial processing is very bad at "thinking like a parallel process." Recent computers use GPUs to make graphics processing more parallel precisely because modern high-end graphics are so laborious that they threaten to completely clog the CPU. The advent of multi-core CPUs generalizes this strategy.
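
A small Python sketch contrasting the two styles using worker processes (the task is a placeholder, and whether the parallel run is actually faster depends on the machine and workload):

```python
from concurrent.futures import ProcessPoolExecutor

def heavy_task(n):
    """Stand-in for an expensive, independent computation."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000] * 8

    # Serial: one calculation at a time, in strict order.
    serial_results = [heavy_task(n) for n in inputs]

    # Parallel: the same work spread across worker processes at once.
    with ProcessPoolExecutor() as pool:
        parallel_results = list(pool.map(heavy_task, inputs))

    assert serial_results == parallel_results  # same answers either way
```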

Representations

Cognitive theory sits on the foundation that the mind makes use of representations, an approach that has been very successful. Most cognitive theory is, however, speculative about the precise code the brain uses to represent things. For example, we have no idea how the brain stores integers. The ambitious AI researcher cannot handwave about the code. If the representational forms are specified by the designer, the program will likely be overly rigid. If the forms are instead devised by the program, the designer must still provide the "meta-form" for how to represent a representation.

Artificial Intelligence

Considered a "moving target" because "As soon as it works, no one calls it AI any more." - John McCarthy, who coined the phrase "artificial intelligence" in 1956. AI was a dream born of two mathematical breakthroughs (the UTM and Information Theory). The first wave of artificial intelligence research discovered that "thinking" and "learning" are actually rather complicated. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term AI is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving" (known as Machine Learning).
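
A deliberately tiny sketch of the "intelligent agent" definition: a device mapping a percept of its environment to an action in pursuit of a goal (the thermostat scenario is our illustrative assumption):

```python
def thermostat_agent(percept):
    """Minimal agent: perceive the temperature, act to further the goal
    of keeping the room at or above 20 degrees."""
    temperature = percept
    return "heat on" if temperature < 20 else "heat off"

for temp in (15, 20, 25):
    print(temp, "->", thermostat_agent(temp))
```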

Turing Test

Developed by Turing in 1950 as a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. A proposed way of dealing with the question of whether machines can think. According to Turing, the question whether machines can think is itself "too meaningless" to deserve discussion. However, if we consider the more precise question of whether a digital computer can do well in a certain kind of "Imitation Game," then, at least in Turing's eyes, we do have a worthwhile question. "The Turing Test" is sometimes used more generally to refer to some kinds of behavioral tests for the presence of mind, thought, or artificial intelligence in supposed minded entities.

Natural Language Processing

Early AI's most dramatic failure was natural language processing. From the start, language seemed to offer a way in for AI research. Many theorists and philosophers had already speculated that language could be the basis for all thought. LISP (the second-oldest high-level programming language) was designed to permit the recursive relationships common to both linguistic and mathematical expressions. The intuition was that a machine that could manipulate language could also manipulate meaning. Ex) the "Time Flies Like an Arrow" poem demonstrates the difficulties of natural language processing. Such a sample is especially hard because it is deliberately ambiguous: "strong" AI would need to recognize the ambiguity, and understand that the ambiguity is deliberate and need not be resolved.
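
One way to see the ambiguity concretely: a toy grammar (sketched with the nltk library; the grammar itself is our illustrative assumption) yields two distinct parses for the sentence:

```python
import nltk  # assumes nltk is installed

# A deliberately tiny grammar in which the sentence is ambiguous:
# "flies" can be a noun or a verb, and "like" a verb or a preposition.
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> N | N N | Det N
VP -> V PP | V NP
PP -> P NP
Det -> 'an'
N -> 'time' | 'flies' | 'arrow'
V -> 'flies' | 'like'
P -> 'like'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("time flies like an arrow".split()):
    print(tree)  # two parses: time/N flies/V ..., and time-flies/N-N like/V ...
```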

Algorithms

Essentially a computational recipe that, given some inputs, eventually yields an output. Universal computation makes it possible to study algorithms in a general way, without worrying about what hardware runs them. Studying algorithms formally (that is, without reference to the real-world computing machine crunching the numbers) reveals interesting behaviors and limitations that apply in all cases, e.g. sorting a deck of cards. Without a vast store of semantic knowledge, algorithms had great difficulty with ambiguity.
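
For instance, sorting a hand of cards can be written as a recipe and analyzed without reference to any particular machine; a minimal insertion-sort sketch in Python:

```python
def insertion_sort(cards):
    """Sort a list in place, the way one might sort a hand of cards."""
    for i in range(1, len(cards)):
        card = cards[i]
        j = i - 1
        # Shift larger cards right until the slot for `card` opens up.
        while j >= 0 and cards[j] > card:
            cards[j + 1] = cards[j]
            j -= 1
        cards[j + 1] = card
    return cards

print(insertion_sort([7, 2, 9, 4, 1]))  # [1, 2, 4, 7, 9]
```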

Reinforcement Learning

Many problems posed by Big Data are difficult to solve at all. Algorithms that learn from experience and make decisions based on incomplete data are called reinforcement learning algorithms. Because of the rise of RL, computer scientists, neuroscientists, and psychologists are beginning to speak to one another using a common language. The first wave of artificial intelligence research discovered that "thinking" and "learning" are actually rather complicated. Reinforcement learning has thus come full circle to once again being useful for the study of the brain and mind.
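
A hedged sketch of the idea in Python: an epsilon-greedy "bandit" agent that learns which option pays off best purely from experience (the payout probabilities and parameter values are illustrative):

```python
import random

def epsilon_greedy_bandit(arm_probs, steps=10_000, epsilon=0.1):
    """Learn which arm pays off best from experience alone."""
    counts = [0] * len(arm_probs)    # how often each arm was pulled
    values = [0.0] * len(arm_probs)  # running average reward per arm

    for _ in range(steps):
        if random.random() < epsilon:             # explore a random arm
            arm = random.randrange(len(arm_probs))
        else:                                     # exploit the best guess so far
            arm = values.index(max(values))
        reward = 1.0 if random.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        # Incremental average update: decide from incomplete data as it arrives.
        values[arm] += (reward - values[arm]) / counts[arm]

    return values

print(epsilon_greedy_bandit([0.2, 0.5, 0.8]))  # estimates approach 0.2 / 0.5 / 0.8
```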

Semantic Content

The meaning behind an idea. Researchers almost immediately discovered that semantic content was extremely difficult to work with computationally because: 1) Trying to digitally encode all of the different meanings and implications of an idea grew unwieldy; it seemed as though computers lacked the memory to store all the different relations between words. 2) Trying to extract meaning from existing sentences proved equally difficult for algorithms. Without a vast store of semantic knowledge, algorithms had great difficulty with ambiguity.
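
A toy illustration of why encoding relations between words grows unwieldy: even a three-concept semantic network needs explicit inheritance and exception handling (the facts and names here are our own illustrative assumptions):

```python
# Each fact must be spelled out explicitly, and exceptions
# (penguins are birds, yet cannot fly) multiply the bookkeeping.
facts = {
    "animal": {},
    "bird": {"is_a": "animal", "can_fly": True},
    "penguin": {"is_a": "bird", "can_fly": False},
}

def lookup(concept, prop):
    """Walk up the is_a chain until the property is found."""
    while concept is not None:
        if prop in facts[concept]:
            return facts[concept][prop]
        concept = facts[concept].get("is_a")
    return None

print(lookup("bird", "can_fly"))     # True  -- inherited default
print(lookup("penguin", "can_fly"))  # False -- the exception overrides
```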

Information Theory

Second mathematical breakthrough for the disciplines of Computer Science and Artificial Intelligence (1948). A new branch of mathematics proposed by Claude Shannon, who developed a "Mathematical Theory of Communication" that systematized the transfer of information between systems. This framework provided a rigorous definition for measuring the information and entropy inherent in any signal. It also showed how you can translate any signal into another symbolic system, including analog-to-digital conversion. Encoding has an inherent weakness, however: transforming a signal from one medium to another can lose information.
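
Shannon's entropy measure can be sketched in a few lines of Python, computing H = -sum(p * log2(p)) over the symbol probabilities of a message (the helper name is ours):

```python
import math
from collections import Counter

def entropy_bits(message):
    """Shannon entropy of a message's symbol distribution, in bits per symbol."""
    counts = Counter(message)
    total = len(message)
    probs = [c / total for c in counts.values()]
    return sum(-p * math.log2(p) for p in probs)

print(entropy_bits("aaaa"))      # 0.0 -- a constant signal carries no information
print(entropy_bits("abab"))      # 1.0 -- one bit per symbol
print(entropy_bits("abcdabcd"))  # 2.0 -- two bits per symbol
```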

Machine Learning

Sole aim: to reasonably solve difficult problems of practical interest. The field does not concern itself with "simulating the mind." It recognizes that human minds aren't very good role models for some sorts of tasks. ML combines statistical inference with colossal amounts of data, e.g. machine translation driven by patterns across book editions. This can all be done without any semantic understanding of the content of the books, much less a model that resembles human semantics. The field of machine learning arose out of the desire to just solve problems as efficiently as possible, and it rose through the academic ranks to become the dominant paradigm, succeeding artificial intelligence. From Google ads to Amazon recommendations to NSA behavioral profiling, machine learning is the algorithmic side of Big Data.
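
A minimal sketch of statistics-without-semantics in Python: a bigram model that "learns" word patterns purely by counting, with no grasp of meaning (the corpus and names are illustrative):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count word-pair frequencies: statistical inference, no semantics."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)

# Predict the most likely word to follow "the", purely from counts.
print(model["the"].most_common(1))  # [('cat', 2)]
```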

Serial Processing

The CPU of a modern computer does one calculation at a time, but it does calculations very, very quickly. This approach is called serial processing: "Do calculations as a series of consecutive operations." This is not how the brain works. Advantages of serial processing: 1) Much simpler to implement and maintain the hardware. 2) Much easier to analyze the behavior of algorithms. 3) Writing software for serial systems is far more intuitive. 4) Naturally, serial processing is where computer science starts. Serial processing is very bad at "thinking like a parallel process."

