Artificial Intelligence and Neurocognition Review


What are degrees of freedom in robotics? Give an example.

A degree of freedom refers to a system's ability to move along a single independent direction of motion. The independent parameters that define a system's configuration are referred to as the robot's or the effector's degrees of freedom. For example, a drone has six degrees of freedom: three (x, y, z) for its location in space, and three for its angular orientation (yaw, roll, pitch).
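A minimal sketch of the drone example in Python (the class and field names are illustrative, not from the lecture):

```python
from dataclasses import dataclass

@dataclass
class DronePose:
    """Six degrees of freedom: three translational, three rotational."""
    x: float      # position in space
    y: float
    z: float
    yaw: float    # rotation about the vertical axis
    pitch: float  # nose up or down
    roll: float   # banking left or right

# Each field can be varied independently of the others, which is exactly
# what makes each one a separate degree of freedom.
hover = DronePose(x=0.0, y=0.0, z=10.0, yaw=0.0, pitch=0.0, roll=0.0)
```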

Explain the difference between feedforward and feedback motor control systems. When can you use which? Discuss a disadvantage of feedforward control systems.

A feedforward motor control system sends a signal from the (human or robotic) motor planning component to the relevant motor component using predetermined parameters, and executes the action. Information from the environment can be taken into account only before execution begins, which makes feedforward control suitable for predictable environments. In contrast, a feedback motor control system incorporates information from itself or the environment (feedback) more or less continuously to modulate the control signal. In this way, the system can dynamically alter its behavior in response to a changing environment. Feedback processes thus provide excellent accuracy, often at the cost of speed. Disadvantage of feedforward control: higher-level, goal-directed action planning, such as planning to make pancakes, would be impossible in a completely feedforward fashion, since it would require all motor parameters to be specified a priori, and thus exact knowledge of the position and properties (weight, friction coefficients, etc.) of all necessary equipment and ingredients. Feedforward or simple reactive control architectures therefore make for very brittle behavior: even complex, carefully crafted sequences of actions and reactions will appear clumsy if the environment suddenly presents an even slightly novel situation.
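To make the contrast concrete, here is a hedged sketch of both control schemes for a one-dimensional positioning task (the task, gain, and tolerance are assumptions for illustration):

```python
def feedforward_move(target, steps=10):
    """Open loop: all motor commands are computed up front from a model;
    nothing observed during execution can change them."""
    return [target / steps] * steps      # predetermined command sequence

def feedback_move(target, position=0.0, gain=0.5, tol=1e-3):
    """Closed loop: each command is recomputed from the observed error,
    so the system adapts if the environment perturbs it."""
    while abs(target - position) > tol:
        error = target - position        # feedback from the environment
        position += gain * error         # command proportional to error
    return position

print(round(sum(feedforward_move(1.0)), 3))  # reaches 1.0 only if the model is exact
print(round(feedback_move(1.0), 3))          # converges, at the cost of many iterations
```

Note how the feedback version needs repeated measurement during execution, which is where its speed cost comes from.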

What are the algorithms of both chaining directions?

Algorithmic form, forward chaining:
- Until no rule produces a new assertion or the animal is identified:
  - For each rule:
    - Try to support each of the antecedents by fact matching
    - If all antecedents are supported, assert the consequent, unless there is an identical assertion already

Algorithmic form, backward chaining:
- Until all hypotheses have been tried or the animal is identified:
  - For each hypothesis:
    - For each rule whose consequent matches the current hypothesis:
      - Try to support each of the antecedents by fact matching or by backward chaining through another rule
      - If all antecedents are supported, conclude the hypothesis to be true
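A runnable sketch of forward chaining in Python (the three rules are an illustrative fragment in the spirit of the Zookeeper system, not its actual rule base):

```python
RULES = [
    ({"has hair"}, "is a mammal"),
    ({"is a mammal", "chews cud"}, "is an ungulate"),
    ({"is an ungulate", "has long neck"}, "is a giraffe"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                # until no rule produces a new assertion
        changed = False
        for antecedents, consequent in RULES:
            # If all antecedents are supported by fact matching, assert the
            # consequent, unless an identical assertion already exists.
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

print(forward_chain({"has hair", "chews cud", "has long neck"}))
```

A backward-chaining counterpart appears with the Zookeeper walkthrough later in this set.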

What is an expert system and what are its components and their functions?

An inference engine is a tool from artificial intelligence. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate, as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.

Expert systems: MYCIN
• A system that emulates the decision-making ability of a human expert
• Example: MYCIN (Stanford, 1970s) was designed to diagnose and recommend treatment for certain blood infections
• Simple if-then rules with certainty factors

As new rules were acquired from the collaborating experts, it became apparent that MYCIN would need a small number of rules that departed from the strict modularity to which we had otherwise been able to adhere. For example, one expert indicated that he would tend to ask about the typical Pseudomonas-type skin lesions only if he already had reason to believe that the organism was a Pseudomonas. If the lesions were then said to be evident, however, his belief that the organism was a Pseudomonas would be increased even more. A rule reflecting this fact must somehow imply an ordering of rule invocation; i.e., "Don't try this rule until you have already traced the identity of the organism by using other rules in the system." Our solution has been to reference the clinical parameter early in the premise of the rule as well as in the action, for example:

RULE040
IF: 1) The site of the culture is blood, and
    2) The identity of the organism may be pseudomonas, and
    3) The patient has ecthyma gangrenosum skin lesions
THEN: There is strongly suggestive evidence (.8) that the identity of the organism is pseudomonas

Note that RULE040 is thus a member of both the LOOKAHEAD property and the UPDATED-BY property for the clinical parameter IDENT.

• MYCIN reached an accuracy of ~69%, which was better than doctors at Stanford Medical School
• However, it was never used in practice due to ethical and legal difficulties
• More on symbolic AI in lecture 2...
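The certainty factor (.8) in RULE040 is worth a closer look. Here is a hedged sketch of how MYCIN-style certainty factors accumulate; the combination formula CF = CF1 + CF2·(1 − CF1) is the standard MYCIN rule for two positive factors, but the example values are made up:

```python
def combine_cf(cf1, cf2):
    """Combine two positive certainty factors, MYCIN-style."""
    return cf1 + cf2 * (1 - cf1)

belief = 0.0
for rule_cf in (0.8, 0.4):    # e.g. RULE040 fires (.8), a second rule fires (.4)
    belief = combine_cf(belief, rule_cf)
print(round(belief, 2))       # 0.88: combined evidence is stronger than either rule
```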

What is an expert system and what are its components and their functions?

An expert system represents an expert's understanding of a subject. It consists of two main components: the knowledge base and the inference engine.
- Knowledge base, e.g. "breast_cancer is a disease"
- Inference engine, e.g. predicate logic
- Data, e.g. "Jane Smith has breast_lump"

Expert systems: knowledge base
• The knowledge base represents facts about the world and about the way in which concepts are related to each other, often using predicate logic
- Leukemia is a disease
- Chemotherapy is a treatment
- John Smith is a patient
• But also more complex relationships
- Leukemia is an abnormality of blood
- Anemia is a side effect of chemotherapy

Expert systems: inference engine
• The knowledge base with its symbolic descriptions is not enough; we need to be able to manipulate the symbols
• Not only propositional logic (if A then B, a.k.a. antecedent-consequent rules), but also predicate logic to connect symbols (A is a form of B)
- Patient X suffers from leukemia (fact 1)
- Leukemia is an abnormality of blood (fact 2)
- If a patient suffers from an abnormality of blood, then schedule a weekly blood test (rule 1)
- Patient X should be scheduled for a weekly blood test (deduced conclusion)

Expert systems: complex rules
1. If anemia is present then iron supplement is required
2. If anemia is present and spleen is enlarged then investigation for Hodgkin's disease is desirable
• If the patient's data satisfies rule 2, then rule 1 will also be satisfied
• So should we give iron supplements? Or check for Hodgkin's? We need more control over which rules to apply
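A toy sketch of the leukemia deduction above in Python (the fact and rule encodings are illustrative assumptions):

```python
facts = {
    ("patient_x", "suffers_from", "leukemia"),      # fact 1
    ("leukemia", "is_abnormality_of", "blood"),     # fact 2
}

def weekly_blood_test_needed(patient, facts):
    # Rule 1: if the patient suffers from an abnormality of blood,
    # then schedule a weekly blood test.
    for (p, relation, disease) in facts:
        if p == patient and relation == "suffers_from":
            if (disease, "is_abnormality_of", "blood") in facts:
                return True
    return False

print(weekly_blood_test_needed("patient_x", facts))  # True: deduced conclusion
```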

Explain the difference between symbolic AI and connectionist AI.

Approaches to AI:
- Symbolic AI: intelligent behavior through manipulation of symbols
• Symbolic AI (GOFAI) does not concern itself with neurophysiology
• Human thinking is seen as a kind of symbol manipulation, e.g. IF (A > B) AND (B > C) THEN (A > C)
• Intelligence is thought of as symbols and the relations between them
• Knowledge-based systems, or expert systems, were hugely successful
- Connectionist AI: representations in the brain are distributed, and processing is massively parallel
• Connectionist AI systems are large networks of extremely simple numerical processors, massively interconnected and running in parallel

What is the difference between depth-first search and breadth-first search? Which type should be used under what circumstances? Explain.

Basic search: depth-first search
• One way is to iteratively pick one of the children at every node visited and work forward
• Other alternatives are ignored as long as there is a chance of reaching the goal
• When a dead end is reached, the algorithm backtracks to the previous choice point
• This algorithm is known as depth-first search, or DFS

Basic search: breadth-first search
• Alternatively, we can check all paths of a given length before moving on to the next level
• This algorithm is known as breadth-first search, or BFS

Basic search: which algorithm to use?
• Deciding which algorithm to use depends on the problem set
• Depth-first search is preferred when you are confident that complete paths or dead ends are found after a reasonable number of steps
• Breadth-first search is preferred when you are working with very deep trees, but not when you think that all paths reach the goal at about the same depth (wasteful)
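A minimal sketch of both algorithms on a toy tree (the graph and node names are made up): the only difference is whether the newest path (a stack, LIFO) or the oldest path (a queue, FIFO) is extended first.

```python
from collections import deque

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"],
         "C": [], "D": [], "E": []}

def dfs(start, goal):
    stack = [[start]]                      # LIFO: always extend the newest path
    while stack:
        path = stack.pop()
        if path[-1] == goal:
            return path
        for child in graph[path[-1]]:      # dead ends simply add no children
            stack.append(path + [child])

def bfs(start, goal):
    queue = deque([[start]])               # FIFO: finish a level before descending
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for child in graph[path[-1]]:
            queue.append(path + [child])

print(dfs("S", "E"), bfs("S", "E"))
```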

How does a heuristically informed algorithm differ from its naive counterpart?

Basic search: what if you don't know much about the problem set?
• Another option is nondeterministic search: nodes are expanded at random
• This way, you won't get stuck chasing too many branches or levels

Basic search: hill climbing
• Luckily, with more information we can do better; algorithms that use such information are said to be heuristically informed
• For example, when we know the distance-to-target for each node, we can make use of that information
• Hill climbing is an algorithm that is similar to depth-first search, but orders the possible options based on heuristics

Basic search: beam search
• We can modify breadth-first search in the same manner: beam search
• Beam search performs breadth-first search, moving downward only through the best w nodes

Beam search uses breadth-first search to build its search tree. At each level of the tree, it generates all successors of the states at the current level, sorting them in increasing order of heuristic cost. However, it only stores a predetermined number of best states at each level (called the beam width). Only those states are expanded next. The greater the beam width, the fewer states are pruned. With an infinite beam width, no states are pruned and beam search is identical to breadth-first search.
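A hedged sketch of beam search over a toy tree (the graph and the distance-to-target heuristic are invented for illustration; lower heuristic cost is better):

```python
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"],
         "C": [], "D": [], "E": [], "F": []}
heuristic = {"S": 3, "A": 2, "B": 1, "C": 2, "D": 9, "E": 0, "F": 4}

def beam_search(start, goal, width=2):
    level = [[start]]
    while level:
        # Generate all successors of the paths at the current level...
        successors = [p + [c] for p in level for c in graph[p[-1]]]
        for path in successors:
            if path[-1] == goal:
                return path
        # ...then keep only the `width` best-looking ones (the beam).
        successors.sort(key=lambda p: heuristic[p[-1]])
        level = successors[:width]

print(beam_search("S", "E", width=2))   # ['S', 'B', 'E']
```

With `width` set large enough this degenerates into plain breadth-first search, exactly as described above.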

What are the similarities between connectionist AI architectures and the human brain?

Connectionist AI
• After the AI winter of the 70s, connectionism was revived
• Rumelhart & McClelland: the PDP research group
• Parallel distributed processing, connectionism, artificial neural networks

Connectionist AI
• Biologically inspired: connectionism is based on the structure of the human brain
• Lesion tolerant: lesioned or damaged networks can still process information
• Capable of generalization: ANNs are capable of learning, and are able to generalize rules to novel input

Connectionist AI: biologically inspired
• Relatively simple units
• Neurons receive input through dendrites and send output through the axon
• Highly connected (1000s of synapses on axon and dendrites)
• 20×10^9 neocortical neurons, 15×10^13 cortical synapses
• Computation is massively parallel

Connectionist AI: principles
• Mental states are represented as N-dimensional vectors of numeric activation values over neural network units
• Memory is created by modifying the connection strength (weight) between units

Connectionist AI
• Solves complex, non-linear or chaotic classification problems
• No a priori assumption about problem space or statistical distribution
• Artificial neural networks can compute any computable function (remember McCulloch & Pitts!)
• Pattern recognition
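A minimal numeric sketch of these principles (sizes, weights, and the sigmoid squashing function are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random(8)           # mental state: a vector over 8 simple units
weights = rng.random((8, 8)) - 0.5    # connection strengths, i.e. the "memory"

def step(state, W):
    # Every unit sums its weighted inputs in parallel,
    # then squashes the result through a nonlinearity.
    return 1 / (1 + np.exp(-(W @ state)))

print(step(activations, weights))     # the next N-dimensional activation vector
```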

What does it mean for human memory to be content-addressable?

Data in a connectionist system is stored in what is called content-addressable memory (Bechtel & Abrahamsen, 2002). In content-addressable memory, information is not retrieved by knowing a (content-less) address, but instead by using some of the content as a cue to retrieve the remainder of the information. For instance, using a single fact as a cue can retrieve other related facts (i.e., activities of units), such that a whole pattern of information is reconstructed. This type of memory has the advantage of allowing greater flexibility of recall and is more robust. This distributed memory is able to work its way around errors by reconstructing information that may have been lesioned from the system. Such memory systems can also be used to model many of the characteristics of human memory (e.g. Hinton & Anderson, 1981).
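A minimal Hopfield-style sketch of content-addressable recall (a standard textbook construction, simplified here to a single stored pattern):

```python
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)   # Hebbian storage in the weights
np.fill_diagonal(W, 0)                         # no self-connections

cue = pattern.copy()
cue[:3] = -cue[:3]                 # corrupt part of the content ("lesion" it)

state = cue
for _ in range(5):                 # let the network settle
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))   # True: the rest was reconstructed
```

Retrieval here starts from content (the partial cue), not from an address, and the damaged units are filled back in, which is exactly the robustness the text describes.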

When should you use forward chaining, and when backward chaining?

Expert systems: chaining directions
• So how do we decide which chaining direction to use? Many systems can be implemented in both directions
• Backward chaining is helpful in cases where not all facts are known yet
- When investigating the carnivore hypothesis, questions about teeth are asked, but irrelevant questions such as color are skipped
• Forward chaining is helpful when you want to know everything you can from a set of facts
- When seeing an animal quickly, you want to know what you can deduce from your observation

Explain how a conclusion is deduced in an expert system using forward and backward chaining. What are the algorithms of both chaining directions?

Expert systems: controlling knowledge use
• There are three main ways to control the use of knowledge when solving problems:
- Forward chaining
- Backward chaining
- Control knowledge

Example expert system: Zookeeper
• Let us imagine a robot that can perceive basic features like color and size, but has no initial knowledge of animal species
• Its creators want to equip it with an expert system so the robot can classify animals
• Zookeeper allows us to determine an animal species based on observed characteristics
• It consists of 15 rules, represented in its knowledge base

Expert systems: forward chaining
• Works sequentially from given statements to a deduced conclusion
• Algorithmic form:
- Until no rule produces a new assertion or the animal is identified:
  - For each rule: try to support each of the antecedents by fact matching; if all antecedents are supported, assert the consequent unless there is an identical assertion already
• Our robot encounters a new animal, Scruffy
• Scruffy has hair, chews cud, has long legs, a long neck, a tawny color, and dark spots
• Because Scruffy has hair, rule 1 fires. Now we know that Scruffy is a mammal
• Because Scruffy is a mammal and chews cud, rule 8 fires: Scruffy is an ungulate
• All antecedents for rule 11 are satisfied: Scruffy is a giraffe!

Expert systems: backward chaining
• The inference engine starts at the opposite end of the logical process
• It starts by forming a hypothesis and using if-then rules to work backward toward hypothesis-supporting assertions
• Algorithmic form:
- Until all hypotheses have been tried or the animal is identified:
  - For each hypothesis, and for each rule whose consequent matches the current hypothesis: try to support each of the antecedents by fact matching or by backward chaining through another rule; if all antecedents are supported, conclude the hypothesis to be true
• Our robot encounters a new animal, Swifty, and forms the hypothesis that Swifty is a cheetah
• Rule 9 requires that Swifty is a carnivore, has a tawny color, and dark spots
• Our robot needs to check if Swifty is a carnivore. Rules 5 and 6 can be used; assume rule 5 is evaluated first
• It then needs to check if Swifty is a mammal; rules 1 and 2 apply. Assume rule 1 is evaluated first
• Rule 1 requires Swifty to have hair in order to be a mammal. The robot checks, and this turns out to be the case. Swifty is a mammal, so the robot can go back to evaluating rule 5
• The robot must check whether Swifty eats meat. Swifty isn't eating right now, so rule 5 must be abandoned
• The robot then uses rule 6 to determine if Swifty is a carnivore. According to rule 6, Swifty should be a mammal; the robot already established that when checking rule 5's antecedents
• It then must check if Swifty has pointed teeth, claws, and forward-pointing eyes. The robot checks, and Swifty has all these features. Swifty is a carnivore
• Finally it returns to rule 9. Swifty has a tawny color and dark spots. Therefore, the consequent is affirmed and the hypothesis accepted: Swifty is a cheetah!
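A hedged sketch of this backward-chaining process over a fragment of the Zookeeper rules (the rule encodings are reconstructions for illustration, keyed to the rule numbers used above):

```python
RULES = {
    "is a mammal": [{"has hair"}],                               # rule 1
    "is a carnivore": [
        {"is a mammal", "eats meat"},                            # rule 5
        {"is a mammal", "has pointed teeth", "has claws",
         "has forward-pointing eyes"},                           # rule 6
    ],
    "is a cheetah": [
        {"is a carnivore", "has tawny color", "has dark spots"}, # rule 9
    ],
}

def backward_chain(hypothesis, observed):
    if hypothesis in observed:                   # supported by fact matching
        return True
    for antecedents in RULES.get(hypothesis, []):
        # Try to support every antecedent, recursing through other rules.
        if all(backward_chain(a, observed) for a in antecedents):
            return True
    return False                                 # abandon this hypothesis

swifty = {"has hair", "has pointed teeth", "has claws",
          "has forward-pointing eyes", "has tawny color", "has dark spots"}
print(backward_chain("is a cheetah", swifty))    # True
```

Note how rule 5 fails ("eats meat" is not observed) and the search falls through to rule 6, mirroring the Swifty walkthrough.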

How does the behavior of two Braitenberg Vehicles differ between straight vs. crossed excitatory connections? Describe their behavior exhaustively.

Fear and aggression
• The position of the sensors and the nature of their connections to the motors determine the Vehicle's behavior
• A Vehicle with straight excitatory connections turns away from the signal source: the sensor nearer the source excites the motor on the same side more strongly, so the Vehicle veers away, speeding up as it nears the source and slowing down as it gets farther away ("fear")
• A Vehicle with crossed excitatory connections turns toward the source and approaches it increasingly fast, since excitation grows as the distance shrinks ("aggression")
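A hedged simulation sketch of the two wiring schemes (the excitation function and distances are simplifying assumptions; each wheel's speed equals its connected sensor's excitation):

```python
def wheel_speeds(left_dist, right_dist, crossed):
    excite = lambda d: 1.0 / (1.0 + d)   # nearer source -> stronger excitation
    left_e, right_e = excite(left_dist), excite(right_dist)
    if crossed:
        return right_e, left_e           # crossed: each sensor drives the far wheel
    return left_e, right_e               # straight: each sensor drives the near wheel

# Source is nearer the LEFT sensor:
l, r = wheel_speeds(1.0, 2.0, crossed=False)
print(l > r)   # True: left wheel faster, so it turns right, AWAY from the source
l, r = wheel_speeds(1.0, 2.0, crossed=True)
print(l < r)   # True: right wheel faster, so it turns left, TOWARD the source
```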

What happens to these two vehicles if we make the connections inhibitory?

Love
• Adding inhibitory connections allows for more interesting behavior
• A Vehicle with straight inhibitory connections will orient itself toward the stimulus and race toward it, coming to a stop in front of it (the closer it gets, the stronger the inhibition and the slower it moves)
• A Vehicle with crossed inhibitory connections will orient itself away from the stimulus and speed up as it gets farther away

What was Turing's imitation game about and how does this relate to intelligent systems?

Non-sentient AI
• Turing's imitation game: a machine is intelligent if we cannot distinguish it from a human in conversation
• It makes no claims about the underlying mechanisms

The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Alan Turing proposed that a human evaluator would judge natural language conversations between a human and a machine that is designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen, so that the result would not be dependent on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human (Turing originally predicted that after five minutes of questioning an average interrogator would have no more than a 70% chance of making the right identification), the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

The A* algorithm combines three techniques that were discussed in the lectures. Name these three techniques, and how they contribute to efficient path finding.

Optimal search: A* procedure
• The A* procedure combines all of our discussed techniques:
- branch-and-bound
- distance estimates
- redundant path removal
• It guarantees finding an optimal result efficiently

In optimal search, the branch-and-bound algorithm works by discarding some branches. Explain which branches are discarded by the algorithm and why.

Optimal search: branch-and-bound with more information
• We can do better by informing branch-and-bound about the estimated distance left
• After all, e(total path length) = d(already traveled) + e(distance remaining)
• We then use accumulated distances + estimates to decide which nodes to expand

Optimal search: redundant paths
• We can shorten processing times even further by discarding redundant paths
• It should be clear that extending S-A-D is never a good idea, because D can be reached far more quickly via S-D
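Putting the techniques together, here is a hedged A* sketch (graph, costs, and the estimate h are invented; h never overestimates the true remaining distance, and the expanded set discards redundant continuations such as S-A-D):

```python
import heapq

graph = {"S": {"A": 2, "D": 6}, "A": {"D": 9, "B": 2}, "D": {"E": 2},
         "B": {"E": 3, "G": 6}, "E": {"G": 2}, "G": {}}
h = {"S": 6, "A": 5, "D": 3, "B": 4, "E": 2, "G": 0}   # distance estimates

def a_star(start, goal):
    frontier = [(h[start], 0, [start])]   # ordered by d(traveled) + e(remaining)
    expanded = set()                      # for redundant path removal
    while frontier:
        f, g, path = heapq.heappop(frontier)   # branch-and-bound: cheapest first
        node = path[-1]
        if node == goal:
            return path, g
        if node in expanded:
            continue            # a cheaper path already reached this node
        expanded.add(node)
        for child, cost in graph[node].items():
            heapq.heappush(frontier,
                           (g + cost + h[child], g + cost, path + [child]))

print(a_star("S", "G"))   # (['S', 'A', 'B', 'E', 'G'], 9)
```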

Name two robot learning techniques and explain them.

Robot learning techniques
• Motor babbling (Hebbian learning)
• Reinforcement learning

Motor babbling in humans
• There are interesting parallels between robots and humans: how do babies learn how to move?
• Initially there is random movement (M)
• These movements produce changes in body pose and the environment (registered by K)
• Frequent co-activation of M and K will lead to bidirectional associations

Motor babbling in robots
• Using motor babbling, a robot can develop an internal model of its body and its environment
• A related technique (imitation learning) replaces random movements with human-guided movement
• The robot can then parameterize the incoming kinesthetic information to build a model of the required movement

Reinforcement learning
• "Applying a reward immediately after the occurrence of a response increases its probability of reoccurring, while providing punishment after the response will decrease the probability." —Edward Thorndike
• We need to send a reinforcement signal to the control system
• Often this signal is binary in the form of pass/fail, but more complex numeric evaluation is also possible
• Learning (connectionist) neural networks use a form of reinforcement learning, but will be discussed separately in depth in the coming lectures
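A hedged sketch of motor babbling with a Hebbian update (the reading of K as kinesthetic feedback, the linear body model, and one-joint-at-a-time babbling are all simplifying assumptions of this sketch, not claims about the lecture's model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_motor = n_kin = 4
body = rng.random((n_kin, n_motor))   # unknown mapping: commands -> feedback
W = np.zeros((n_kin, n_motor))        # learned M-K associations (internal model)

for _ in range(400):                  # babbling phase
    m = np.zeros(n_motor)
    m[rng.integers(n_motor)] = 1.0    # random movement command (M)
    k = body @ m                      # resulting pose change, registered by K
    W += 0.01 * np.outer(k, m)        # Hebbian update: co-active pairs associate

# Column-normalized, the learned associations recover the body map, so the
# robot can predict the feedback a planned command would produce.
print(np.allclose(W / W.sum(axis=0), body / body.sum(axis=0)))   # True
```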

Explain why robots need sensors and effectors, and include their definitions.

Robots: sensors
• Sensors allow robots to perceive the environment
• Commonly found sensors are sonar sensors, shaft decoders, and cameras
• Humanoids tend to have force and torque sensors in their effectors

Robots: effectors
• Effectors are the means by which robots move and change body configuration
• The independent parameters that define a system's configuration are referred to as the robot's or the effector's degrees of freedom
• For example, a drone has six degrees of freedom: three (x, y, z) for its location in space, and three for its angular orientation (yaw, roll, pitch)
• With a 1-DOF end effector, it is possible to grab objects
• With two arms with such end effectors, a robot could open a bottle with a twist cap
• In contrast, the human hand alone has 27 DOF; obviously, it is often not necessary to have that many DOFs

How does Searle's Chinese room argument relate to Turing's imitation game? What conclusions would Searle draw out of a computer passing the imitation game?

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that he or she is talking to another Chinese-speaking human being. The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually. Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step by step, producing behavior which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking", and since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes that "strong AI" is false. The experiment aims to show that computers cannot process information or think like human beings. Human thoughts are about things, and therefore have semantic contents; computers only process syntax (formal, grammatical structure). Searle argued that syntax does not provide semantics for free, concluding that computers cannot think like humans do. The experiment is supposed to show that only weak AI is plausible: a machine running a program is at most capable of simulating real human behavior and consciousness. Thus, computers can act 'as if' they were intelligent, but can never be truly intelligent in the same way human beings are.

Weak AI: Chinese Room argument
• I am situated in a room containing only a large book and a door under which pieces of paper can be passed
• Chinese people outside the room can ask me questions by writing them down and passing the pieces of paper under the door
• The large book contains every possible question-answer mapping, so I can answer (in Chinese!) all questions correctly
• Can computers really understand?

Weak AI
• Rule-based manipulation of symbols does not constitute intelligence: the inhabitant of the Chinese room does not understand Chinese
• Chinese room criticism: AI really is "the ongoing research program of showing Searle's Chinese Room Argument to be false" (Hayes)
• No matter how intelligent machine behavior may seem, it does not reflect true intelligence or sentience

What happens when we implement a threshold function?

Special tastes
• Thus far, we have only examined Vehicles with linear activation functions between sensors and motors
• Nonlinear connections allow for other interesting behavior
• Vehicles with threshold functions appear to make decisions
• And where decisions are made, must there not be a will to make them?
• Vehicles with such nonlinear activation functions might show alternating behavior between two stimuli, orbit a stimulus, or follow more complex paths
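A tiny sketch of the difference (the threshold value is an arbitrary assumption): a linear connection makes motor output track sensor excitation smoothly, while a threshold function makes it jump all-or-nothing, which is what looks like a decision.

```python
def linear(excitation):
    return excitation                          # speed varies smoothly with input

def threshold(excitation, theta=0.5):
    return 1.0 if excitation > theta else 0.0  # all-or-nothing "decision"

for e in (0.2, 0.4, 0.6, 0.8):
    print(e, linear(e), threshold(e))          # output flips only past theta
```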

Explain the difference between weak AI and strong AI. What are their assumptions and what does that mean for AI?

Strong AI
• Strong thesis: computers can be programmed to think, and human minds are computers that have been programmed by 'biological hardwiring' and experience
• Strong AI: a correctly written program running on a machine actually is a mind. "Mind is to the brain as the program is to the computer hardware (...) On this view, any physical system whatever that had the right program with the right inputs and outputs would have a mind in exactly the same sense as you and I have minds" (Searle 1984, p. 28)
• Suggestion: computers could be intelligent and have a conscious mind just as ours. They should be able to reason, solve puzzles, make judgments, plan, learn, communicate, etc.
• "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds"
• Strong AI proponents believe that intelligent systems can actually think
• Most people believe that strong AI should have a connectionist architecture (more on this later)

Strong AI: problems
• Can machines think? Compare: can machines fly? Can machines swim?
• In other words: are we asking the right questions?
• Strong AI assumes that the human mind is an information processing system, and that thinking is a form of computing
• The mind as an information processor is one of the basic tenets of cognitive psychology

Weak AI
• A machine running a program is at most only capable of simulating real human behavior and consciousness
• Machines can act 'as if' they were intelligent
• Syntax does not give semantics on its own

A Sudoku is a puzzle in which the objective is to complete a 9×9 grid with digits, so that each column, each row, and each of the nine 3×3 sub-grids contains all digits from 1 to 9. Explain the backtracking algorithm and give an algorithmic implementation of backtracking in solving a Sudoku.

Sudoku solving: backtracking
• Backtracking works by iteratively generating possible candidate solutions
• When it determines that a candidate solution cannot lead to a complete solution, it abandons that candidate
• It then goes back (backtracks) to the previous choice point and creates a new candidate solution
• Algorithmic form:
- Until the grid is completed:
  - For each empty cell:
    - For each digit in 1 to 9: if it is a valid candidate, enter the digit and go to the next cell; otherwise go back to the previous cell and try the next valid digit
• Assuming a solution exists (and for good Sudoku puzzles it should), the backtracking algorithm guarantees a valid result
• For an average Sudoku, backtracking takes about 38,000 iterations, whereas naive brute force takes 1.09 × 10^50 iterations
• So backtracking is a powerful alternative to brute force
• Backtracking is also used as a technique in other algorithms, as we will see
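A runnable sketch of this algorithm, where recursion plays the role of "go back to the previous cell" (`grid` is assumed to be a 9×9 list of lists with 0 marking empty cells); calling solve(grid) fills the grid in place and returns True when a solution is found:

```python
def valid(grid, r, c, d):
    """Check row, column, and 3x3 sub-grid constraints for digit d."""
    if d in grid[r] or any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)        # top-left of the sub-grid
    return all(grid[br + i][bc + j] != d
               for i in range(3) for j in range(3))

def solve(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:                # next empty cell
                for d in range(1, 10):         # try each candidate digit
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):        # move on to the next cell
                            return True
                        grid[r][c] = 0         # undo: this choice led nowhere
                return False                   # no digit fits: backtrack
    return True                                # no empty cells left: solved
```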

Explain the criticism on symbolic AI. Give arguments against and for.

Symbolic AI criticism
• Why do we need symbolic representation? It doesn't seem necessary for many behaviors
• Brooks: "The need for traditional symbolic representations soon fades entirely. The key observation is that the world is its own best model. It is always exactly up to date. It always contains every detail there is to be known. The trick is to sense it appropriately and often enough." (robotics, lecture 3)
• It is unclear how processes like pattern recognition would work in a purely symbolic way
• Representations dealing with noisy input are needed

Put succinctly, the frame problem in its narrow, technical form is this (McCarthy & Hayes 1969): using mathematical logic, how is it possible to write formulae that describe the effects of actions without having to write a large number of accompanying formulae that describe the mundane, obvious non-effects of those actions?

Explain how we can make such a Vehicle respond only to certain sequences of input values.

Vehicle 3c: a system of values
• Not just one pair of sensors but four pairs, tuned to different qualities of the environment, say light, temperature, oxygen concentration, and amount of organic matter
• Expected behavior, given appropriate connections: "this is a vehicle with really interesting behavior. It dislikes high temperature, turns away from hot places, and at the same time seems to dislike light bulbs with even greater passion, since it turns towards them and destroys them... You cannot help admitting that Vehicle 3c has a system of VALUES, and, come to think of it, KNOWLEDGE"
• We can make such a Vehicle respond only to certain sequences of input values by building networks of neurons with excitatory and inhibitory connections between them
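One hedged sketch of that idea (the delay line and threshold are assumptions of this sketch): a unit that receives the previous input A through a one-step delayed excitatory connection and the current input B fires only when A is followed by B.

```python
def sequence_detector(inputs, threshold=1.5):
    """inputs: list of (a, b) sensor readings per time step, each 0 or 1.
    The output unit sums a delayed excitatory copy of A and the current B."""
    prev_a = 0
    outputs = []
    for a, b in inputs:
        activation = prev_a + b                   # delayed A + current B
        outputs.append(1 if activation >= threshold else 0)
        prev_a = a                                # the one-step delay line
    return outputs

# Fires only at step 2, where A (step 1) is followed by B (step 2):
print(sequence_detector([(1, 0), (0, 1), (0, 1), (1, 1)]))   # [0, 1, 0, 0]
```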

