LIN 463 EXAM 1


Fodor's LOT hypothesis

-Mentalese: manipulation of symbolic representations; in other words, a representation of what is in your head without actual words (our thoughts have syntax)
-Operations on these representations preserve semantics (meaning): things happen as they should
-Physical structures in the brain carry information and serve as representations of the environment
-Representations are transformed into more complex representations and psychological states that ultimately produce a behavioral response
-The mind receives information about the environment (like light on the retina), but behavior is not completely determined by this information because people respond differently to the same stimulus

Problem solving as search

-Generating all the possibilities, searching through all the possible solutions, and figuring out the best one
-Specifying a problem requires:
1. Description of the current situation
2. Possible operations
3. Goal situation
4. Means of evaluation
-Problem solving can't be brute-force search; search must be selective
-Brute-force search: try every possible option, systematically enumerating all possible candidates for the solution. May work for some problems, but not for huge or non-finite problem spaces, because it would take an extremely long time
-Instead, must use heuristics (rules of thumb)
-Problem space: branching tree of achievable situations defined by potential application of operators to the initial situation
-Brains are not that fast
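A minimal sketch (my own toy example, not from the course) of problem solving as search: the current situation, the possible operations, and a goal test define a problem space, and breadth-first search walks its branching tree until it reaches the goal.

```python
from collections import deque

def solve(start, goal_test, operators):
    """Breadth-first search over a problem space."""
    frontier = deque([(start, [start])])      # (situation, path so far)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):                  # means of evaluation
            return path                       # sequence of situations to the goal
        for nxt in operators(state):          # apply each possible operation
            if nxt not in visited:            # don't revisit situations
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                               # space exhausted, no solution

# Toy problem: reach 10 from 1 using the operators "+1" and "*2".
print(solve(1, lambda s: s == 10, lambda s: [s + 1, s * 2]))
```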

Rodney Brooks's Physical Grounding Hypothesis as alternative to PSSH

-Physical Grounding Hypothesis: to build an intelligent system, it is necessary to have representations that are grounded in the world, rather than 'feeding' the system symbols
-The world is its own best model: it has every detail that is necessary
-Requires building systems in a 'bottom-up' fashion (can think of adding layers that build on each other)
-We should not understand minds from a top-down perspective → start with tiny pieces and move up; start with the environment, not symbols
PSSH critique:
-Relies on a symbolic description of the world, but what is the correct description? It must be context/task dependent
-Manipulation of complex symbol structures is not biologically plausible
-Unclear how symbols acquire meaning: the symbol grounding problem
-Must state (symbolically represent) all changes to the world, and all non-changes, in response to every action: the frame problem

massive modularity hypothesis

-No domain-general (central) processing
-All information processing is carried out by specialized sub-systems
-Unlike Fodorean modules, Darwinian modules are NOT informationally encapsulated
-They are domain-specific and perhaps cognitively impenetrable
-Yes, they are modules, but with different characteristics from Fodor's modules; evolutionarily driven
-General argument for massive modularity: no domain-general processing system of the type Fodor conceives could have been selected, because no such system could have solved the type of adaptive problems that fixed the evolution of the mind

Examples of Early AI systems

-The Logic Theorist: a computer program that was able to do mathematical proofs; treated as reasoning the way a human would
-The General Problem Solver (GPS)
-LISP (List Processing)
-Shakey: a robot programmed to move around a particular environment
-ELIZA: a computer psychotherapist ("how are you?")

intertheoretic reduction

-Unified theory of everything: cognitive science is nowhere close to this
-A response to the integration challenge
-Integration challenge: the difficulty of unifying the different domains that make up cognitive science
-Scientific theories can be reduced to more fundamental principles; this is an example of bottom-up analysis. Ex: the cell is the building block of all living organisms in biology

Candidate examples of modules (e.g., face perception, music perception, etc.)

-Color perception
-Shape analysis
-Analysis of 3-D spatial relations
-Visual guidance of bodily motions
-Speech perception: grammatical analysis
-Face recognition
-Music recognition: detecting melodic or rhythmic structures of acoustic arrays

The three levels of Marr's levels of analysis are:

-Computational
-Algorithmic
-Implementation

Universal Turing Machine

-Computes every effectively calculable number-theoretic function
-Mimics any other Turing machine: can run the machine table of any Turing machine
-Model of how information processing can take place: a purely algorithmic process; even though cells on the tape carry information, the information can be manipulated and transformed in a purely mechanical way
-Modern computers are like this (UTMs with finite tapes); the UTM implies anything computable can be represented in a Turing machine
-Machine table: program specifying the appropriate output for a given input (algorithmic instructions); see the sketch below
-Turing was against behaviorism
-Foundation for theories of modern computation
-"Can machines think?": Turing devised the Turing test (imitation game). If the interrogator cannot reliably tell the difference between the human and the computer (Turing predicted interrogators would make the right identification no more than 70% of the time), then that machine can think
-Church-Turing thesis: a function on natural numbers can be effectively calculated if and only if it can be calculated by a Turing machine. Has not been proven, but is universally accepted by logicians and computer scientists. Anything that can be done in mathematics by an algorithm can be done by a Turing machine.
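A minimal sketch (a hypothetical one-state machine; the dict-based table format is my own) of a machine table driving a Turing machine: each (state, symbol) entry says what to write, which way to move, and which state to enter next.

```python
def run_turing_machine(table, tape, state, blank="_", max_steps=1000):
    """Run a Turing machine from a machine table.

    table -- dict mapping (state, symbol) -> (write, move, next_state),
             where move is -1 (left) or +1 (right)
    """
    cells = dict(enumerate(tape))             # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = table[(state, cells.get(pos, blank))]
        cells[pos] = write
        pos += move
    return "".join(cells[i] for i in sorted(cells))

# Machine table for a one-state machine: flip 0s and 1s, halt at a blank.
table = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", +1, "halt"),
}
print(run_turing_machine(table, "0110", "flip"))  # -> 1001_
```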

multilayer networks

-Contains multiple layers of units (input layer, hidden layers, output layer)
-Each unit in one layer is connected to each unit in the following layer, and each connection has a weight
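A minimal sketch (made-up weights; sigmoid activation assumed) of a forward pass through a multilayer network: each layer takes a weighted sum of the previous layer's activations and passes it through an activation function.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One fully connected layer: weighted sum per unit, then sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Hypothetical 2-input -> 2-hidden -> 1-output network.
hidden_weights = [[0.5, -0.4], [0.9, 0.1]]   # one row of weights per hidden unit
output_weights = [[1.2, -0.8]]               # one row per output unit

x = [1.0, 0.0]                        # input layer activations
hidden = layer(x, hidden_weights)     # hidden layer activations
print(layer(hidden, output_weights))  # output layer activation
```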

Context effects

-Differences in pronouns cause problems for AIs not programmed with multiple names that are associated with each other (Watson's struggle with pronouns on Jeopardy!)
-Words/symbols can have multiple representations based on context
-Perceptual information can't simply be programmed into machines because it is subjective and changes

Harnad and the symbol grounding problem

-Difficult problem because it is easy to start defining the solution in terms of other symbols, which leads to an infinite regress. Ex: words become meaningful because in thinking about them we attach meaning to them, but if thoughts are themselves symbols, then how do we attach meaning to thoughts?

Three characteristics of Fodor's modules include:

-Domain-specific
-Informationally encapsulated
-Mandatory
-Fast
-Cognitively impenetrable

Ebbinghaus

-Estimated the forgetting curve
-Memorized nonsense syllables and measured how long it took to memorize/forget them
-The more time elapsed since learning, the more people forget

Alan Turing (1912-1954)

-Father of computer science and the field of artificial intelligence
-Formalized the concept of an algorithm with the Turing Machine (TM)
-The TM is the basis for the modern computer

Network models of past tense learning

-First network: created by Rumelhart and McClelland using pattern association. Actually a combination of 3 networks: the first takes the phonological (verbal) representation of the root form and converts it into a mental representation of the root form; the second is where the learning takes place and the root is converted into the past-tense form; the third translates the mental representation back into phonemes (speech sounds). The network was initially trained on 10 common verbs, then 410 less common ones. This network reproduced the over-regularization phenomenon.
-Second network: created by Plunkett and Marchman. They critiqued Rumelhart's network by saying the over-regularization effect was built into the network because the training set was expanded so much after the first round of learning. They produced a network with hidden units in a single hidden layer, which removes the need to translate back and forth between phonological representations. The training schedule included similar numbers of words per set, half regular and half irregular. This network also showed over-regularization errors.

Use of heuristics to prune search space in problem solving

-Have to close off some branches entirely because they don't make sense
-Can't use brute-force search
-General rules of thumb that allow the computer program to be successful
-Makes the search space smaller

NETtalk

-Connectionist network that learns to pronounce written text; early in training its output sounds like babbling, and as the weights are adjusted the babbling turns into words
-Boolean function

Problems of intentionality

-How do words and thoughts (symbols) connect up with (refer to) objects in (and properties of) the world?
-Appeal to linguistic behavior: words refer to objects because that's how people use them in a language. Ex: a certain Chinese symbol refers to a table because that's how it's used by people who speak Chinese

Types of input in activation function

-Linear function: the more input, the stronger the output (ex. /)
-Threshold linear: input must reach a threshold, then the strength of the output increases (ex. __/)
-Binary threshold: either fires or does not fire ("on/off switch") (ex. _|---)
-Sigmoid function: fires weakly below the threshold, then output increases until it reaches a maximum firing rate (ex. S)
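Minimal sketches of the four input-output shapes (the slope and threshold values here are my own choices, not from the course):

```python
import math

def linear(x, slope=1.0):
    return slope * x                      # more input, stronger output (/)

def threshold_linear(x, theta=1.0):
    return max(0.0, x - theta)            # silent below theta, then rises (__/)

def binary_threshold(x, theta=1.0):
    return 1.0 if x >= theta else 0.0     # on/off switch (_|---)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))     # S-shaped, saturates at a maximum rate

for x in (-2.0, 0.0, 0.5, 2.0):
    print(x, linear(x), threshold_linear(x), binary_threshold(x), sigmoid(x))
```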

Single layer network

-Mapping function: connects units from one set with exactly one unit in the other set, and units are not connected within sets
-Uses binary Boolean functions (i.e., takes pairs of truth values as inputs, then translates them into single truth-value outputs)
1. Has AND, OR, XOR, and NOT functions
A.) AND = requires 2 TRUEs in order to be TRUE
B.) OR = only requires 1 TRUE in order to be TRUE
C.) XOR = assigns FALSE to 2 TRUEs (can't be represented by a single-layer network)
D.) NOT = NOT A is TRUE if A is FALSE; NOT A is FALSE if A is TRUE
-Weights are set on each unit's connections, and these determine whether the unit reaches the threshold implementing the Boolean function (see the sketch below)
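A minimal sketch (hypothetical weights and thresholds) of single-layer binary-threshold units computing Boolean functions; no weight setting for a single such unit computes XOR, which is the classic single-layer limitation.

```python
def unit(weights, threshold):
    """Binary-threshold unit: fires (1) iff the weighted input sum reaches threshold."""
    return lambda *inputs: int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = unit([1, 1], 2)    # needs both inputs TRUE
OR  = unit([1, 1], 1)    # needs at least one input TRUE
NOT = unit([-1], 0)      # fires exactly when its input is FALSE

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT:", NOT(0), NOT(1))
# XOR needs a hidden layer, i.e., a multilayer network.
```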

Fodor's modularity hypothesis

-Minds consist of autonomous sub-systems (modules)
-Horizontal vs. vertical organization:
-Horizontal organization: mind organized in terms of general cognitive abilities/capacities: a.) perception, b.) memory, c.) attention
-Vertical organization: mind organized into specialized peripheral systems plus central systems in the middle: a.) perceptual systems, b.) central processing, c.) motor systems

Basics of how connectionist models work

-Model activity of populations of neurons
-Artificial neuron:
1. Input, weight attached to input, threshold of the neuron, total input to the neuron, output signal (faster than PSSH)
2. A neuron will transmit a signal if its total weighted input exceeds the designated threshold
3. Activation function: function that determines the output signal based on the total input

concept of modules vs. central processing

-Modules are autonomous sub-systems of the brain
-These modules serve as inputs to central processing
-There are corresponding output modules

Marr's level of analysis approach to the integration challenge

-Not doing intertheoretic reduction
-Distinguished 3 levels of explanation for an information-processing task and gave a general theoretical framework for combining them
-Classic example of top-down analysis:
1. Top-down because of underdetermination: many different algorithms can, in principle, solve the same problem, and there are many ways of implementing a given algorithm
2. Multiple realizability → more informative to work at higher levels
3. Relatively little implementation detail in Marr

Searle's Chinese Room

-Objection to taking the Turing test to be a criterion of intelligence
-Searle argued that a person following rules for manipulating Chinese symbols could produce correct output without understanding it; this lack of understanding was used as a critique of PSSH
-Difference between intelligent behavior and actual intelligence
-Syntactic manipulation of Chinese, not semantic understanding (i.e., no grasp of meaning or definitions)
-Basically, Chinese characters come in as input and the room outputs Chinese responses by following a rule book; the person inside the room is not a Chinese speaker, yet the outputs look correct to Chinese speakers outside
1. Critique of strong AI as too optimistic
2. Brings up the symbol grounding problem: how do symbols get meaning?

Turing Test

-Operational definition of what it would mean for a machine to think
-Imitation game: originally between a man and a woman; the interrogator's challenge is to find out which is which through their answers. The Turing test replaces one of them with a computer (now it's human vs. computer)
-Machine is intelligent if a person talking to it thinks they're talking to a person, not a machine
-Places no constraints on how the machine manages to pass the test (what happens between input and output is completely irrelevant)
-No machine has passed it yet

Three advantages that connectionist models have over symbolic AI models are:

-Parallel Processing
-Graceful Degradation
-Learning

Examples of Connectionist Models (e.g., application to past tense learning).

-Past tense learning:
1. Impoverished-input problem in language learning
2. Connectionist models make the same speech errors as children
3. Looking at how children conjugate verbs to make them past tense
4. Nurture side of the debate (while rule-based AI is nature)
5. Three stages:
A.) Stage 1: correctly conjugates verbs
B.) Stage 2: over-regularizes and incorrectly conjugates irregular verbs according to the rules of regular verbs
C.) Stage 3: correctly conjugates early and regular verbs; irregular verbs improve with time; newly acquired verbs are over-regularized

Interactive Activation Competition (IAC)

-Pools of units for each category
-Linking pool in the middle
-Generalizes information beyond the information that was put in
-Groups things we clump together when remembering, even though they are not actually mutually consistent

Space of possible responses with regard to the CR

-Reject the intuition that the CR does not understand Chinese
-Concede that the CR does not genuinely understand Chinese, but find an alternative explanation for the lack of understanding that does not rule out strong AI
-Concede that the CR does not genuinely understand Chinese, but show how we might build, starting from the CR, a system that does understand Chinese

Steps toward Cognitive Science

-Roots in the experimental psychology of learning
-Behaviorism (rise and fall)
-Developments in the theory of computation and information
-Development of information-processing models of cognitive capacities and abilities

Strong versus weak AI

-Strong AI: an appropriately programmed computer with the right inputs & outputs would have a mind in the sense that humans have minds
1. Comes from the PSSH perspective
2. Searle critiques this: gap between PSSH and minds
3. Computers indistinguishable from human minds: true artificial intelligence
-Weak AI: AI focused on one narrow task, which therefore would not actually have a mind (useful for testing hypotheses)

Subsumption architectures and robot examples

-Subsumption architecture:
-Based on the physical grounding hypothesis
-Robots situated (embedded) in the world with sensors
-Close connection between perception and action
-Series of incremental (interacting) layers, each connecting perception (sensors) with action (motor behavior)
-Robot examples:
-Robots interact with the world through sensors in simple ways to bootstrap more complex behaviors
-Genghis, Baxter, Herbert, Tom & Jerry, Squirt

Physical Symbol Systems Hypothesis

-The Logic Theorist
-A physical symbol system has the necessary and sufficient means for intelligent action
-Necessity: anything capable of intelligent action is a physical symbol system
-Sufficiency: any physical symbol system is capable of intelligent action
-Human brains are physical symbol systems
-4 ideas:
1. Symbols are arbitrary physical patterns
2. Symbols can be combined to form complex symbol structures
3. The system contains processes for manipulating complex symbol structures
4. The processes for manipulating complex symbol structures can themselves be symbolically represented in the system (in English: you can see how the symbols were combined in the system)
-Examples: formal logic, algebra, digital computers, chess
-A reductive characterization of intelligence
-The processes must function algorithmically; they are not themselves intelligent
-"The mind is a computer program": led to a number of early AI programs

Issues of combinatorial explosion in search space

-Traveling salesman problem: a salesman wants to visit all the cities by the shortest possible route. Solving for 50 cities by brute force would take years because there are so many branches (too big a problem space, because there are too many orderings of the cities)
-Explains why search has to be selective: problem spaces are usually too large to search exhaustively
-Have to close off certain branches of the problem space to search effectively (see the sketch below)
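A minimal sketch (made-up city coordinates) contrasting brute-force search, which enumerates every tour, with a nearest-neighbor heuristic that prunes the space down to one greedy path. With 5 cities brute force checks 24 tours; with 50 it would be roughly 6 × 10^62.

```python
import math
from itertools import permutations

cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4), "E": (2, 2)}

def dist(p, q):
    return math.dist(cities[p], cities[q])

def tour_length(tour):
    return sum(dist(a, b) for a, b in zip(tour, tour[1:] + tour[:1]))

# Brute force: enumerate all (n-1)! tours starting from "A".
names = list(cities)
best = min((names[:1] + list(p) for p in permutations(names[1:])), key=tour_length)
print("brute force:", best, round(tour_length(best), 2))

# Nearest-neighbor heuristic: always go to the closest unvisited city.
tour, rest = ["A"], set(names) - {"A"}
while rest:
    nxt = min(rest, key=lambda c: dist(tour[-1], c))
    tour.append(nxt)
    rest.remove(nxt)
print("heuristic:  ", tour, round(tour_length(tour), 2))
```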

Mentalism

-William James
-Dependence on introspection and verbal reports; based on one's own mental and emotional responses
-"Psychology is the Science of Mental Life, both of its phenomena and their conditions. The phenomena are such things as we call feelings, desires, cognitions, reasonings, and the like."

Conway's Game of Life

-Another example of emergent behavior: an example of how you can get emergent behavior from a simple rule-based system
-Is a simulation game
-Displayed on a large checkerboard
-Basic idea:
1. Start with an initial configuration of live cells and observe how it changes when applying Conway's "genetic laws" for births, deaths, and survival (see the sketch below)
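A minimal sketch of Conway's rules in their standard formulation (birth on exactly 3 live neighbors, survival on 2 or 3): simple local rules, emergent global behavior.

```python
from collections import Counter

def step(live):
    """One generation of the Game of Life; live is a set of (x, y) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a row and a column of three live cells.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))        # the column {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)))  # back to the original row
```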

Basic principle of behaviorism

Rejects:
-appeal to thoughts and desires to explain behavior
-use of introspection to study conscious mental experience
-dependence on verbal reports

Characteristics of modules

-Domain-specific
-Cognitively impenetrable (illusions cannot be affected by our beliefs)
-Fast
-Informationally encapsulated (can only use information given from inputs)
-Mandatory
Possible characteristics:
-Fixed neural architecture
-Specific breakdown patterns

Turing Machine

-Formalization of an algorithm
-Used to show the decision problem was NOT solvable

Concept of Cognitive Architecture

-How our minds are structured
-Simple reflexive agent: directly links sensory and effector systems
-Goal-based agent: built up from sub-systems that perform specific information-processing tasks
-Learning agent: starts to act with basic knowledge and then is able to act and adapt automatically through learning
-Subsumption architecture
-Function relates to structure; various types of brain damage may impact function
-Transforming information: the pattern demo takes info in and produces output

mental chronometry

-Mental processes take time
-Measure performance by subtracting the reaction times of two tasks to see how long the mind takes to make a decision. Ex (Donders): choice reaction time minus simple reaction time estimates the duration of the decision stage

Jennings

-Argued we need to expand on behaviorism: we can't simply look at observations; wanted more methodological aspects incorporated as well

Cognitive Science grew out of these basic ideas:

-organisms pick up and process information about the environment (distal rather than proximal) -information often has hierarchical structure

Behaviorism

-tries to account for all behavior by looking at simple associations

Characteristics that distinguish connectionist models from symbolic AI

1. Parallel processing: the process is split up into parts that are executed simultaneously on different processors attached to the same computer (as opposed to serial processing, which is one step at a time)
2. Learning (supervised vs. unsupervised):
-Connectionist models are able to change weights and activation thresholds to simulate learning
-Changing weights reflects memory/experience and can reorganize the response to error or damage
-Unsupervised learning: Hebbian learning → neurons that fire together wire together (when neuron A triggers neuron B to fire, neuron B is more likely to fire when neuron A fires again); competitive networks → output units compete to fire the most, and the winning unit has its weights increased, which reduces redundancy because each output unit specializes in a specific type of input
-Supervised learning: delta rule (perceptron convergence) → distinct from Hebbian learning in that training depends on the discrepancy between actual output and intended output; the weights are changed based on the amount of error (see the sketch below)
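A minimal sketch (the learning rate, epoch count, and training data are my own choices) of the delta rule: each weight change is proportional to the error between the intended and actual output.

```python
def train_delta(samples, epochs=20, lr=0.2):
    """Delta-rule (perceptron) training on (inputs, target) pairs."""
    w, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            actual = int(sum(wi * xi for wi, xi in zip(w, x)) + bias >= 0)
            error = target - actual                    # discrepancy drives learning
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            bias += lr * error
    return w, bias

# Learn OR from its truth table (linearly separable, so training converges).
OR_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_delta(OR_data))
```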

Characteristics that distinguish connectionist models from symbolic AI (continued)

3. Graceful degradation and plasticity:
-Graceful degradation: the network might still retain some function after deleting some weights and units (see the sketch below)
-If code in a PSSH program breaks, the entire machine stops working (no graceful degradation)
-If some part of the brain stops working, other parts of the brain can take over for the broken part (plasticity), so the brain continues to function
-Strengths: 1. ability to learn 2. not as brittle as a computer program 3. parallel processing 4. distributed representation across multiple units
4. Distributed representations:
-Knowledge in the brain is distributed across the network. Ex: people with agnosia can mimic the motor skills used with an object but can't name the object, like mimicking turning a key in a lock
-Knowledge is stored in the weights, distributed across the network
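A minimal sketch (a toy network with made-up, redundant weights) of graceful degradation: because the association is spread across many small weights, lesioning a few of them weakens the output instead of crashing the system.

```python
import random

random.seed(0)

# Hypothetical distributed representation: 20 connections each carry a
# small share of one association (input 1.0 should produce output ~1.0).
weights = [0.05] * 20

def output(weights, x=1.0):
    return sum(w * x for w in weights)

print("intact: ", output(weights))        # 1.0

damaged = list(weights)
for i in random.sample(range(20), 5):     # lesion 25% of the connections
    damaged[i] = 0.0
print("damaged:", output(damaged))        # 0.75: degraded, not broken
```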

machine table

A representation of the set of instructions for a machine which details how it responds to different inputs

Watson

Argued the goal of behaviorism is completely shaped by prediction and control of behavior. Introspection has NO part in behaviorism.

What do you think Brooks meant by "Elephants don't play chess"?

Brooks's title, "Elephants Don't Play Chess", is intended to highlight the failures of classical (symbolic) AI and PSSH. He thinks that the focus of classical AI on problems like learning to play chess is misplaced. We have programs now that can beat the world's best chess players, yet we still do not have a good handle on basic intelligent behaviors shared across many different animal species. He believes our focus should be on modeling these more basic behaviors from a bottom-up perspective, rather than the top-down problem-solving approach of classical AI.

What is the primary question that Turing asks in his 1950 article entitled "Computing Machinery and Intelligence", published in Mind?

Can machines think?

algorithmic level

Explains how the cognitive system performs the task (the steps)
-Traditional AI systems sit at this level
-Most detail comes at this level

Brooks advocates for an approach to AI that is based on the physical symbol systems hypothesis

False

Neural network/connectionist models are a good example of what Fodor meant by a module.

False

What is the symbol grounding problem?

How do arbitrary symbols acquire meaning?

Difficulty with multiple constraints not easily formulated as rules

Ex: a robot standing at an intersection, where too many things are changing in its environment at once to capture in explicit rules. This is similar to the frame problem as well.

What is the 'imitation game' proposed by Alan Turing?

In the original version of the imitation game, an interrogator interacts via two computer keyboards with two individuals (one man and one woman) by asking each questions. The 'game' for the interrogator is to determine which interactions are with the man and which are with the woman by the responses each provides to the interrogator's questions. The Turing test is a modification of the imitation game that replaces one of the people with an AI computer program. Turing's operational definition of what it means for a machine to think is the following: if the interrogator is unable to distinguish, via the question/answer game, between the AI computer program and the person, then the answer to the question "Can machines think?" is 'yes'.

problems with introspection

It's different for everyone so it is hard to assess its validity

Does Searle think in his article, "Can Computers Think?", that digital computers can have minds?

No

Do behaviorists care about what's going on inside the head?

No, not at all: they reject mentalism
-Don't like strong AI

Donders (1868)

Reaction-time experiments. Showed there are mental processes in the brain we can look at objectively through behavior.

Implementation level

Specify the physical realization of the algorithm / how it is executed in the cognitive system; the level of complete understanding (i.e., how the system physically carries out the specific steps of the algorithm)
-Behrmann's talk sits at this level
-This approach buys into PSSH
-Encounters the frame problem
-Neurobiology only comes in at this level

computational level

Specify what the problem is (inputs/outputs)

Wilhelm Wundt

Theory of structuralism: identifying the set of basic sensations using the method of analytic introspection

A Turing machine is controlled by a machine table. The machine table specifies instructions for the scanner/printer: when in a given state, it can
-Move one cell to the left or right
-Erase what is in a cell
-Write a new symbol in a cell
-Enter a new state

True

Brooks, in his article "Elephants Don't Play Chess", proposes a physical grounding hypothesis.

True

Connectionist models are neurally-inspired models of information processing

True

Connectionist models have the ability to learn from experience

True

Fodor is the scholar who proposed the Language of Thought Hypothesis

True

Fodor proposed in 'The Modularity of Mind' that the mind contains autonomous cognitive subsystems

True

Inter-theoretic reduction is a proposed general approach to solving the integration challenge in cognitive science

True

Newell and Simon, in their 1976 article entitled "Computer Science as Empirical Inquiry: Symbols and Search", propose that PSSs are capable of intelligent action.

True

Noam Chomsky is an important figure in the history of cognitive science. (T/F)

True

Propositional attitudes are represented in the mind: belief & desire ⇒ intention to act

True

Turing proved the decision problem was NOT solvable.

True

Neural Networks/connectionist approach states that simple rules can account for a large set of nodes (input>hidden units>outputs)

True.
-Follows parallel processing rather than serial processing. Knowledge is distributed across the network and does not rely on explicit rules.
-Information processing is at the algorithmic level, so it still needs to be determined how networks are implemented in the brain.

Frame problem

We can't pre-program every possible thing that can or cannot happen
1. When I move from one location to another, what else in the world around me changes?
2. Difficulty in trying to program all possible actions and their consequences, because changes in state are constant in the real world
3. Impossible to think of and program every single possibility
4. Robot-and-bomb example: the robot couldn't accurately forecast what would happen due to all its interactions, and kept blowing up
-A problem for traditional AI-based programs

Examples of problem solving as search

-Chess
-Missionaries and cannibals problem: on one bank of a river there are three missionaries and three cannibals. There is one boat available that can hold up to 2 people to cross the river. If the cannibals ever outnumber the missionaries on either side, then the missionaries will get eaten.
-Symbolic representation of the state as mcb: m = number of missionaries on the starting bank, c = number of cannibals on the starting bank, b = number of boats on the starting bank (see the sketch below)
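A minimal sketch (my own encoding, following the mcb representation above) that searches the missionaries-and-cannibals problem space with breadth-first search over legal crossings.

```python
from collections import deque
from itertools import product

def safe(m, c):
    """A bank is safe if missionaries are absent or not outnumbered."""
    return m == 0 or m >= c

def successors(state):
    """Legal boat crossings from state (m, c, b); counts are for the starting bank."""
    m, c, b = state
    for dm, dc in product(range(3), repeat=2):
        if 1 <= dm + dc <= 2:                       # the boat holds one or two people
            m2, c2 = (m - dm, c - dc) if b else (m + dm, c + dc)
            if 0 <= m2 <= 3 and 0 <= c2 <= 3 and safe(m2, c2) and safe(3 - m2, 3 - c2):
                yield (m2, c2, 1 - b)

start, goal = (3, 3, 1), (0, 0, 0)                  # everyone crosses to the far bank
frontier, seen = deque([(start, [start])]), {start}
while frontier:
    state, path = frontier.popleft()
    if state == goal:
        print(path)                                 # sequence of (m, c, b) states
        break
    for nxt in successors(state):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append((nxt, path + [nxt]))
```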

Introspection

examination of one's own thoughts and feelings

language

idea of hierarchical, rather than serial organization (special case of Lashley's analysis of behavior)

latent learning

idea of information pick up and storage

place learning

idea of information specifically about the environment (rather than the organisms own movements)

Connectionist artificial neural networks

-Reactions to shortcomings of traditional AI approaches
-Concerns about rule-based and serial models of simple cognitive abilities
-Worries about the biological plausibility of PSSH
-Gap in neuroscientists' toolkit → a missing level of analysis

What was the systems reply to the Chinese room?

The CR argument misses the point; the real question is whether the system as a whole understands Chinese, not whether the person in the room understands it.

What was the robot reply to the Chinese room?

The CR does not understand Chinese, but this is not because of any uncrossable gap between syntax (grammar) and semantics (meaning). Rather, it is because the Chinese room has no opportunity to interact with the environment and other people.

Formal properties (syntax)

the physical properties that can be manipulated within brains

Decision Problem

the problem of finding a way to decide whether a formula or class of formulas is true or provable within a given system of axioms.

Meaningful properties (semantics)

the properties that representations are intended to represent

Propositional logic

The proposition of a sentence is the fact being stated in the sentence (e.g., "The cat is black"); therefore, propositional attitudes are how we feel towards propositions. These then create beliefs and desires that influence behavior.

Goal of behaviorism:

To explain all behavior in terms of conditioned responses
-Language as a set of conditioned responses: words are produced in response to a particular object or situation
-Analyzing a sentence as a chain of elements, each serving as a CS for the succeeding element

belief-desire psychology

we can often successfully predict behavior by considering what people believe about the world and considering their desires

