Philosophy of Mind Exam 2
Systematicity
feature of human cognition whereby thoughts bear certain systematic relations to one another
Productivity
feature of human cognition whereby we are able to think an indefinitely large number of thoughts
Learning and Synaptic Plasticity
the brain learns by altering the "strength" of connections at synapses (how much the firing of the presynaptic neuron influences the firing of the postsynaptic neuron); the stronger the synapse, the greater the influence
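A minimal Python sketch (an illustration, not from the course material) of the weight idea: a synaptic "strength" scales how much presynaptic firing drives the postsynaptic neuron. All numbers are made up for the example.

```python
# Illustrative sketch: a synaptic weight scales how strongly presynaptic
# firing drives the postsynaptic neuron (all values are hypothetical).

def postsynaptic_drive(presynaptic_rates, weights):
    """Sum each presynaptic firing rate scaled by its synaptic weight."""
    return sum(rate * w for rate, w in zip(presynaptic_rates, weights))

rates = [1.0, 0.5, 0.0]        # presynaptic firing rates (arbitrary units)
weak = [0.1, 0.1, 0.1]         # weak synapses
strong = [0.9, 0.9, 0.9]       # strengthened synapses after learning

print(postsynaptic_drive(rates, weak))    # about 0.15 -> little influence
print(postsynaptic_drive(rates, strong))  # about 1.35 -> much greater influence
```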
Functionalism
the mind is as the mind does; if we can build machines out of some material that do the same things that minds can do, then we can build minds and consciousness
a priori identity statement
"The murderer of Jones is the murderer of Jones"; this is an obvious truth
a posteriori identity statement
"The murderer of Jones is the owner of the grocery store"; have to conduct an investigation to get this information
The Zombie Argument Against the Mind-Brain Identity Theory
1. If identity theory is true, then, since having a quale would be having a certain pattern of neural activation, it would be impossible to have that neural activation without having any qualia. 2. If something is conceivable, it is possible. 3. Zombies are conceivable. Further, the sort of zombie that is conceivable and relevant to identity theory is a zombie just like you with regard to its brain and neural properties. 4. Zombies are possible, that is, something could have all the same neural properties as you but lack qualia. So, it is possible to have any given neural state without qualia, and identity theory is false.
Searle's Chinese Room Argument Against Strong AI
1. If strong AI is true, then any system or entity that runs the so-called Chinese-understanding program thereby understands Chinese. 2. Searle can run the so-called Chinese-understanding program without understanding Chinese. 3. There is at least one system or entity that runs the Chinese-understanding program without thereby understanding Chinese (this follows from 2). So, Strong AI is false (this follows from 1 and 3).
Two Main Features of Good Old Fashioned Artificial Intelligence (GOFAI)
1. Internal representations are "linguaform", i.e., they have a linguistic representational format/structure. 2. Intelligence, including not only thinking but also action and perception, depends on the performance of inference-like operations on internal representations.
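A toy Python sketch, offered as an illustration rather than anything from the readings, of these two features together: sentence-like (tuple) representations plus a rule that performs an inference-like operation over them. The predicates and the rule are hypothetical.

```python
# Hypothetical GOFAI-style sketch: "linguaform" representations (tuples that
# mimic simple sentences) and an inference-like rule applied to them.

knowledge = {("bird", "tweety"), ("bird", "polly")}

def apply_rule(facts):
    """Toy rule: if x is a bird, infer that x can fly."""
    inferred = set(facts)
    for predicate, subject in facts:
        if predicate == "bird":
            inferred.add(("can_fly", subject))
    return inferred

print(sorted(apply_rule(knowledge)))
# [('bird', 'polly'), ('bird', 'tweety'), ('can_fly', 'polly'), ('can_fly', 'tweety')]
```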
Five Important Points about the Turing Machine
1. The machine table for any given Turing machine A can be coded on the tape of another Turing machine B. 2. Turing proved that there is a Turing machine, now called the Universal Turing Machine, that can mimic any other Turing machine. 3. The importance of all this is that Turing mechanized the notion of computation; prior to Turing, the notion of an algorithm lacked precise mathematical meaning. 4. Turing Machines are multiply realizable: it doesn't matter how the instructions in a machine table are followed to map inputs onto outputs. 5. This had profound implications for how philosophers and psychologists thought about human thinking and reasoning. The Turing machine became a new model of the nature of thought.
Good Old Fashioned Artificial Intelligence (GOFAI)
GOFAI seeks to model the mind as though it ran a computer program: a sequence of symbols and rules for manipulating them; perception, thought, and action depend on processing sentence-like representations in accordance with a finite set of rules
Computational Neuroscience
the use of computer models to study and simulate the functions of the brain
LOT (Language of Thought)
a finite set of mental symbols (like words) and rules for combining them (like grammar/syntax); because thoughts are built from recombinable symbols, thought is systematic: you can't think one thought without also being able to think systematically related thoughts
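A small illustrative sketch, assuming made-up names and a made-up relation, of why a symbol-plus-rule system is systematic: any symbol that can fill the subject slot can also fill the object slot, so the capacity to think "John loves Mary" brings with it the capacity to think "Mary loves John".

```python
# Hypothetical language-of-thought sketch: a finite vocabulary plus one
# combination rule generates systematically related thoughts.

names = ["John", "Mary"]
relations = ["loves"]

def thoughts(names, relations):
    """Every well-formed subject-relation-object combination."""
    return [(a, r, b) for r in relations for a in names for b in names if a != b]

print(thoughts(names, relations))
# [('John', 'loves', 'Mary'), ('Mary', 'loves', 'John')]
```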
Turing Test
a human investigator conducts many conversations via text interface with many participants, one of which is a machine and the rest are humans; if the investigator cannot tell who is human and who is not, the machine passes the test
Connectionism
artificial neural networks (ANNs) with large numbers of simplified neurons connected to each other via modifiable connection weights (like synapses)
Weak Artificial Intelligence (Weak AI)
artificially intelligent computers will never be anything more than simulations of intelligence
Turing Machine
at each stage of a computation, the machine scans one square on the tape and then performs an operation depending on the symbol written on the square and the internal state the machine is in at the time it scans the square.
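A minimal Python sketch of that scan-and-act cycle, using an assumed toy machine table (not one from the course): at each step the machine looks up its current state and the scanned symbol, then writes, moves, or halts as instructed. The example table just overwrites 1s with 0s until it reads a blank.

```python
# Minimal Turing-machine step loop (illustrative). The machine table maps
# (state, scanned symbol) -> (symbol to write, head move, next state).

table = {
    ("q0", "1"): ("0", 1, "q0"),    # write 0, move right, stay in q0
    ("q0", "_"): ("_", 0, "halt"),  # on a blank square, halt
}

def run(tape, state="q0", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head]                          # scan one square
        write, move, state = table[(state, symbol)]
        tape[head] = write                           # write over the square
        head += move                                 # -1 left, +1 right, 0 stay
    return "".join(tape)

print(run("111_"))  # -> "000_"
```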
Symbolicism
attempts to understand minds by thinking of them as computers
Advantages of the Mind-Brain Identity Theory
avoids dualist problems with mind-body interaction, epiphenomenalism, and inconsistency with science
Turing Machine
used by Turing to prove that there is no general method for deciding, for any given mathematical statement, whether it is provable; anything that can be computed can be computed by a universal one of these
Brain States
current patterns of neural activation in the brain (assemblies of neurons that are firing in certain ways)
Mind-Brain Identity Theory
denies substance dualism (because it denies that the mind is a nonphysical substance) and property dualism (because it denies that mental properties are nonphysical properties)
Localism
different cognitive functions are located in specific parts of the brain
Mind-Brain Identity Theory
differs from behaviorism: behaviorism defines mental states in outward, behavioral terms, while this defines them in inner terms
Functionalism
every token conscious mental state M is identical to some token brain state B, but what makes being in brain state B count as being in mental state M is the functional role that brain state B plays in information processing in the brain
The Silicon Chip Replacement Thought Experiment
gradually having each neuron in the brain replaced by a silicon microchip that performs the same functions as the neuron it replaces; the chips receive signals from and send signals to neighboring units
Strong Artificial Intelligence (Strong AI)
holds that a suitably complex computer program will really be intelligent, not just a simulation of intelligence
"The Robot Reply" to Searle's Chinese Room Argument
if the Chinese room were made to function as a "brain" in the body of a giant robot, then the running of the program would give rise to genuine understanding; in the disembodied version of the Chinese room, the symbols that are manipulated do not have any genuine meaning
Functionalism
implies multiple realizability: a given type of mental state can be realized by different types of physical structures so long as they play the right functional role; it is the guiding assumption of traditional research in artificial intelligence
Token
an individual instance of a particular category, thing, or kind; two tokens are the same token only if they are numerically identical
a posteriori
knowledge one can obtain only by having a sensory experience
a priori
knowledge one can obtain prior to having a sensory experience
Artificial Neural Networks
learn via modification of weights to bring about correct input-output mappings; behave in "brain-like" ways; can generalize to inputs they've never seen after training
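A small sketch, assumed for illustration, of learning as weight modification: a single artificial unit repeatedly nudges its connection weights toward the correct input-output mapping (here, the logical AND of two inputs).

```python
# Illustrative single-unit "network" trained by adjusting connection weights
# until the inputs map onto the desired outputs (learning logical AND).

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def output(x):
    """Fire (1) if the weighted sum of the inputs exceeds the threshold."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for _ in range(20):                        # repeated passes over the examples
    for x, target in data:
        error = target - output(x)         # compare output with correct answer
        weights[0] += rate * error * x[0]  # strengthen or weaken each connection
        weights[1] += rate * error * x[1]
        bias += rate * error

print([output(x) for x, _ in data])        # -> [0, 0, 0, 1]
```

The learned behavior lives entirely in the weight values, which is the connectionist picture of what it is for a network to have learned something.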
Functionalism
mental states are constituted solely by their working role; they are causal relations to other mental states, sensory inputs and behavioral outputs
The Multiple Realizability Argument Against the Mind-Brain Identity Theory
minds and mental states are more like drinking vessels than water: water is just one physical kind, but a drinking vessel can be made of many different materials; likewise, because a given mental state can be realized by many different physical structures, it cannot be identical to any single type of brain state
Hebb's Rule
neurons that fire together wire together
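One common way to write the rule, offered as an illustrative sketch with an assumed learning-rate constant: a connection weight grows in proportion to how often the pre- and postsynaptic units are active together.

```python
# Hebbian update sketch: delta_w = rate * pre * post, so the connection
# strengthens only when both units are active at once (rate is assumed).

def hebb_update(weight, pre, post, rate=0.1):
    """Return the weight after one Hebbian update."""
    return weight + rate * pre * post

w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:  # a short co-activity history
    w = hebb_update(w, pre, post)

print(w)  # -> 0.2: only the two co-active episodes strengthened the synapse
```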
Turing Machine
operations are limited: moving one square to the left; moving one square to the right; erasing the scanned symbol; writing over the scanned symbol with a different symbol; and halting.
Turing Machine
refers to a theoretical machine that operates by reading and writing symbols on a long tape divided into squares
Brain States
relatively long-lasting sets of weights on synaptic connections
Functionalism
strongly associated with Strong AI
"The Systems Reply" to Searle's Chinese Room Argument
the reason premise 2 is false is that Searle is not actually running the program all by himself; he is a mere part of a larger system that also includes the cards and rulebook and maybe the rest of the room; it is the larger system that is running the program; it is irrelevant that Searle does not understand Chinese, since the whole system of which he is a part does
Max Black's "Distinct Property" Argument Against the Mind-Brain Identity Theory
the relevant statements of mind-brain identity are a posteriori, but for an identity statement to be a posteriori, the two different referring expressions in the statement must be associated with distinct properties of the referent, which leads to property dualism, inconsistent with identity theory
Mind-Brain Identity Theory
the view that the mind is the brain and that mental states are brain states; the mind and the brain are identical
Holism
the whole brain plays some role in every cognitive function
Mind-Brain Identity Theory
this is able to draw a strict division between what has a mind and what does not: physical systems that lack brains thereby lack minds
Type
a category or kind to which tokens can belong; a single token can belong to many different types, and tokens of the same type are qualitatively identical
Searle's Chinese Room Argument
understanding the syntax (how to manipulate symbols according to the rules) but not the semantics (the meanings) of the symbols; computers are in the same boat: they do what they do, but they show no awareness of what they are doing