Philosophy of Mind
Save FP! - functionalist arguments against elimination
1) FP is not an empirical theory (it is not refutable by the facts). It is a normative theory: it does not describe how people actually act, but characterises an ideal - how they ought to act if they were to act rationally on the basis of their beliefs and desires. Hence, FP could not be replaced by a description of what is going on at the neuronal level. What do you think?
Churchland's rebuttal: FP explanations depend on logical relations among beliefs and desires, just as physical explanations depend on mathematical relations; that does not make FP a normative theory. The relations are objective; we add value to them. Besides, people are not ideally rational.
2) FP is an abstract theory. FP characterises internal states such as beliefs and desires in terms of a network of relationships to sensory inputs, behavioural outputs, and other mental states. This abstract network of relations could be realised in a variety of different kinds of physical systems. Hence, we cannot eliminate this functional characterisation in favour of some physical one.
Churchland's rebuttal: this shifts the burden of proof - from whether FP is a good theory to whether certain physical systems support FP - and so removes FP from empirical criticism. To defend eliminativism, attack functionalism.
Churchland's attack: functionalism is like alchemy. Alchemy explained the properties of matter in terms of four different sorts of spirits; e.g., the spirit of mercury explained the shininess of metals. The theory was eliminated by the theory of atoms and elements; reduction was not good enough, because the old and the new theories classified things differently.
Functionalist rebuttal: the alchemist could redefine the spirits as functional states; e.g., being ensouled by mercury just is the disposition to reflect light.
Churchland: if you can make that move, you can make any move that will be an outrage against truth and reason. The functionalist stratagem can be used as a smokescreen for error.
More worries about eliminativism:
1. Maybe FP is not a theory at all? Alternative: a social practice.
2. Could we talk in a 'neural' language?
3. Is eliminativism self-contradictory?
Reply to 3: the objection begs the question. It assumes that for an utterance to have any meaning, it must express a belief; it is this theory of meaning that should be rejected. Analogy: it would be like a 17th-century vitalist arguing that if someone denies they have a vital spirit, they must be dead, and hence not saying anything (Pat Churchland).
Like Turing machines
Analogy with Turing machines: any creature with a mind can be regarded as a Turing machine, whose operation can be fully specified by a set of instructions (a "machine table" or program) operating on abstract symbols. A Turing machine is an idealised computing device consisting of a read/write head (or 'scanner') with a paper tape passing through it. The tape is divided into squares, each square bearing a single symbol - '0' or '1', for example. To machine functionalists, the proper model for the mind and its operations is a probabilistic automaton: the program specifies, for each state and set of inputs, the probability with which the machine will enter some subsequent state and produce some particular output. = Not just description, but prediction.
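A minimal sketch of what such a probabilistic machine table might look like; the states ('S0', 'S1'), inputs, and probabilities here are invented purely for illustration:

```python
import random

# Hypothetical probabilistic machine table: for each (state, input) pair,
# a list of (next_state, output, probability) triples whose probabilities sum to 1.
MACHINE_TABLE = {
    ("S0", "0"): [("S0", "0", 0.8), ("S1", "1", 0.2)],
    ("S0", "1"): [("S1", "1", 1.0)],
    ("S1", "0"): [("S0", "0", 0.5), ("S1", "1", 0.5)],
    ("S1", "1"): [("S0", "0", 1.0)],
}

def step(state, symbol):
    """Pick the next state and output with the probabilities the table specifies."""
    transitions = MACHINE_TABLE[(state, symbol)]
    r = random.random()
    cumulative = 0.0
    for next_state, output, p in transitions:
        cumulative += p
        if r < cumulative:
            return next_state, output
    return transitions[-1][0], transitions[-1][1]  # guard against rounding error

state = "S0"
for symbol in "0110":
    state, out = step(state, symbol)
    print(f"input={symbol} -> state={state}, output={out}")
```

The point of the analogy: the table both describes the system and, because each transition carries a probability, yields predictions about what it will do next.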
Eliminative materialism / Eliminativism - definition
"...is the thesis that our commonsense conception of psychological phenomena constitutes a radically false theory, a theory so fundamentally defective that both the principles and the ontology of that theory will eventually be displaced, rather than smoothly reduced, by completed neuroscience" (Churchland, p. 67). "... is the radical claim that our ordinary, common-sense understanding of the mind is deeply wrong and that some or all of the mental states posited by common-sense do not actually exist." (SEP)
Functionalism - definition
The mind is a system of mental states. The essence of the mental is not the kind of stuff it is made of:
- Consciousness (Cartesian Dualism)
- Behaviour and dispositions (Rylean Behaviourism)
- Neural activity (Armstrong's Identity Theory)
but the functional role it plays in the cognitive system of an individual.
"Functionalism (...) recognizes the possibility that systems as diverse as human beings, calculating machines and disembodied spirits could all have mental states. In the functionalist view the psychology of a system depends not on the stuff it is made of (living cells, mental or spiritual energy) but on how the stuff is put together" (Fodor 1981, p. 114).
Role-realizer distinction. Functionalism: mental states are functional states that play certain causal roles and are capable of multiple realization in a variety of different media.
- Role: a function something plays
- Realizer: that which fulfils the function, or brings it into reality
Example: clocks. A clock, functionally defined, is "something that tells time".
Multiple realizability: grandfather clocks, analogue watches, digital watches, sundials - and different materials: metal, plastic, wood. (See the sketch below.)
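A minimal sketch (my own illustration, not from the source) of the role-realizer distinction in code: the role is an abstract interface ("tells time"), and each class is a different realizer of it. All class names here are invented:

```python
from abc import ABC, abstractmethod

class Clock(ABC):
    """The role: anything that tells time counts as a clock."""
    @abstractmethod
    def tell_time(self) -> str: ...

class GrandfatherClock(Clock):
    """One realizer: a pendulum mechanism in a wooden case."""
    def tell_time(self) -> str:
        return "pendulum position read off a brass dial"

class DigitalWatch(Clock):
    """Another realizer: a quartz oscillator driving an LCD."""
    def tell_time(self) -> str:
        return "quartz ticks counted and shown on an LCD"

class Sundial(Clock):
    """A third realizer: no moving parts at all."""
    def tell_time(self) -> str:
        return "shadow cast by the gnomon onto hour lines"

# Multiple realizability: the same role, utterly different physical media.
for clock in (GrandfatherClock(), DigitalWatch(), Sundial()):
    print(type(clock).__name__, "->", clock.tell_time())
```

Nothing in the Clock role fixes what material or mechanism does the work - which is exactly the functionalist's point about mental states.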
The Hard Problem (phenomenal consciousness)
The problem of experience. "Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does." (Chalmers 2006, Facing Up to the Problem of Consciousness)
In short: How are organisms subjects of experience? Why do we experience sensations as we do? Why and how does physical processing give rise to our rich inner life?
"The problem of consciousness, simply put, is that we cannot understand how a brain, qua gray, granular lump of biological matter, could be the seat of human consciousness, the source or ground of our rich and varied phenomenological lives. How could that 'lump' be conscious - or, conversely, how could I, as conscious being, be that lump?" (Akins 1993)
Artificial Intelligence (AI)
The project of getting computing machines to perform tasks that would be taken to demand human intelligence and judgement. Can computers think?
1: Which intelligent tasks can a computer perform?
2: Given 1, does it perform them the way humans do?
3: Given 1 and 2: does that show that it has a psychology and mental states?
What is a Philosophical Zombie?
A 'zombie' is a creature that is physically and behaviourally indistinguishable from us, but that has no conscious experiences.
- 'Computationally' identical to people: acts like people, talks like people
- A-conscious but not P-conscious: dead inside, no experience
- There is nothing it is like to be a zombie
(Note: many people would say that zombies have no consciousness at all.)
Physicalism is the theory that mental states and processes (logically) supervene on physical states and processes; i.e., any physical duplicate of me would also be a psychological duplicate.
The Zombie Argument against Physicalism:
1. If physicalism is true, zombies are not conceivable.
2. But zombies are conceivable.
3. So, physicalism is false (a conclusion compatible with epiphenomenalism).
The Zombie Argument against Functionalism: mental properties cannot be logically derived from the properties of physics and biology.
"Since my twin is an exact physical duplicate of me, his inner psychological states will be functionally isomorphic with my own (assuming he is located in an identical environment). Whatever physical stimulus is applied, he will process the stimulus in the same way as I do, and produce exactly the same behavioral responses."
"On the assumption that non-phenomenal psychological states are functional states (definable in terms of their role or function in mediating between stimuli and behavior), my zombie twin has just the same beliefs, thoughts, and desires as I do. He differs from me only with respect to experience. For him, there is nothing it is like to stare at the waves or to sip wine."
Are zombies conceivable? The modal argument (modal logic: Saul Kripke). The zombie argument depends only on what is logically possible, not on what is actually the case or what is necessarily the case. The modal argument: "No amount of physical information about another logically entails that he or she is conscious or feels anything at all." Modal intuition: zombies are imaginable.
The Zombie Argument, in conclusion:
1. Zombies are physically and functionally identical to us, but have no conscious experience or qualia.
2. If zombies are conceivable, then it is possible to imagine a world in which all the physical and functional facts are the same, but there is no conscious experience.
3. Then conscious experience is independent of the physical and functional facts, and cannot be explained in terms of them.
4. So, experiential phenomena (e.g., qualia) are over and above physical phenomena.
Are zombies really conceivable? If I cannot rely on your physical make-up or behaviour, how can I tell that you are not a zombie?
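The modal shape of the argument can be written compactly (a standard reconstruction, not taken from these notes; here P abbreviates the conjunction of all physical truths and Q a phenomenal truth such as 'someone is conscious'):

```latex
\begin{align*}
&1.\ \Diamond(P \land \lnot Q) && \text{(zombies are conceivable, hence possible)}\\
&2.\ \text{Physicalism} \rightarrow \Box(P \rightarrow Q) && \text{(any physical duplicate is a psychological duplicate)}\\
&3.\ \Box(P \rightarrow Q) \rightarrow \lnot\Diamond(P \land \lnot Q) && \text{(duality of $\Box$ and $\Diamond$)}\\
&\therefore\ \lnot\,\text{Physicalism} && \text{(from 1--3)}
\end{align*}
```

Step 3 is just modal logic; the contested step is 1's move from conceivability to genuine possibility - exactly where the "Are zombies really conceivable?" worry bites.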
Mary is seeing red
Mary lives in a black-and-white room. She is a scientist who knows everything there is to know about the science of colour (all the physics, chemistry, neurophysiology, causal and relational facts), but she has never experienced colour. Jackson asks: once she experiences colour, does she learn anything new?
The argument:
1. If physicalism is true, then to have complete physical knowledge is to have complete knowledge.
2. Mary has complete physical knowledge of colour; yet on her release she does learn something new (KA: Mary does learn something new!).
3. Conclusion: there was some knowledge about colour Mary did not have prior to her release. Therefore, not all knowledge is physical knowledge, so physicalism is false.
(If physicalism were true, the information about experience would already be in our possession. No imagining would be needed.)
Implication: if Mary does learn something new, this shows that qualia exist - there is a property of the experience that was left out. The world is not just made of physical things. Back to dualism?
Is eliminativism a form of reductionism?
No! FP won't be reduced to neuroscience, because it is wrong and will be replaced by neuroscience.
- Reductionism: all mental states reduce to physical/neurological phenomena.
- Eliminativism: there are no mental states (they do not exist); there are just neural states.
Compare:
- Identity theory: there will be a smooth reduction that preserves FP.
- Dualism: FP is irreducible, since it deals with a non-physical domain.
- Functionalism: FP is irreducible, since psychology deals with an abstract set of relations among functional states that can be realized in different ways.
The Computational Theory of Mind (CTM)
"The prevailing view in philosophy, psychology and artificial intelligence is one which emphasizes the analogies between the functioning of the human brain and the functioning of digital computers. According to the most extreme version of this view, the brain is just a digital computer and the mind is just a computer program" (Searle 1984, p. 28). Computationalism is a specific form of cognitivism that argues that mental activity is computational, that is, that the mind operates by performing purely formal operations on symbols, like a Turing machine. Compares mind to how computers function: storing and manipulating symbols. Mental states are defined by their causal roles; the causal roles are implemented by computational processes. Two key parts to CTM Representation: postulates inner semantic symbols which are mental representations with contents (e.g., LOT) Computation: Syntactic properties. Equates thinking with formal manipulation of inner symbols.
Arguments against Searle's Chinese room:
1. The Gestalt argument (the systems reply): the whole is more than the sum of its parts; the total system understands Chinese.
Searle's reply: if I (the central processing unit) cannot know what the symbols mean, then the whole system cannot either (p. 34).
2. The interaction argument (the robot reply): if the robot moved around and interacted with the world, it would start to understand Chinese.
Searle's reply: if computation is all there is to the brain, the same problem arises - syntax alone yields no semantics. "As long as all I have is the symbol with no knowledge of its causes or how it got there, I have no way of knowing what it means. (...) The causal interactions (...) are irrelevant unless [they] are represented in some mind or other" (p. 35).
The Knowledge Argument
An argument against physicalism altogether. Jackson's position: "I think that there are certain features of the bodily sensations especially, but also of certain perceptual experiences, which no amount of physical information includes. Tell me everything physical there is to tell about what is going on in a living brain, (...) and be I as clever as can be in fitting it all together, you won't have told me about the hurtfulness of pains, the itchiness of itches, pangs of jealousy, or about the characteristic experience of tasting a lemon, smelling a rose, hearing a loud noise or seeing the sky" (p. 127).
Fred has better colour vision: he can see a different shade of red. Not all ripe tomatoes look the same to him, though they look the same to us. He sees two colours, red 1 and red 2, as different from each other as yellow and blue. Jackson asks: what kind of experience does Fred have? Physical information will not tell us:
- His cones respond differently to certain light waves
- He has a wider range of brain waves responsible for discriminatory behaviour
Knowing all this is not knowing everything about Fred.
Thomas Nagel: The "What is it like?" Argument
An argument for the insufficiency of reductionism: against the limitations of our current concepts and theories for understanding human consciousness. Reductionism = trying to reduce A to B (reduction of the mental to the physical = physicalism/materialism).
"Any reductionist program has to be based on an analysis of what is to be reduced. If the analysis leaves something out, the problem will be falsely posed - it is useless to base the defense of materialism on any analysis of mental phenomena that fails to deal explicitly with their subjective character. For there is no reason to suppose that a reduction which seems plausible when no attempt is made to account for consciousness can be extended to include consciousness" (p. 323).
Nagel: "fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism - something it is like for the organism" (p. 323). This is the subjective character of experience.
Example: the bat. Bats perceive through sonar (echolocation). Sonar is a different 'sense' medium; there is no reason to think that it is subjectively like anything we can experience or imagine (p. 324).
Starting point: realism about experiences - there are experiential/phenomenological facts. Phenomenological facts are both objective (what the quality of the experience is) and subjective (what the quality of the experience is like from the point of view of the experiencing subject). The subjectivity of these facts is a crucial aspect, or the real nature, of the experience. As objective facts, they could be accessed by others; as subjective facts, they can only be accessed by someone 'like me' - sufficiently similar to adopt that person's point of view (p. 325).
"If the subjective character of experience is fully comprehensible only from one point of view, then any shift to greater objectivity - that is, less attachment to a specific viewpoint - does not take us nearer to the real nature of the phenomenon, it takes us farther away from it" (p. 327).
Scientific language - third-person, objective, bird's-eye view - will then take us farther away from the experience. Therefore, reductionism fails.
Conclusion: the inadequacy of physicalist hypotheses. Does that mean physicalism is false? No: it follows that physicalism is a position we cannot understand (p. 328).
Identity Theory
Armstrong's definition of the mental: "a state of the person apt for producing certain ranges of behaviour" (1981) = the mental as the cause of behaviour.
How does it fit with Physicalism (= all mental states are states of the body)?
1. Science tells us that what causes behaviour are physical states of the body: states of the central nervous system.
2. What causes behaviour is the mental.
3. So the CNS is identical to the mental.
The Easy Problem (psychological consciousness)
Directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms. The easy problems of consciousness include explaining the following phenomena:
- the ability to discriminate, categorize, and react to environmental stimuli;
- the integration of information by a cognitive system;
- the reportability of mental states;
- the ability of a system to access its own internal states;
- the focus of attention;
- the deliberate control of behaviour;
- the difference between wakefulness and sleep.
There is no real issue about whether these phenomena can be explained scientifically.
Folk Psychology
FP = 'naïve psychology' = commonsense psychology = theory of mind.
- Weak reading: refers to a particular set of cognitive capacities that allow for the prediction and explanation of behaviour (having beliefs and desires).
- Strong reading: a theory of behaviour represented in the brain; an empirical theory (Fodor).
Why FP should be regarded as a theory: Churchland argues that if we regard folk psychology as a theory, we can unify the most important problems in the philosophy of mind, including:
1. the explanation and prediction of behaviour
2. the meanings of our terms for mental states
3. the problem of other minds
4. introspection
5. the intentionality of mental states
6. the mind-body problem
1. FP explains and predicts behaviour. All of us can explain and even predict the behaviour of other people, and even animals, rather easily and successfully. These explanations and predictions attribute beliefs and desires to others, and they presuppose laws. Churchland believes that rough-and-ready common-sense laws can be reconstructed from everyday folk-psychological explanations: "Each of us understands others, as well as we do, because we share a tacit command of an integrated body of lore concerning the law-like relations holding among external circumstances, internal states, and overt behaviour" (p. 69).
2. FP deals with the problem of the meanings of our terms for mental states: if folk psychology is a theory, the semantics of our terms is understood in the same way as the semantics of any other theoretical terms. The meaning of a theoretical term derives from the network of laws in which it figures.
3. FP as a theory deals with the problem of other minds. We don't infer that others have minds from their behaviour (if shouting, then pain; or: if he broke his leg and shouted before, then pain) - it's risky to generalise from our own case (target: Simulation Theory). Rather, the belief that others have minds is an explanatory hypothesis that belongs to folk psychology. FP provides explanations and predictions (through a set of laws).
4. Introspective judgement is just a special case of applying the theory.
5. FP deals with intentionality: the intentionality of mental states is not a mysterious feature of nature but rather a "structural feature" of the concepts of folk psychology. These structural features reveal how much FP is like theories in the physical sciences. E.g., propositional attitudes (having a belief that p, or a desire that p) are like "numerical attitudes" (having a mass n, a velocity n, etc.). In both cases, the logical relations that hold among the "attitudes" are the same as those that hold among their contents, and in both cases we can form laws by quantifying "for all n" or "for all p". The only difference between FP and theories in the physical sciences lies in the sort of abstract entities it uses: propositions instead of numbers.
6. FP sheds light on the mind-body problem.
Why is FP wrong?
FP is an empirical theory (it can be true or false), and it happens to be false. Its ontology (beliefs, desires) is an illusion.
FP as a theory cannot explain many things, such as:
- mental illness
- creative imagination
- differences in intelligence
- the function of sleep
- how we track moving objects
- 3-D vision from the 2-D retinal array
- perceptual illusions
- memory and retrieval
- learning, especially in pre- or non-linguistic organisms such as babies and animals
FP is a "degenerating research program": it does not fit well with the rest of the sciences - evolutionary theory, biology, and neuroscience, which are part of a growing system of knowledge that includes chemistry and physics.
FP is not applicable to any sort of cognition other than that of adult, language-using human beings (it excludes children and animals); it is tightly linked to our ability to use language.
Dennett: Characteristics of Qualia
Four characteristics of Qualia:
- Ineffable: we can't describe them.
- Intrinsic: they don't depend on anything else. (An intrinsic property is a property that an object or a thing has of itself, independently of other things, including its context; it pertains to its essence.)
- Private: known only from the first-person point of view.
- Immediately apprehensible: known without judgement or reflection.
View from Phenomenology
Four suppositions challenged by the Phenomenological Account:
(1) Hidden minds: the problem of social cognition is due to the lack of access that we have to the other person's mental states. Since we cannot directly perceive the other's beliefs, desires, feelings, or intentions, we need some extra-perceptual cognitive process (mindreading or mentalizing by way of theoretical inferences or simulation routines) that will allow us to infer what they are.
(2) Mindreading: these mentalizing processes constitute our primary, pervasive, or default way of understanding others.
(3) Observational stance: our normal everyday stance toward the other person is a third-person, observational stance. We observe their behaviours in order to explain and predict their actions, or to theorize about or simulate their mental states.
(4) Methodological individualism: understanding others depends primarily on cognitive capabilities or mechanisms located in an individual subject, or on processes that take place inside an individual brain.
Limits of CTM
How many aspects of mind can it account for?
1. Reasoning: keeping a 'rational relation' in sync with the causal relation. But when does a mental process count as reasoning? Three types of theoretical reasoning:
A) Deductive: the conclusion is logically entailed by the premises (if P, then Q). If ravens are black, and Arch is a raven, Arch is black.
B) Inductive: generalizing from an observed representative sample to an unobserved whole. We infer that all ravens are black from seeing many black ravens.
C) Abductive: inference to the best explanation. The best explanation for certain cosmological facts about the motions of stars: dark matter. The best explanation for why the diamonds were found in John's safe with his fingerprints on them: he stole them.
There is no computational process that can implement inductive and abductive reasoning.
2. Emotions: they greatly 'impair' rational thinking (Crawford 2010, p. 103). Either emotions are computational processes that CTM has left out, or they resist being captured by computation, and we need another explanation.
3. Imagination: can a computer be creative (by manipulating 0s and 1s)? Can creativity be understood computationally? Alternative model: connectionism (modeled on interconnected neural networks)?
4. Mental representations (part of CTM): how do they get their meanings? According to CTM, to believe that x ('Turing cracked the Enigma code') is to have a mental symbol in your head that means, or has the content, that Turing cracked the Enigma code. But where does it get this meaning from? And how can the thought be directed at things that do not exist? = The problem of intentionality (the power of minds to be about, to represent, or to stand for, things, properties and states of affairs): what determines what we do is what our mental states are about, but aboutness is not a category of natural science.
Phenomenal vs. Access Consciousness (Ned Block)
Phenomenal (P-) Consciousness: cannot be defined, only pointed to. Qualia, raw feels, 'what it is like'; whatever is experienced - e.g., sensations, feelings, perceptions, thoughts, wants, emotions.
Access (A-) Consciousness: all items of access consciousness are representational. A state is A-conscious when its content is:
- informationally promiscuous (available to other parts of the brain for use in reasoning),
- poised for rational control of action,
- reportable;
e.g., perception, sensation, etc., as information that can be used in modifying behaviour.
P-consciousness contains qualia/experience; A-consciousness contains information that can be used for control.
Can A-consciousness and P-consciousness come apart? Yes, in thought experiments (philosophical "zombies" - see below), and yes, in real cases.
A-consciousness without P-consciousness. Blindsight: patients claim to be blind - they perceive no visual images - yet they respond successfully (unimpaired functioning). If you are A-conscious but not P-conscious, you can use information for rational thought, but you don't experience this information. Does this differ from unconscious information processing?
P-consciousness without A-consciousness. Mental processing of background noise, e.g., the noise of the pneumatic drill outside your window: you don't notice it until you become aware of the drill and realize that you have been hearing it for a long time. You are 'aware' but not consciously aware - is that a contradiction?
Case: sleepwalking. Sleepwalkers have their eyes open and use vision to navigate the world; visual information is poised for use in action. Sleepwalkers can eat, drink, even drive a car. But if you speak to them, they are slow or unresponsive and seemingly unaware of what they are doing. Are they A-conscious? P-conscious? Is there anything it is like to sleepwalk?
Searle's main premises and arguments
Premises:
P1. Brains cause minds.
P2. Syntax is not sufficient for semantics.
P3. Computer programs are entirely defined by their formal/syntactic structure.
P4. Minds have mental (semantic) contents.
Arguments:
P2 + P3 + P4 = C1: Computer programs by themselves are not sufficient for minds.
P1 + C1 = C2: Brains cannot cause minds merely by running a computer program (brains are not computers).
P1 = C3: Anything else that caused minds would have to have causal powers at least equivalent to those of the brain (be as good as the brain).
C1 + C3 = C4: For any artefact we build to have mental states equivalent to those of the human brain, running a computer program would not be sufficient (it would have to have causal powers like the brain's).
Putnam: Machine Functionalism
Putnam compared mental states to the functional or logical states of a computer. To be in mental state 'M' is to be in some physiological state or other that plays role 'R' in the relevant computer program. Computer programs mediate between inputs and outputs. The physiological state plays role 'R' in that it stands in a set of relations to physical inputs, outputs and other inner states that matches one-to-one the abstract input/output/logical-state relations codified in the computer program.
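A minimal sketch (my own illustration, not Putnam's) of the one-to-one matching idea: two systems with entirely different internal labels realize the same abstract machine table, because a mapping between their states preserves every input/output/state transition. All names here are invented:

```python
# Abstract machine table: (logical_state, input) -> (next_logical_state, output)
ABSTRACT = {
    ("M1", "ping"): ("M2", "ack"),
    ("M2", "ping"): ("M1", "nack"),
}

# Two different "physical" realizations of the same abstract table.
SILICON = {("reg_A", "ping"): ("reg_B", "ack"),
           ("reg_B", "ping"): ("reg_A", "nack")}
NEURAL  = {("c_fibre_on", "ping"): ("c_fibre_off", "ack"),
           ("c_fibre_off", "ping"): ("c_fibre_on", "nack")}

def realizes(physical, mapping):
    """Check that `mapping` (physical state -> logical state) is one-to-one
    and carries every physical transition onto the abstract table."""
    if len(set(mapping.values())) != len(mapping):
        return False
    for (p_state, inp), (p_next, out) in physical.items():
        if ABSTRACT.get((mapping[p_state], inp)) != (mapping[p_next], out):
            return False
    return True

print(realizes(SILICON, {"reg_A": "M1", "reg_B": "M2"}))            # True
print(realizes(NEURAL, {"c_fibre_on": "M1", "c_fibre_off": "M2"}))  # True
```

Both systems count as being in state 'M1' when their own state plays the M1-role - which is all, on this view, that being in a mental state amounts to.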
Epiphenomenalism
Qualia are epiphenomenal: secondary phenomena, by-products. "They do nothing, they explain nothing, they serve merely to soothe the intuitions of dualists, and it is left a total mystery how they fit into the worldview of science" (p. 135).
Two arguments:
- Qualia are causally inefficacious/impotent with respect to the physical world.
- Qualia are evolutionarily useless.
Epiphenomenalism and the supervenience relation:
- Supervenient: determined by/dependent on the properties they supervene upon.
- Epiphenomenal: grounded in some underlying causal processes.
Syntax vs. Semantics
Searle (1984, p. 31): "There is more to having a mind than having formal or syntactical processes". We need semantics, or mental content.
Syntax: formal/grammatical structure; how we present information.
Semantics: meaning; what the information is about.
Example of well-formed syntax without sense: "Colourless green ideas sleep furiously".
Chinese Room experiment
Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that he or she is talking to another Chinese-speaking human being. The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".
Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.
Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step by step, producing behaviour which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
Searle argues that without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking", and since it does not think, it does not have a "mind" in anything like the normal sense of the word. Therefore, he concludes, "strong AI" is false.
The experiment aims to show that computers cannot process information or think like human beings. Human thoughts are about things; therefore, they have semantic contents. Computers only process syntax (formal, grammatical structures). Searle argued that syntax does not provide one with semantics for free, concluding that computers cannot think like humans do. The experiment is supposed to show that only Weak AI is plausible; that is, a machine running a program is at most capable of simulating real human behaviour and consciousness. Thus, computers can act 'as if' they were intelligent, but can never be truly intelligent in the same way as human beings are.
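A toy sketch (mine, with invented rules - not Searle's text) of the room's job: the rulebook maps input symbols to output symbols, and nothing in the program corresponds to understanding:

```python
# A toy "Chinese room": the rulebook pairs input shapes with output shapes.
# The program matches symbols it never interprets.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",       # rule: on seeing these squiggles, emit those
    "你会说中文吗？": "当然会。",
}

def room(input_symbols: str) -> str:
    """Follow the rulebook. No step here involves knowing what any symbol means."""
    return RULEBOOK.get(input_symbols, "请再说一遍。")  # default: 'please say it again'

print(room("你好吗？"))  # fluent-looking output; zero understanding inside
```

The systems reply and the robot reply (above) dispute whether this describes the whole system or only its rule-following part.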
Externalism
Semantic externalism: after a term has been baptized ('this is what we call a cat'), reality determines whether the word has been used correctly or not.
Externalism in the philosophy of mind: the content of thoughts is determined by the environment of the thinker. It concerns intending, desiring, believing. The claim is that the character of such mental states does not supervene on the intrinsic properties of people. What follows: perfect duplicates as regards intrinsic properties could be in different mental states.
Strong AI
Strong thesis: computers can be programmed to think, and human minds are computers that have been programmed by 'biological hardwiring' and experience.
Strong AI: a correctly written program running on a machine actually is a mind. "Mind is to the brain as the program is to the computer hardware (...) On this view, any physical system whatever that had the right program with the right inputs and outputs would have a mind in exactly the same sense as you and I have minds" (Searle 1984, p. 28).
Suggestion: computers could be intelligent and have a conscious mind just like ours. They should be able to reason, solve puzzles, make judgements, plan, learn, communicate, etc.
(Chronic) Problems with Functionalism
We have still not answered:
- How is it that pain feels a certain way? (phenomenal character)
- Propositional attitudes represent certain states of affairs: beliefs and desires are about something; they have content. How can a purely physical entity or state have the property of being about something (that is not there at the time)? (intentionality)
Weak AI
A machine running a program is at most capable of simulating real human behaviour and consciousness. Machines can act 'as if' they were intelligent.
Similarly, Pain
What is important is not that the C-fibres are firing, but that their firing contributes to the operation of the organism as a whole. To be in pain is to be in some state or other (of whatever biochemical description) that plays the same causal role as the firing of C-fibres does in human beings.
- Input: pain is caused by bodily damage.
- Output: pain causes behaviour aimed at relieving the pain.
- Mediation: the sensory input causes the belief that one is in pain and the desire to get rid of the pain (mental-state causation), which cause the behavioural output.
P.s.: one can be a functionalist and not be a materialist! Functionalists are usually materialists (they think that mental states are in fact realized in a material medium), but they don't have to be.
Consider pain as an example:
- Identity theorist: science tells us that what realizes the pain-role in humans is the firing of C-fibres, so being in pain just is having firing C-fibres. Creatures without C-fibres cannot be in that state.
- Functionalist: pain is the state of having the pain-role played by some internal state or other. Having firing C-fibres is but one way to do this; there could be others. Creatures without C-fibres can also be in this role state.
Putnam (1975): "the functional state hypothesis is not incompatible with dualism!" (p. 436).
