Chapters for history of psych test 1


Adequate Stimulation

Although Müller claimed that various nerves contain their own specific energy, he did not think that all the sense organs are equally sensitive to the same type of stimulation. Rather, each of the types of sense organs is maximally sensitive to a certain type of stimulation. Müller called this "specific irritability," and it was later referred to as adequate stimulation. The eye is most easily stimulated by light waves, the ear by sound waves, the skin by pressure, and so on. The eye can be stimulated by pressure, but pressure is a less adequate stimulus for vision than is a light wave. As we experience the environment, this differential sensitivity of the various senses provides an array of sensations. In this way, a "picture" of the physical environment is formed, but the nature of the picture—for example, how articulated it is—depends on the sensory systems that humans possess. For Müller, then, the correspondence between our sensations and objects in the physical world is determined by our senses and their specific irritability. Müller agonized over the question of whether the characteristics of the nerve itself or the place in the brain where the nerve terminates accounts for specificity. He concluded that the nerve was responsible, but subsequent research proved that brain location is the determinant.

Helmholtz's Contributions

Although Helmholtz was an empiricist in his explanations of sensation and perception, he did reflect the German Zeitgeist by postulating an active mind. According to Helmholtz, the mind's task was to create a reasonably accurate conception of reality from the various "signs" that it receives from the body's sensory systems. Helmholtz assumed that a dynamic relationship exists among volition, sensation, and reflection as the mind attempts to create a functional view of external reality. Helmholtz's view of the mind differed from that of most of the British empiricists and French sensationalists because they saw the mind as largely passive. For Helmholtz the mind's job was to construct a workable conception of reality given the incomplete and perhaps distorted information furnished by the senses (Turner, 1977). Although Helmholtz did postulate an active mind, he accepted the empirical explanation of the origins of the contents of that mind. In his explanations of sensation (the mental event that results from sensory stimulation) and perception (sensation plus unconscious inference), Helmholtz was emphatically empirical and unequivocally scientific. He showed that nerve transmission is not instantaneous, as had previously been believed, but that it is rather slow and reflects the operation of physical processes. More than anyone before him, Helmholtz showed with experimental rigor the mechanisms by which we do commerce with the physical world—mechanisms that could be explained in terms of objective, physical laws. Although he found that the match between what is physically present and what is experienced psychologically is not perfect, he could explain the discrepancy in terms of the properties of the receptor systems and the unconscious inferences of the observer. No mystical, unscientific forces were involved. Helmholtz's work brought physics, chemistry, physiology, and psychology closer together. 
In so doing, it paved the way for the emergence of experimental psychology, which was in many ways an inevitable next step. For an excellent discussion of Helmholtz's contributions to modern science and of the cultural climate in which they were made, see Cahan (1994). Helmholtz realized a lifelong ambition when he was appointed professor of physics at the University of Berlin in 1871. In 1882 the German emperor granted him noble status, and thereafter his name was Hermann von Helmholtz. In 1893 Helmholtz came to the United States to see the Chicago World's Fair and to visit with William James. On his way back to Germany, he suffered a fall aboard ship and sustained cuts and bruises but was apparently not badly injured. Following the accident, however, he complained of a general lack of energy. The next year he suffered a cerebral hemorrhage and died on September 8, 1894.

Pierre Gassendi

Pierre Gassendi (1592-1655), a contemporary of both Descartes and Hobbes, lived the quiet life of a studious priest and was respected as a mathematician and philosopher. Both Locke and Newton acknowledged a debt to Gassendi, whose major goal was to denounce Descartes's purely deductive (axiomatic) and dualistic philosophy and replace it with an observational (inductive) science based on physical monism. Gassendi offered several criticisms of Descartes's proposed mind-body dualism, the most telling of which was the observation that the mind, if unextended (immaterial), could have no knowledge of extended (material) things. Only physical things, he said, can influence and be influenced by physical things. He also could not understand why Descartes spent so much time proving that he existed when it was obvious, to Gassendi, that anything that moves exists. Descartes could have said, "I move, therefore I am." In fact, according to Gassendi, such a conclusion would have been a vast improvement over "I think, therefore I am." Continuing his attack on Descartes, Gassendi asked why "lower" animals could move themselves quite well without the aid of a mind while humans needed one. Why not, Gassendi asked, ascribe the operations attributed to the mind to the functions of the brain (which is physical)? In other words, Gassendi saw no reason for postulating an unextended (immaterial) mind to explain any human activity. Gassendi concluded that humans are nothing but matter and therefore could be studied and understood just as anything else in the universe could. Gassendi suggested a physical monism not unlike the one that the early Greek atomists, such as Democritus and later the Epicureans, had offered. In fact, Gassendi was especially fond of Epicurus, as well as later Epicurean philosophers, and he was responsible for reviving interest in them. For example, he accepted the Epicurean principle of long-term hedonism as the only reasonable guide for human conduct.
For these reasons, Gassendi is often considered the founder of modern materialism, but that honor could as easily be given to Gassendi's contemporary Hobbes.

Bell-Magendie Law

Until the 19th century, two views prevailed about what nerves were. One was Descartes's view that a nerve consisted of fibers that connected sense receptors to the brain. These fibers were housed in hollow tubes that in turn transmitted the "animal spirits" from the brain to the muscles. The second was Hartley's view that nerves were the means by which "vibrations" were conducted from the sense receptors to the brain and from the brain to the muscles. In 1811 the great British physiologist Charles Bell (1774-1842) printed and distributed to his friends 100 copies of a pamphlet that summarized his groundbreaking research on the anatomical and functional discreteness of sensory and motor nerves. Operating on rabbits, Bell demonstrated that sensory nerves enter the posterior (dorsal) roots of the spinal cord and the motor nerves emerge from the anterior (ventral) roots. Bell's discovery separated nerve physiology into the study of sensory and motor functions—that is, into a study of sensation and movement. Bell's finding was significant because it demonstrated that specific mental functions are mediated by different anatomical structures. That there are sensory and motor nerves is actually an idea articulated from empirical observations as far back as Erasistratus of Alexandria (ca. 300 B.C.) and reinforced by Galen's study of gladiators and soldiers in the second century A.D. It was Bell, however, who provided the scientists of his day with clear-cut experimental evidence. As mentioned, Bell circulated his findings only among his friends. This explains why the prominent French physiologist François Magendie (1783-1855) could publish similar results 11 years later without being aware of Bell's findings. A heated debate arose among Bell's and Magendie's followers about the priority of the discovery. History has settled the issue by referring to the discovery as the Bell-Magendie law (for more details on the Bell-versus-Magendie controversy, see Cranefield, 1974).
After Bell and Magendie, it was no longer possible to think of nerves as general conveyers of vibrations or spirits. Now a "law of forward direction" governed the nervous system. Sensory nerves carried impulses forward from the sense receptors to the brain, and motor nerves carried impulses forward from the brain to the muscles and glands. The Bell-Magendie law demonstrated separate sensory and motor tracts in the spinal cord and suggested separate sensory and motor regions in the brain.

John Locke

John Locke (1632-1704) was born six years after the death of Francis Bacon. His father was a Puritan, a landowner, and an attorney. Locke was a 17-year-old student at Westminster School when, on January 30, 1649, King Charles I was executed as a traitor to his country. The execution, which Locke may have witnessed, took place in the courtyard of Whitehall Palace, which was close to Locke's school. Locke was born 10 years before the outbreak of civil war, and he lived through this great rebellion that was so important to English history. It was at least partially due to the Zeitgeist then that Locke, as well as several of his fellow students, developed a lifelong interest in politics. Indeed, Locke was to become the most influential political philosopher in post-Renaissance Europe. In 1652 Locke, at age 20, obtained a scholarship from Oxford University, where he earned his bachelor's degree in 1656 and his master's degree in 1658. His first publication was a poem that he wrote as an undergraduate—a tribute to Oliver Cromwell. Locke remained at Oxford for 30 years, having academic appointments in Greek, rhetoric, and moral philosophy. He also studied medicine, and on his third attempt, he finally attained his doctorate in medicine in 1674. It was through his medical studies that Locke met Robert Boyle (1627-1691), who influenced him greatly. Boyle was one of the founders of the Royal Society and of modern chemistry. Locke became Boyle's friend, student, and research assistant. From Boyle, Locke learned that physical objects were composed of "minute corpuscles" that have just a few intrinsic qualities. These corpuscles can be experienced in many different arrangements. Some arrangements result in the experience of primary qualities and some in the experience of secondary qualities. We will see shortly that Boyle's "corpuscular hypothesis" strongly influenced Locke's philosophy. Locke became a member of the Royal Society, and as a member performed some studies and demonstrations in chemistry and meteorology.
Newton was only 10 years old when Locke arrived at Oxford, but in 1689 the two men met and Locke referred to him as the "incomparable Mr. Newton." Locke corresponded with Newton for the rest of his life, primarily on theological matters. Among Locke's lesser-known works were his editing of Boyle's General History of the Air, an edition of Aesop's Fables designed to help children learn Latin, and a book on money and interest rates (Gregory, 1987). His most famous work, however, and the one most important to psychology was An Essay Concerning Human Understanding (1690). Locke worked on the Essay for 17 years, and it was finally published when Locke was almost 60 years old. After its original publication, Locke revised the Essay several times, and it eventually went into five editions. The fifth edition appeared posthumously in 1706, and it is on this final edition that most of what follows is based. After publishing the Essay, Locke wrote prolifically on such topics as education, government, economics, and Christianity. Voltaire (1694-1778) greatly admired Locke and did much to create a positive impression of Locke on the continent, especially in France. Although Hobbes was clearly an empiricist, it was Locke who shaped most of subsequent British empiricism. For example, most of the British empiricists followed Locke in accepting a mind-body dualism; that is, they rejected Hobbes's physical monism (materialism). Whereas Hobbes equated mental images with the motions in the brain that were caused by external motions acting on the sense receptors, Locke was content to say that somehow sensory stimulation caused ideas. Early in the Essay, Locke washed his hands of the question as to how something physical could cause something mental—it just did. Opposition to Innate Ideas Locke's Essay was, in part, a protest against Descartes's philosophy.
It was not Descartes's dualism that Locke attacked but his notion of innate ideas. Despite Hobbes's efforts, the notion of innate ideas was still very popular in Locke's time. Especially influential was the belief that God had instilled in humans innate ideas of morality. Locke observed that if the mind contained such innate ideas, then all humans should have those same ideas, and clearly they do not. Humans, he said, are not born with any innate ideas, whether they be moral, logical, or mathematical. Where, then, do all the ideas that humans have come from? Locke's (1706/1974) famous answer was as follows: Let us then suppose the mind to be, as we say, white paper, void of all characters, without any ideas; how comes it to be furnished? Whence comes it by that vast store which the busy and boundless fancy of man has painted on it with an almost endless variety? Whence has it all the materials of reason and knowledge? To this I answer, in one word, from experience. In that all our knowledge is founded, and from that it ultimately derives itself. Our observation employed either about external sensible objects, or about the internal operations of our minds perceived and reflected on by ourselves, is that which supplies our understandings with all the materials of thinking. These two are the fountains of knowledge, from whence all the ideas we have, or can naturally have, do spring. (pp. 89-90) Sensation and Reflection For Locke, an idea was simply a mental image that could be employed while thinking: "Whatsoever the mind perceives in itself, or is the immediate object of perception, thought, or understanding, that I call idea" (1706/1974, pp. 111-112). For Locke, all ideas come from either sensation or reflection. That is, ideas result either by direct sensory stimulation or by reflection on the remnants of prior sensory stimulation. 
Thus, the source of all ideas is sensation, but the ideas obtained by sensation can be acted on and rearranged by the operations of the mind, thereby giving rise to new ideas. The operations the mind can bring to bear on the ideas furnished by sensation include "perception, thinking, doubting, believing, reasoning, knowing, and willing" (Locke, 1706/1974, p. 90). Locke is often said to have postulated a passive mind that simply received and stored ideas caused by sensory stimulation. This was true, however, only of sensations. Once the ideas furnished by sensation are in the mind, they can be actively transformed by the mental operations involved in reflection. It is important to note, however, Locke's insistence that all knowledge is ultimately derived from sensory experience. Although the contents of the mind are derived from sensory stimulation, the operations of the mind are part of human nature; they are innate. As an empiricist, Locke opposed the notion of specific innate ideas but not innate operations (faculties) of the mind. Ideas and Emotions Simple ideas, whether from sensation or reflection, constitute the atoms (corpuscles) of experience because they cannot be divided or analyzed further into other ideas. Complex ideas, however, are composites of simple ideas and therefore can be analyzed into their component parts (simple ideas). When the operations of the mind are applied to simple ideas through reflection, complex ideas are formed. That is, through such operations as comparing, remembering, discriminating, combining and enlarging, abstracting, and reasoning, simple ideas are combined into complex ones. As Locke (1706/1974) explained, Simple ideas, the materials of all our knowledge, are suggested and furnished to the mind only by ... sensation and reflection. 
When the understanding is once stored with these simple ideas, it has the power to repeat, compare, and unite them, even to an almost infinite variety, and so can make at pleasure new complex ideas. But it is not in the power of the most exalted wit or enlarged understanding, by any quickness or variety of thought, to invent or frame one new simple idea in the mind, not taken in by the ways before mentioned: nor can any force of the understanding destroy those that are there. I would have anyone try to fancy any taste which had never affected his palate, or frame the idea of a scent he had never smelt: and when he can do this, I will also conclude that a blind man hath ideas of colours, and a deaf man true distinct notions of sounds. (pp. 99-100) The mind, then, can neither create nor destroy ideas, but it can arrange existing ideas in an almost infinite number of configurations. Locke also maintained that the feelings of pleasure or pain accompany ideas. He believed that the other passions (emotions)—like love, desire, joy, hatred, sorrow, anger, fear, despair, envy, shame, and hope—were all derived from the two basic feelings of pleasure and pain. Things that cause pleasure are good, and things that cause pain are evil. For Locke, the "greatest good" was the freedom to think pleasurable thoughts. Like Hobbes's, Locke's theory of human motivation was hedonistic because it maintained that humans are motivated by the search for pleasure and the avoidance of pain. For Locke then, the information that the senses provided was the stuff the mind thought about and had emotional reactions toward. Primary and Secondary Qualities The distinction between primary and secondary qualities is the distinction that several early Greeks, and later Galileo, made between what is physically present and what is experienced psychologically.
However, it was Locke's friend and teacher Robert Boyle who introduced the terms primary qualities and secondary qualities, and Locke borrowed the terms from him (Locke, 1706/1974). Unfortunately, primary and secondary qualities have been defined in two distinctly different ways through the centuries. One way has been to define primary qualities as attributes of physical reality and secondary qualities as attributes of subjective or psychological reality. That is, primary qualities refer to actual attributes of physical objects or events, but secondary qualities refer to psychological experiences that have no counterparts in the physical world. We covered this approach in our discussion of Galileo in Chapter 4, but Boyle and Locke took a different path. For them, both primary and secondary qualities referred to characteristics of the physical world; what distinguished them was the type of psychological experience they caused. Following Boyle, Locke referred to any aspect of a physical object that had the power to produce an idea as a quality. Primary qualities have the power to create in us ideas that correspond to actual attributes of physical objects—for example, the ideas of solidity, extension, shape, motion or rest, and quantity. With primary qualities, there is a match between what is physically present and what is experienced psychologically. The secondary qualities of objects also have the power to produce ideas, but the ideas they produce do not correspond to anything in the physical world. The ideas produced by secondary qualities include those of color, sound, temperature, and taste. Locke's paradox of the basins dramatically demonstrated the nature of ideas caused by secondary qualities. Suppose we ask, is temperature a characteristic of the physical world? In other words, is it not safe to assume that objects in the physical world are hot or cold or somewhere in between? Looked at in this way, temperature would be a primary quality.
Locke beckoned his readers to take three water basins: one containing cold water (basin A), one containing hot water (basin B), and the other containing warm water (basin C). If a person places one hand in basin A and the other in basin B, one hand will feel hot and the other cold, supporting the contention that hot and cold are properties of the water (that is, that temperature is a primary quality). Next, Locke instructed the reader to place both hands in basin C, which contains the warm water. To the hand that was previously in basin A (cold water), the water in basin C will feel hot; to the hand that was previously in basin B (hot water), the water will feel cold, even though the temperature of the water in basin C is physically the same for both hands. Thus, Locke demonstrated that the experience of hot and cold depended on the experiencing person, and temperature therefore reflected secondary qualities. For Locke, the important point was that some of our psychological experiences reflected the physical world as it actually was (those experiences caused by primary qualities) and some did not (those experiences caused by secondary qualities). He did not say, as Galileo had, that subjective reality was inferior to physical reality. For Locke, subjective reality could be studied as objectively as physical reality, and he set out to do just that. Association of Ideas Associationism is "a psychological theory which takes association to be the fundamental principle of mental life, in terms of which even the higher thought processes are to be explained" (Drever, 1968, p. 11). According to this definition, it is possible to reject associationism and still accept the fact that associative learning does occur. Such was the case with Locke. In fact, Locke's discussion of association came as an afterthought, and a short chapter titled "Association of Ideas" did not appear until the fourth edition of Essay. 
Even then, association was used primarily to explain errors in reasoning. As we have seen, Locke believed that most knowledge is attained by actively reflecting on the ideas in the mind. By comparing, combining, relating, and otherwise thinking about ideas, we attain our understanding of the world, morality, and ourselves. Where, then, does association enter into Locke's deliberations? Locke used association to explain the faulty beliefs that can result from accidents of time or circumstance. Locke believed that ideas that succeeded each other because of natural or rational reasons (such as when the odor of bread baking causes one to have the idea of bread) represented true knowledge but that ideas that became associated fortuitously, because of their contiguity, could result in unreasonable beliefs. As examples of unreasonable beliefs, Locke included the following: A person who eats too much honey becomes sick and thereafter avoids even the thought of honey (today we call the subsequent avoidance of substances that cause illness the Garcia Effect—in honor of Hispanic American psychologist John Garcia, who received APA's award for Distinguished Scientific Contribution in 1979 for his research on such phenomena); a child whose maid associates darkness with evil spirits and goblins will grow up with a fear of darkness; a person undergoing painful surgery will develop an aversion to the surgeon; and children who are taught reading by harsh corrective methods will develop a lifelong disdain of reading. Following Drever's (1968) definition of associationism as an attempt to reduce all mental activity to associative principles, Locke's philosophy certainly did not exemplify associationism. Although his short chapter on the association of ideas did mention the learning of natural associations, he focused on the learning of those that are "unnatural."
As we shall see, for the British empiricists and French sensationalists who followed Locke, the laws of association took on a greater significance. In their efforts to become "Newtons of the mind," they argued that ideas corresponded to Boyle's corpuscles and that the laws of association provided the gravity that held ideas together. Education Locke's book Some Thoughts Concerning Education (1693/2000) had a profound and long-lasting influence on education throughout the Western world. By insisting that nurture (experience) was much more important than nature (innate ability) for character development, his views on education were in accordance with his empirical philosophy. For Locke, important education took place both at home and at school. He encouraged parents to increase stress tolerance in their children (a process he called hardening) by having them sleep on hard rather than soft beds. Exposing children to moderate amounts of coldness and wetness would also increase tolerance for the inevitable hardships of life. Crying should be discouraged with physical punishment, if necessary. Parents should provide their children with sufficient sleep, food, fresh air, and exercise because good health and effective learning are inseparable. Concerning classroom practices, mild physical punishment of students was advocated but severe physical punishment was not. Teachers, Locke believed, should always make the learning experience as pleasant as possible so that learning beyond school will be sought. If learning occurs under aversive conditions, it will be avoided both in school and beyond. A step-by-step approach to teaching complex topics was recommended to avoid overwhelming and thus frustrating students. For the same reason, excessive and overly rigorous assignments should be avoided. The primary job of the teacher should be to recognize and praise student accomplishments. How does one deal with a child's irrational fears?
Locke used a child with a fear of frogs to exemplify his technique: Your child shrieks, and runs away at the sight of a Frog; Let another catch it, and lay it down at a good distance from him: At first accustom him to look upon it; When he can do that, then come nearer to it, and see it leap without Emotion; then to touch it lightly when it is held fast in another's hand; and so on, till he can come to handle it as confidently as a Butterfly, or a Sparrow. By the same way any other vain Terrors may be remov'd; if Care be taken, that you go not too fast, and push not the Child on to a new degree of assurance, till he be thoroughly confirm'd in the former. (Locke, 1693/2000, pp. 177-178) The advice given by Locke for dealing with irrational fears was remarkably similar to the kind of behavioral therapy employed many years later by Mary Cover Jones (see Chapter 12). With the exception of teaching stress tolerance, Locke's ideas concerning education now appear rather routine. They were, however, anything but routine when he first proposed them. Government by the People and for the People Although for us he is a patriarch of British empiricism, Locke likely saw himself as a political philosopher. Locke attacked not only the notion of innate ideas but also the notion of innate moral principles. He believed that much dogma was built on the assumption of one innate moral truth or another and that people should seek the truth for themselves rather than having it imposed on them. For this and other reasons, empiricism was considered to be a radical movement that sought to replace religion based on revelation with natural law. Influential politically, Locke challenged the divine right of kings and proposed a government by and for the people. His political writings on liberty and the social contract were read enthusiastically, and his ideas were influential in the drafting of the U.S. Declaration of Independence.

Étienne Bonnot de Condillac

Étienne Bonnot de Condillac (1714-1780) was born into an aristocratic family at Grenoble. He was a contemporary of Hume, Rousseau, and Voltaire. He was educated at a Jesuit seminary in Paris, but shortly after his ordination as a Roman Catholic priest, he began frequenting the literary and philosophical salons of Paris and gradually lost interest in his religious career. In fact, he became an outspoken critic of religious dogma. Condillac extended Locke's Essay into French philosophy, and the title of his first book indicates a deep appreciation for Locke's empiricism: Essay on the Origin of Human Knowledge: A Supplement to Mr. Locke's Essay on the Human Understanding (1746). Eight years later, in his Treatise on the Sensations (1754), Condillac suggested that Locke had unnecessarily attributed too many innate powers to the mind. Condillac was convinced that all powers Locke attributed to the mind could be derived simply from the abilities to sense, to remember, and to experience pleasure and pain. The Sentient Statue To make his point, Condillac (1754/1930) asked his readers to imagine a marble statue that can perceive, remember, and feel but has only the sense of smell. The mental life of the statue consists only of odors; beyond that, it cannot have any conception of any other things external to itself nor can it have sensations of color, sound, or taste. The statue does have the capacity for attention because it will attend to whatever odor it experiences. With attention comes feeling because attending to a pleasant odor causes enjoyment and attending to an unpleasant odor causes an unpleasant feeling. If the statue had just one continuous pleasant or unpleasant experience, it could not experience desire because it would have nothing with which to compare the experience. If, however, a pleasant sensation ended, remembering it, the statue could desire it to return. 
Likewise, if an unpleasant experience ended, remembering it, the statue could desire that it not return. For Condillac then, all desire is based on the experiences of pleasure and pain. The statue loves pleasant experiences and hates unpleasant ones. The statue, given the ability to remember, can not only experience current odors but also remember ones previously experienced. Typically, the former provide a more vivid sensation than the latter. When the statue smells a rose at one time and a carnation at another, it has the basis for comparison. The comparison can be made by currently smelling one and remembering the other or by remembering both odors. With the ability to compare comes the ability to be surprised. Surprise is experienced whenever an experience the statue has departs radically from those it is used to: "It cannot fail to notice the change when it passes suddenly from a state to which it is accustomed to a quite different state, of which it has as yet no idea" (Condillac, 1754/1930, p. 10). Also with the ability to compare comes the ability to judge. As with remembering in general, the more comparisons and judgments the statue makes, the easier making them becomes. Sensations are remembered in the order in which they occur; memories then form a chain. This fact allows the statue to recall distant memories by passing from one idea to another until the most distant idea is recalled. According to Condillac, without first recalling intermediary ideas, distant memories would be lost. If the statue remembers sensations in the order they occurred, the process is called retrieval. If they are recalled in a different order, it is called imagination. Dreaming is a form of imagination. Retrieving or imagining what is hated causes fear. Retrieving or imagining what is loved causes hope. The statue, having had several sensations, can now notice that they can be grouped in various ways, such as intense, weak, pleasant, and unpleasant.
When sensations or memories are grouped in terms of what they have in common, the statue has formed abstract ideas, for example, pleasantness. Also by noting that some sensations or memories last longer than others, the statue develops the idea of duration. When our statue has accumulated a vast number of memories, it will tend to dwell more on the pleasant ones than on the unpleasant. In fact, according to Condillac, it is toward the seeking of pleasure or the avoidance of pain that the statue's mental abilities are ultimately aimed: "Thus it is that pleasure and pain will always determine the actions of [the statue's] faculties" (Condillac, 1754/1930, p. 14). The statue's self, ego, or personality consists of its sensations, its memories, and its other mental abilities. With its memories, it is capable of desiring sensations other than the one it is now having; or by remembering other sensations, it can wish its present sensation to continue or terminate. Experiences (in this case, odors) never experienced cannot become part of the statue's mental life, which consists only of its sensations and its memories of sensations. Clearly, Condillac was not writing about statues but was discussing how human mental abilities could be derived from sensations, memories, and a few basic feelings. Humans, of course, have more than one sense modality; that fact makes humans much more complicated than the statue, but the principle is the same. There was no need therefore for Locke and others to postulate a number of innate powers of the mind. 
According to Condillac (1754/1930), the powers of the mind develop as a natural consequence of experienced sensation: If we bear in mind that recollecting, comparing, judging, discerning, imagining, wondering, having abstract ideas, and ideas of number and duration, knowing general and particular truths, are only different modes of attention; that having passions, loving, hating, hoping, fearing, wishing, are only different modes of desire; and finally that attention and desire have their origin in feeling alone; we shall conclude that sensation contains within it all the faculties of the soul. (p. 45) In his analysis of language, Condillac (1746/2001) argued that the meaning of words is determined exclusively by how they are habitually used: To understand how mankind came to agreement among themselves about the signification of words they wished to put into use, it is sufficient to observe that they pronounced them in circumstances in which everyone was obliged to refer to the same perceptions. By that means they fixed the meaning with greater exactness in proportion as the circumstances, by frequent repetition, habituated the mind to connect particular ideas to particular signs. The language of action removed the ambiguities and double meanings which in the beginning would occur very often. (p. 156) There is considerable similarity between Condillac's analysis of language and Wittgenstein's later analysis, which we will discuss in Chapter 20. Claude-Adrien Helvétius and Others Claude-Adrien Helvétius (1715-1771) was born in Paris and educated by Jesuits. He became wealthy as a tax collector, married an attractive countess, and retired to the countryside where he wrote and socialized with some of Europe's finest minds. In 1758 he published Essays on the Mind, which was condemned by the Sorbonne and burned. 
His posthumous A Treatise on Man: His Intellectual Faculties and His Education (1772) moved Jeremy Bentham to claim that what Francis Bacon had done for our understanding of the physical world, Helvétius had done for our understanding of the moral world. Also, James Mill claimed to have used Helvétius's philosophy as a guide in the education of his son, John Stuart. Helvétius did not contradict any of the major tenets of British empiricism, nor did he add any new ones. Rather, he explored in depth the implication of the contention that the contents of the mind come only from experience. Specifically, control experience and you control the contents of the mind. The implications of this maxim for education and even the structure of society were clear, and in the hands of Helvétius, empiricism became radical environmentalism. All manner of social skills, moral behavior, and even genius could be taught through the control of experiences (education). B. Russell (1945) said of Helvétius, "His doctrine is optimistic, since only a perfect education is needed to make men perfect. There is a suggestion that it would be easy to find a perfect education if the priests were got out of the way" (p. 722). Because Helvétius too was a hedonist, education in general terms could be viewed as the manipulation of pleasurable and painful experiences. Today we might state this as reinforcing desirable thoughts and behavior and either ignoring or punishing undesirable thoughts and behavior. In this sense, Helvétius's position has much in common with that of the modern behaviorists and the types of social engineering sometimes associated with them. Beyond Helvétius there were other French sensationalists that deserve at least a mention. For example, the disturbing works of the Marquis de Sade (1740-1814) illustrate hedonism as a natural philosophy. 
More canonically, François-Pierre Maine de Biran's (1766-1824) initial writings expanded on Locke's philosophy and added a careful consideration of habit formation (learning). Antoine Destutt de Tracy (1754-1836) juxtaposed advances in physiology with empirical philosophy and advocated for social and educational reform grounded in such mechanistic ideas. He was especially influential on the American president Thomas Jefferson. Pierre Jean Georges Cabanis (1757-1808) was a physician during the French Revolution. Interested in the relationship between mind and body, he studied those executed by the guillotine to show that no trace of consciousness endured beheading. For Cabanis the brain was an organ analogous to the stomach; its role was to digest sensory information, and mental activities, like digestive activities, were the result of organ functioning.

George Berkeley

George Berkeley (1685-1753) was born in Kilkenny, Ireland. He first attended Kilkenny College; then in 1700 at the age of 15, he entered Trinity College (University of Dublin), where he earned his bachelor's degree in 1704 at the age of 19 and his master's degree in 1707 at the age of 22. He received ordination as a deacon of the Anglican church at the age of 24 and that same year published An Essay Towards a New Theory of Vision (1709). A year later he published what was perhaps his most important work, A Treatise Concerning the Principles of Human Knowledge (1710). His third major work, Three Dialogues Between Hylas and Philonous, was published during his first trip to England in 1713. Berkeley's fame was firmly established by these three books before he was 30 years old. He continued at Trinity College and lectured in divinity and Greek philosophy until 1724, when he became involved in the founding of a new college in Bermuda intended for both native and white colonial Americans. In 1728 he sailed to Newport, Rhode Island, where he waited for funding for his project. The hoped-for government grants were not forthcoming, however, and Berkeley returned to London. Berkeley's home in Whitehall (near Newport) still stands as a museum containing artifacts of his visit to colonial America. For the last 18 years of his life, Berkeley was an Anglican bishop of Cloyne in County Cork, Ireland. He died suddenly on January 14, 1753, at Oxford, where he had been helping his son enroll as an undergraduate. Just over a hundred years later, the site of the first University of California campus was named for Bishop Berkeley. Berkeley observed that the downfall of Scholasticism, caused by attacks on Aristotle's philosophy, had resulted in widespread religious skepticism, if not actual atheism. He also noted that the new philosophy of materialism was further deteriorating the foundations of religious belief. 
The worldview created by materialistic philosophy, Berkeley felt, was that all matter is atomic or corpuscular in nature and that all physical events could be explained in terms of mechanical laws. The world becomes nothing but matter in motion, and the motion of moving objects is explained by natural laws, which are expressible in mathematical terms. Berkeley correctly perceived that materialistic philosophy was pushing God farther and farther out of the picture, and thus it was dangerous, if not potentially fatal, to both religion and morality. Berkeley therefore decided to attack materialism at its very foundation—its assumption that matter exists. "To Be Is to Be Perceived." Berkeley's solution to the problem was bold and sweeping; he attempted to demonstrate that matter does not exist and that all claims made by materialistic philosophy must therefore be false. In Berkeley's denial of matter, he both agreed and disagreed with Locke. He agreed with Locke that human knowledge is based only on ideas. However, Berkeley strongly disagreed with Locke's contention that all ideas are derived from interactions with the empirical world. Even if there were such a world, Berkeley said, we could never know it directly. Reality consists of our perceptions and nothing more. In his discussion of primary and secondary qualities, Berkeley referred to the former as the supposed attributes of physical things and to the latter as ideas or perceptions. Having made this distinction, he then rejected the existence of primary qualities. For him, only secondary qualities (perceptions) exist. This follows from his contention that "to be is to be perceived." Of course, Berkeley's contention that everything that exists is a perception raises several questions. For example, if reality is only a matter of perception, does reality cease to exist when one is not perceiving it? 
And, on what basis can it be assumed that the reality one person perceives is the same reality that others perceive? Still, we must realize that Berkeley did not deny the existence of external reality. What he did deny was that external reality consisted of inert matter, as the materialists maintained: I do not argue against the existence of any one thing that we can apprehend, either by sense or reflection. That the things I see with my eyes and touch with my hands do exist, really exist, I make not the least question. The only thing whose existence we deny is that which philosophers call Matter or corporeal substance. (Armstrong, 1965, p. 74) What creates external reality is God's perception. It is the fact that external reality is God's perception that makes it stable over time and the same for everyone. The so-called laws of nature are ideas in God's mind. On rare occasions, God may change his mind and thus vary the "laws of nature," creating "miracles," but most of the time his perceptions remain the same. What we experience through our senses, then, are the ideas in God's mind; with experience, the ideas in our minds come to resemble those in God's mind, in which case it is said that we are accurately perceiving external reality. "To be is to be perceived," and God perceives the physical world, thus giving it existence; we perceive God's perceptions, thus giving those perceptions life in our minds as ideas. If secondary qualities are understood as ideas whose existence depends on a perceiver, then all reality consists of secondary qualities. Principle of Association According to Berkeley, each sense modality furnishes a different and separate type of information (idea) about an object. It is only through experience that we learn that certain ideas are always associated with a specific object: By sight I have the ideas of light and colours, with their several degrees and variations. 
By touch I perceive hard and soft, heat and cold, motion and resistance; and of all these more and less either as to quantity or degree. Smelling furnishes me with odours; the palate with tastes; and hearing conveys sounds to the mind in all their variety of tone and composition. And as several of these are observed to accompany each other, they come to be marked by one name, and so to be reputed as one thing. Thus, for example, a certain colour, taste, smell, figure, and consistence having been observed to go together, are accounted one distinct thing, signified by the name apple; other collections of ideas constitute a stone, a tree, a book, and the like sensible things; which as they are pleasing or disagreeable excite the passions of love, hatred, joy, grief, and so forth. (Armstrong, 1965, p. 61) Thus, the objects we name are aggregates of sensations that typically accompany each other. Like Locke, Berkeley accepted the law of contiguity as his associative principle. Unlike Locke, however, he did not focus on fortuitous or arbitrary associations. For Berkeley, all sensations that are consistently experienced together become associated. In fact, for Berkeley, objects were aggregates of sensations and nothing more. Berkeley's Theory of Distance Perception Berkeley agreed with Locke that if a person who was born blind was later able to see, he or she would not be able to distinguish a cube from a triangle. Such discrimination requires the association of visual and tactile experiences. Berkeley went further by saying that such a person would also be incapable of perceiving distance. The reason is the same. For the distance of an object to be judged properly, many sensations must be associated. For example, when viewing an object, the person receives tactile stimulation while walking to it. After several such experiences from the same and from different distances, the visual characteristics of an object alone suggest its distance. 
That is, when the object is small, it suggests great distance, and when large, it suggests a short distance. Thus, the cues for distance are learned through the process of association. Also, stimulation from other sense modalities becomes a cue for distance for the same reason. Berkeley gave the following example: Sitting in my study I hear a coach drive along the street; I look through the casement and see it; I walk out and enter into it. Thus, common speech would incline one to think I heard, saw, and touched the same thing, to wit, the coach. It is nevertheless certain the ideas intromitted by each sense are widely different, and distinct from each other; but, having been observed constantly to go together, they are spoken of as one and the same thing. By the variation of the noise, I perceive the different distances of the coach, and that it approaches before I look out. Thus, by the ear I perceive distance just after the same manner as I do by the eye. (Armstrong, 1965, pp. 302-303) With his empirical theory of distance perception, Berkeley was refuting the theory held by Descartes and others that distance perception was based on the geometry of optics. According to the latter theory, a triangle is formed with the distance between the two eyes as its base and the object fixated on as its apex. A distant object forms a long, narrow triangle, and a nearby object forms a shorter, broader triangle. Also, the apex angle of the triangle will vary inversely with the distance of the object attended to; the greater the distance, the smaller the angle and vice versa. The convergence and divergence of the eyes are important to this theory, but only because it is such movement of the eyes that creates the geometry of distance perception. For Berkeley, the problem with the theory of distance perception based on "natural geometry" is that people simply do not perceive distance in that way. 
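The "natural geometry" that Berkeley rejected can be made concrete. With the interocular distance as the triangle's base and the fixated object at the apex, the apex angle is 2·atan(b/2d) for base b and distance d, so the angle shrinks as distance grows. A minimal sketch, assuming an illustrative interocular distance of about 6.5 cm:

```python
import math

def convergence_angle_deg(base_m, distance_m):
    """Apex angle (in degrees) of the binocular triangle for an object
    straight ahead at distance_m, with the eyes base_m apart."""
    return math.degrees(2 * math.atan((base_m / 2) / distance_m))

# The apex angle varies inversely with distance: long, narrow triangles
# for far objects; short, broad triangles for near ones.
for d in [0.25, 0.5, 1.0, 2.0, 10.0]:
    print(f"object at {d:5.2f} m -> apex angle {convergence_angle_deg(0.065, d):6.3f} deg")
```

Berkeley's point was that observers do not compute such angles; for him, the felt sensations of eye convergence simply become associated with other distance cues.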
The convergence and divergence of the eyes were extremely important in Berkeley's analysis but not because of the visual angles that such movement created. Rather, they were important because the sensations caused by the convergence and divergence of the eyes became associated with other sensations that became cues for distance: And, first, it is certain by experience, that when we look at a near object with both eyes, according as it approaches or recedes from us, we alter the disposition of our eyes, by lessening or widening the interval between the pupils. This disposition or turn of the eyes is attended with a sensation, which seems to me to be that which in this case brings the idea of greater or lesser distance into the mind. (Armstrong, 1965, p. 288) The analysis of the perception of magnitude (size) is the same as for distance perception. In fact, the meaning that any word has is determined by the sensations that typically accompany that word. As we see distance so we see magnitude. And we see both in the same way that we see shame or anger, in the looks of a man. Those passions are themselves invisible; they are nevertheless let in by the eye along with colours and alterations of countenance which are the immediate object of vision, and which signify them for no other reason than barely because they have been observed to accompany them. Without which experience we should no more have taken blushing for a sign of shame than of gladness. (Armstrong, 1965, p. 309) Berkeley's empirical account of perception and meaning was a milestone in psychology's history, because it showed how all complex perceptions could be understood as compounds of elementary sensations such as sight, hearing, and touch. Atherton (1990) provides a more detailed account of Berkeley's theory of perception and a justification for referring to it as revolutionary.

theory of perception

Although he believed that the physiological apparatus of the body provides the mechanisms for sensation, Helmholtz thought that the past experience of the observer is what converts a sensation into a perception. Sensations, then, are the raw elements of conscious experience, and perceptions are sensations after they are given meaning by one's past experiences. In explaining the transformation of sensations into perceptions, Helmholtz relied heavily on the notion of unconscious inference. According to Helmholtz, to label a visual experience a "chair" involves applying a great deal of previous experience, as does looking at railroad tracks converging in the distance and insisting that they are parallel. Similarly, we see moving pictures as moving because of our prior experience with events that create a series of images across the retina. And we learn from experience that perceived distance is inversely related to the size of the retinal image. Helmholtz decided that the perception of depth arises because the retinal image an object causes is slightly different on the two retinas. Previous experience with such retinal disparity causes the unconscious inference of depth. Helmholtz was reluctant to use the term unconscious inference because it suggested the type of mysterious process that would violate his oath, but he could not find a better term. Helmholtz supported his empirical theory of perception with the observation that individuals who are blind at birth and then acquire sight need to learn to perceive, even though all the sensations furnished by the visual apparatus are available. His classic experiments with lenses that distorted vision provided further evidence. Helmholtz had subjects wear lenses that displaced the visual field several inches to the right or left. 
At first, the subjects would make mistakes in reaching for objects; but after just a few minutes, perceptual adaptation occurred, and even while wearing the glasses, the subjects could again interact accurately with the environment. When the glasses were removed, the subjects again made mistakes for a short time but soon recovered. Helmholtz took several innate categories of thought Kant had proposed and showed how they were derived from experience. Helmholtz and Kant agreed on one important point: The perceiver transforms what the senses provide. For Kant this transformation was accomplished when sensory information was structured by the innate faculties of the mind. For Helmholtz, the transformation occurred when sensory information was embellished by an individual's past experience. With his notion of unconscious inference, Helmholtz came very close to what would later be considered part of psychology. That is, for unconscious inference to convert a sensation into a perception, memories of previous learning experiences must interact with current sensations. Although the processes of learning and memory were later to become central to psychology, Helmholtz never considered himself a psychologist. He believed that psychology was too closely allied with metaphysics, and he wanted nothing to do with metaphysics. Theory of Color Vision Helmholtz performed his work on vision between 1853 and 1868 at the Universities of Königsberg, Bonn, and Heidelberg, and he published his results in the three-volume Handbook of Physiological Optics (1856-1866). Many years before Helmholtz's birth, Thomas Young (1773-1829), a distinguished scholar with accomplishments ranging from physics to Egyptology, had proposed a theory of color vision very similar to Helmholtz's. Helmholtz changed Young's theory slightly and buttressed it with experimental evidence. The theory we present here has come to be called the Young-Helmholtz theory of color vision (or the trichromatic theory). 
In 1672 Newton had shown that if white sunlight was passed through a prism, it emerged as a band of colored lights with red on one end of the band, then orange, yellow, green, blue, indigo, and, finally, violet. The prism separated the various wavelengths that together were experienced as white. Early speculation was that a different wavelength corresponded to each color and that different color experiences resulted from experiencing different wavelengths. However, Newton himself saw difficulties with this explanation. When he mixed various wavelengths, it became clear to him that the property of color was not in the wavelengths themselves but in the observer. For example, white is experienced either if all wavelengths of the spectrum are present or if wavelengths corresponding to the colors red and blue-green are combined. Similarly, a person cannot distinguish the sensation of orange caused by the single wavelength corresponding to orange from the sensation of orange caused by mixing red and yellow. The question was how to account for the lack of correspondence between the physical stimuli present and the sensations they cause. Helmholtz's answer was to expand Müller's doctrine of specific nerve energies by postulating three different types of color receptors on the retina. That is, instead of saying that color vision had one specific nerve energy associated with it, as Müller had thought, Helmholtz claimed it involved three separate receptors, each with its own specific energy. It was already known that various combinations of three colors—red, green, and blue-violet, the additive primary colors—could produce all other colors. Helmholtz speculated that there are three types of color receptors corresponding to the three primary colors. If a red light is shown, the so-called red receptors are stimulated, and one has the sensation of red; if a green light is shown, the green receptors are stimulated, and one has the experience of green; and so on. 
If all these primaries are shown at once, one experiences white. If the color shown is not a primary color, it would stimulate various combinations of the three receptors, resulting in a subjective color experience corresponding to the combination of wavelengths present. For example, presenting a red and a green light simultaneously would produce the subjective color experience of yellow. Also, the same color experience could be caused by several different patterns of the three receptor systems firing. In this way, Helmholtz explained why many physical wavelengths give rise to the same color experience. The Young-Helmholtz theory of color vision was extremely helpful in explaining many forms of color blindness. For example, if a person lacks one or more of the receptor systems corresponding to the primary colors, he or she will not be able to experience certain colors subjectively, even though the physical world has not changed. The senses, therefore, actualize elements of the physical world that otherwise exist only as potential experiences. Helmholtz was continually amazed at the way physiological mechanisms distort the information a person receives from the physical world, but he was even more amazed at the mismatch between physical events and psychological sensations (such as the experience of color). Helmholtz expressed his feelings as follows: The inaccuracies and imperfections of the eye as an optical instrument, and the deficiencies of the image on the retina, now appear insignificant in comparison with the incongruities we have met with in the field of sensation. One might almost believe that Nature had here contradicted herself on purpose in order to destroy any dream of a preexisting harmony between the outer and the inner world. (Kahl, 1971, p. 192)
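The trichromatic account of why different physical stimuli can look identical (metamers) can be sketched numerically. The receptor sensitivities below are made-up illustrative weights, not measured cone responses; the point is only that if color experience depends solely on the pattern of three receptor excitations, then a pure "yellow" light and a suitable red-green mixture are indistinguishable:

```python
# Illustrative (hypothetical) sensitivities of three receptor types
# to four wavelength bands; these numbers are invented for the sketch.
SENSITIVITY = {
    "red-receptor":   {"red": 1.0, "yellow": 0.5, "green": 0.0, "blue": 0.0},
    "green-receptor": {"red": 0.0, "yellow": 0.5, "green": 1.0, "blue": 0.0},
    "blue-receptor":  {"red": 0.0, "yellow": 0.0, "green": 0.0, "blue": 1.0},
}

def receptor_pattern(stimulus):
    """Excitation of each receptor type for a stimulus given as
    {wavelength_band: intensity}. On the trichromatic view, color
    experience depends only on this pattern, not on the wavelengths."""
    return {receptor: sum(s.get(band, 0.0) * intensity
                          for band, intensity in stimulus.items())
            for receptor, s in SENSITIVITY.items()}

pure_yellow = receptor_pattern({"yellow": 1.0})
red_green_mix = receptor_pattern({"red": 0.5, "green": 0.5})

# Two physically different stimuli, one receptor pattern: a metamer.
print(pure_yellow == red_green_mix)  # prints True
```

This is Helmholtz's resolution of Newton's puzzle: the lack of one-to-one correspondence between wavelengths and color sensations lies in the observer's three receptor systems, not in the light itself.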

Auguste Comte

Auguste Comte (1798-1857), born in the French city of Montpellier, grew up in the period of great political turmoil that followed the French Revolution of 1789-1799. In school, Comte was an excellent student but a troublemaker. In 1817, Comte met the social philosopher Henri de Saint-Simon (1760-1825), who converted Comte from an ardent advocate of liberty and equality to a supporter of a more elitist view of society. The two men collaborated on a number of essays, but after a bitter argument, they parted company in 1824. In 1826, Comte began giving lectures in his home on his own positivist philosophy—that is, the attempt to use the methods of the physical sciences to create a science of history and human social behavior. His lectures were attended by a number of illustrious individuals, but after only three lectures, Comte suffered a serious mental collapse. He was treated in a hospital for a while, but he fell into deep depression and even attempted suicide. He was unable to resume his lectures until 1829. Between 1830 and 1842, his time was spent mainly on writing his six-volume work, Cours de Philosophie Positive (The Course of Positive Philosophy, 1830-1842). Comte's Cours was translated into English by the philosopher-feminist Harriet Martineau (1802-1876) in 1853. As a result of the Cours, Comte began to attract new admirers, among them John Stuart Mill. However, soon after the publication of the Cours, Comte's wife left him. In 1844 he met and fell in love with Clotilde de Vaux, and although she died of tuberculosis shortly thereafter, she remained an influence on his work. In the late 1840s, Comte began writing Système de Politique Positive (System of Positive Politics) in which he introduced his religion of humanity (discussed later). The Système cost Comte most of his influential followers, including Mill. Undaunted, Comte continued to concentrate on his new religion, of which he installed himself as high priest. 
Comte spent his later years attempting to gain converts to his religion. He even tried to recruit some of the most powerful individuals in Europe, including Czar Nicholas I and the head of the Jesuits. Comte's Positivism According to Comte, the only thing we can be sure of is that which is publicly observable—that is, sense experiences that can be shared with other individuals. The data of science are publicly observable and therefore can be trusted. For example, scientific laws are statements about how empirical events vary together, and once determined, they can be experienced by any interested party. Comte's insistence on equating knowledge with empirical observations was called positivism. As an aside, positivism does not have as its opposite "negativism." It derives from a French term meaning to be put into position or to be placed in the mind by experience, akin to the English verb posit. Comte was a social reformer and was interested in science only as a means of improving society. Knowledge, whether scientific or not, was not important unless it had some practical value. Comte wrote, "I have a supreme aversion to scientific labors whose utility, direct or remote, I do not see" (Esper, 1964, p. 213). According to Comte, science should seek to discover the lawful relationships among physical phenomena. Once such laws are known, they can be used to predict and control events and thus improve life. One of Comte's favorite slogans was "Know in order to predict" (Esper, 1964, p. 213). Comte's approach to science was very much like the one suggested earlier by Francis Bacon. According to both Comte and Bacon, science should be practical and nonspeculative. Comte told his readers that there are two types of statements: "One refers to the objects of sense, and it is a scientific statement. The other is nonsense" (D. N. Robinson, 1986, p. 333). 
It should be pointed out that positivistic thinking had been around in one form or another since at least the time of the early Greeks: The history of positivism might be said to extend from ancient times to the present. In ancient Greece it was represented by such thinkers as Epicurus, who sought to free men from theology by offering them an explanation of the universe in terms of natural law. ... The cumulative successes of the scientific method in the seventeenth and eighteenth centuries increasingly favored the acceptance of the positivistic attitude among intellectuals. In England, the empirical philosophy, beginning with Francis Bacon and culminating in Hume and John Stuart Mill, became an essential part of the positivist tradition. (Esper, 1964, pp. 212-213) In fact, because all the British empiricists and French sensationalists stressed the importance of sensory experience and avoided metaphysical and theological speculation, they all could be said to have had at least positivistic leanings. The Law of Three Stages According to Comte, societies pass through stages that are defined in terms of the way their members explain natural events. The first stage, and the most primitive, is theological, and explanations are based on superstition and mysticism. In the second stage, which is metaphysical, explanations are based on unseen essences, principles, causes, or laws. During the third and highest stage of development, the scientific stage, description is emphasized over explanation, and the prediction and control of natural phenomena becomes all important. In other words, during the scientific stage, positivism is accepted. Comte used the term sociology to describe the study of how different societies compared in terms of the three stages of development. Comte described the events that characterize the transition from one stage to another in much the same way that Kuhn (1996) described paradigmatic shifts in science. 
According to Comte, the beliefs characteristic of a particular stage become a way of life for the people within a society. It is only a few of the society's wisest individuals who glimpse the next stage and begin to pave the way for it. There follows a critical period during which a society is in transition between one stage and another. The beliefs characterizing the new stage then become a way of life until the process is repeated. As with a paradigmatic shift in science, there are always remnants of earlier stages in the newly established one. As evidence for his law of three stages, Comte observed that individuals also pass through the same stages: The progress of the individual mind is not only an illustration, but an indirect evidence of that of the general mind. The point of departure of the individual and of the race being the same, the phases of the mind of a man correspond to the epochs of the mind of the race. Now, each of us is aware, if he looks back upon his own history, that he was a theologian in his childhood, a metaphysician in his youth, and a natural philosopher in his manhood. All men who are up to their age can verify this for themselves. (Martineau, 1853/1893, p. 3) Religion and the Sciences By the late 1840s, Comte was discussing positivism as if it were religion. To him, science was all that one needed to believe in and all that one should believe in. He described a utopian society based on scientific principles and beliefs and whose organization was remarkably similar to the Roman Catholic church. However, humanity replaced God, and scientists and philosophers replaced priests. Disciples of the new religion would be drawn from the working classes and especially from among women: The triumph of positivism awaited the unification of three classes: The philosophers, the proletariat, and women. 
The first would establish the necessary intellectual and scientific principles and methods of inquiry; the second would guarantee that essential connection between reality and utility; the third would impart to the entire program the abiding selflessness and moral resolution so natural to the female constitution. (D. N. Robinson, 1982, pp. 41-42) Comte's religion of humanity was one of the reasons that John Stuart Mill became disenchanted with him. Comte's utopia emphasized the happiness of the group and minimized individual happiness. In Mill's version of utilitarianism, the exact opposite is true. Comte arranged the sciences in a hierarchy from the first developed and most basic to the last developed and most comprehensive, as follows: mathematics, astronomy, physics, chemistry, physiology and biology, and sociology. It is of special interest to note that psychology did not appear on Comte's list of sciences. If what is meant by psychology is "the introspective analysis of the mind," then Comte believed that psychology was metaphysical nonsense. Science, for Comte, dealt with what could be publicly observed, and that excluded introspective data. He had harsh words to say about introspection: In order to observe, your intellect must pause from activity; yet it is this very activity you want to observe. If you cannot effect the pause you cannot observe; if you do effect it, there is nothing to observe. The results of such a method are in proportion to its absurdity. After two thousand years of psychological pursuit, no one proposition is established to the satisfaction of its followers. They are divided, to this day, into a multitude of schools, still disputing about the very elements of their doctrine. This internal observation gives birth to almost as many theories as there are observers. We ask in vain for any one discovery, great or small, which has been made under this method. (Martineau, 1853/1893, p. 10) For Comte, however, two methods were available by which the individual could be studied objectively. One way was to embrace phrenology, which was an effort to relate mental events to brain anatomy and processes (we will discuss phrenology in Chapter 8). Phrenological analysis essentially reduced psychology to physiology. The second way was to study the mind by its products—that is, to study the mind by studying overt behavior, especially social behavior. The study of human social behavior is a second sense in which Comte used the term sociology. So, the first objective way of studying humans reduced psychology to physiology, and the second replaced it with sociology. In the latter case, there was no studying "me," only "us."

Alexander Bain

Born in Aberdeen, Scotland, Alexander Bain (1818-1903) was a precocious child whose father was a weaver; from an early age, Bain himself had to work at the loom to earn money for his education. He was fortunate to live in Scotland, at the time perhaps the only country where any student showing intellectual promise was provided a university education. He attended Marischal College, which in 1858 became the University of Aberdeen. Following graduation, Bain moved to London, where he worked as a freelance journalist. While in London, Bain joined a lively intellectual circle, which included John Stuart Mill, and the two became close friends. The year before J. S. Mill published his famous System of Logic (1843), Bain assisted him with the revision of the manuscript. Bain also helped J. S. Mill with the annotation of the 1869 edition of James Mill's Analysis. In addition, Bain wrote biographies of both James and J. S. Mill. While in London, Bain tried repeatedly to obtain a university appointment but without success. He eventually distinguished himself with the publication of his two classic texts: The Senses and the Intellect (1855) and The Emotions and the Will (1859). These were to be a two-volume work published together, but the publisher delayed printing the second volume (The Emotions) for four years because the first initially sold poorly. In any case, in 1860, at the age of 42, with his reputation established, he finally obtained an academic post at the University of Aberdeen. He returned to his alma mater as a professor of logic and rhetoric; he remained there, in this and a variety of honorary positions, for the remainder of his long, productive life. Bain is often referred to as the first true psychologist, and his books The Senses and The Emotions are considered by some as the first true textbooks in psychology. These books underwent three revisions each and were standard texts in psychology on both sides of the Atlantic for nearly 50 years.
Besides writing these early books, in 1876 Bain founded Mind, which is generally considered the first journal devoted primarily to psychological questions—and it remains one of the most prestigious journals in philosophical psychology even today. Like Hartley before him, and like many who would follow, Bain took as his primary goal describing the physiological correlates of mental and behavioral phenomena. In preparation for writing The Senses, Bain made it a point to digest the most current information on neurology, anatomy, and physiology. He then attempted to show how these biological processes were related to psychological processes, a practice many psychologists have followed since. After Bain, exploring the relationships between physiological and psychological processes became an integral part of psychology. Bain was the first to attempt to relate actual physiological processes to psychological phenomena; Hartley had earlier attempted to do this, but his physiological principles were largely hypothetical constructs.

Laws of Association

For Bain, the mind had three components: feeling, volition, and intellect. The intellect was explained by the laws of association. Like the other British empiricists, Bain stressed the law of contiguity as the basic associative principle. According to Bain (1855/1977a), the law of contiguity applied to sensations, ideas, actions, and feelings: Actions, sensations, and states of feeling, occurring together or in close succession, tend to grow together, or cohere, in such a way that, when any one of them is afterwards presented to the mind, the others are apt to be brought up in idea. (p. 318) As was common among the British empiricists, Bain supplemented the law of contiguity with the law of frequency.
What was unusual about Bain's presentations of the laws of contiguity and frequency was his suggestion that both laws had their effects because of neurological changes, or what we would now call changes in the synapses between neurons: "For every act of memory, every exercise of bodily aptitude, every habit, recollection, train of ideas, there is a specific grouping, or co-ordination, of sensation and movements, by virtue of specific growth in the cell junctions" (Bain, 1873/1875, p. 91). Given our modern understanding of neurotransmitters, Bain seems to have been on to something. Like John Stuart Mill, Bain also accepted the law of similarity as one of his associative principles. Whereas the law of contiguity associates events that are experienced at the same time or in close succession, the law of similarity explains why events separated in time can come to be associated. That is, the experience of an event elicits memories of similar events even if those similar events were experienced under widely different times and circumstances. To the traditional laws of association, Bain added two of his own: the law of compound association and the law of constructive association. The law of compound association states that associations are seldom links between one idea and another. Rather, an idea is usually associated with several other ideas either through contiguity or similarity. When this is true, we have a compound association. With such associations, sometimes experiencing one element, or perhaps even a few elements, in the compound will not be enough to elicit the associated idea. However, if the idea is associated with many elements and several of those elements are present, the associated idea will be recalled. 
Bain thought that this law suggested a way to improve memory and recall: "Past actions, sensations, thoughts, or emotions, are recalled more easily, when associated either through contiguity or through similarity, with more than one present object or impression" (1855/1977a, p. 545). With his law of constructive association, Bain inserted a creative element into associationism in much the way Hume had done. In discussing his law of constructive association, Bain said, "By means of association the mind has the power to form new combinations or aggregates different from any that have been presented to it in the course of experience" (Bain, 1855/1977a, p. 571). In other words, the mind can rearrange memories of various experiences into an almost infinite number of combinations. Bain thought that the law of constructive association accounted for the creativity shown by poets, artists, inventors, and the like.

Voluntary Behavior

In his analysis of voluntary behavior, Bain made an important distinction between reflexive behavior and spontaneous activity. Reflexive behavior occurred automatically in response to some external stimulus because of the structure of an organism's nervous system. Conversely, organisms sometimes simply act spontaneously. In the terminology of modern Skinnerians, Bain was saying that some behavior is emitted rather than elicited. Spontaneous activity is one ingredient of voluntary behavior; the other ingredient is hedonism. Like both Mills, Bain was also strongly influenced by Jeremy Bentham. Bain accepted the fundamental importance of pleasure and pain in his psychology and especially in his analysis of voluntary behavior. Apparently, the thought of combining spontaneous behavior and the emotions of pleasure and pain in his analysis occurred to Bain when, while accompanying a shepherd, he observed the first few hours of the life of a lamb.
He noted that the lamb's initial movements appeared to be completely random relative to its mother's teat, but as chance contact occurred with the mother's skin and eventually with her teat, the lamb's behavior became increasingly "purposive." Six or seven hours after birth the animal had made notable progress. ... The sensations of sight began to have a meaning. In less than twenty-four hours, the animal could at the sight of the mother ahead, move in the forward direction at once to come up to her, showing that a particular image had now been associated with a definite movement; the absence of any such association being most manifest in the early movements of life. It could proceed at once to the teat and suck, guided only by its desire and the sight of the object. (Bain, 1855/1977a, p. 406) Bain (1859/1977b) used hedonism to explain how spontaneous activity is converted into voluntary behavior: I cannot descend deeper into the obscurities of the cerebral organization than to state as a fact, that when pain co-exists with an accidental alleviating movement, or when pleasure co-exists with a pleasure-sustaining movement, such movements become subject to the control of the respective feelings which they occur in company with. Throughout all the grades of sentient existence, wherever any vestiges of action for a purpose are to be discerned, this link must be presumed to exist. Turn it over as we may on every side, some such ultimate connexion between the two great primary manifestations of our nature—pleasure and pain, with active instrumentality—must be assumed as the basis of our ability to work out our ends. (p. 349) With voluntary behavior, we still have the laws of association at work. Some spontaneous actions become associated with pleasure and are therefore repeated; others are associated with pain and are therefore reduced in frequency of occurrence.
Also, in accordance with the law of frequency, the tendencies to repeat pleasurable responses or to avoid painful ones increase with the frequency of pleasurable or painful consequences. As was the case earlier with Hartley, it is important to note that for Bain, voluntary did not mean "free." So-called voluntary behavior was as deterministically controlled as reflexive behavior; it was just controlled differently. Bain said, "The actions of the will, or volition ... I consider to be nothing else than action stimulated, and guided, by feeling" (D. N. Robinson, 1977, p. 72). To summarize, Bain explained the development of voluntary behavior as follows: When some need such as hunger or the need to be released from confinement occurs, there is random or spontaneous activity. Some of these random movements will produce or approximate conditions necessary for satisfying the need, and others will not. The activities that bring need satisfaction are remembered. The next time the organism is in a similar situation, it will perform the activities that previously brought about need satisfaction. As such, actions that are performed because of their previous effectiveness in a given situation are voluntary rather than reflexive. Bain then essentially described trial-and-error learning, which was to become so important to Thorndike several years later. He also anticipated Skinner's operant conditioning. According to Skinner, operant behavior is simply emitted by an organism; that is, it is spontaneous. Once emitted, however, operant behavior is under the control of its consequences. Responses resulting in pleasurable consequences (reinforcement) tend to be repeated under similar circumstances, and responses resulting in painful consequences (punishment) tend not to be. With his effort to synthesize what was known about physiology with associationism and his treatment of voluntary behavior, Bain brought psychology to the very brink of becoming an experimental science.
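Bain's account of voluntary behavior reads like an informal learning algorithm: emit random actions, strengthen those followed by pleasure, and weaken those followed by pain. As a purely illustrative aside, the toy simulation below sketches that logic; the action names and numeric "hedonic" values are invented for the example, not anything Bain specified.

```python
import random

# Toy sketch of Bain's account: spontaneous (random) actions whose future
# probability is raised by pleasure and lowered by pain, with repetition
# compounding the effect (law of frequency). All values are hypothetical.

def simulate(outcomes, trials=1000, seed=1):
    rng = random.Random(seed)
    # Every action starts equally weighted -- pure spontaneous activity.
    tendencies = {action: 1.0 for action in outcomes}
    for _ in range(trials):
        actions = list(tendencies)
        weights = [tendencies[a] for a in actions]
        action = rng.choices(actions, weights=weights)[0]
        # Pleasure (+) strengthens the tendency; pain (-) weakens it.
        # A small floor keeps every action minimally possible.
        tendencies[action] = max(0.05, tendencies[action] + outcomes[action])
    return tendencies

# Hypothetical hedonic consequences of three spontaneous movements of a lamb.
result = simulate({"approach_teat": +0.5, "wander": 0.0, "bump_wall": -0.5})
best = max(result, key=result.get)
```

Under these made-up values, the pleasure-linked action comes to dominate the learner's repertoire while the painful one fades toward the floor, the pattern Bain described in the lamb and that Thorndike and Skinner later formalized.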

David Hume

Born in Edinburgh, Scotland, David Hume (1711-1776) was educated at the University of Edinburgh, where he studied law and commerce but left without a degree. Living off a modest inheritance (and perhaps fleeing a paternity claim), Hume moved to La Fleche in France, where Descartes had studied as a young man. It was at La Fleche that Hume, before the age of 28, wrote his most famous work, Treatise of Human Nature, Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects, the first volume of which was published in 1739 and the second volume in 1740. About his Treatise, Hume said, "It fell dead-born from the press, without reaching such distinction as even to excite a murmur among the zealots" (Flew, 1962, p. 305). In 1742 Hume published his Philosophical Essays, which was well received. Hume was always convinced that his Treatise was ignored because of its manner of presentation rather than its content, and in 1748 he published an abbreviated version of the Treatise titled An Enquiry Concerning Human Understanding. Much of what follows is based on the posthumous 1777 edition of the Enquiry. Unlike many of the other philosophers of his time, Hume was never a university professor. He worked briefly in commerce before becoming a private tutor, a librarian, and a professional diplomatic secretary. He was nominated for an academic position twice, but the opposition of the Scottish clergy denied him the posts. Hume was skeptical of most religious beliefs, and friction with the church was a constant theme in his life. About religion Hume said, "The whole is a riddle, an enigma, an inexplicable mystery. Doubt, uncertainty, suspense of judgment appear the only result of our most accurate scrutiny, concerning the subject" (Yandell, 1990, p. xiv). 
Indeed, Hume argued that religion was both irrational and impractical: In the first place, fear of God and the expectations of an afterlife have less day-to-day effect upon our conduct than is generally supposed. In the second place, religions do positive harm. They invent mortal sins like suicide, which have no natural depravity, and they create "frivolous merits" which partake in no natural good, like abstaining from certain foods or attending ceremonies. Moreover, ... religions result in cruel persecutions, bigotry, strife between sects or between sects and civil power, and the hunting down of unorthodox opinions. (Gaskin, 1998, p. xvii) Toward the end of his life, Hume left the manuscript for his Dialogues Concerning Natural Religion with his friend, the famous economist Adam Smith, with the understanding that Smith would arrange for its publication. However, when Hume died in 1776, Smith, perhaps fearing reprisal against himself, advised against the publication of the book. It did not appear until 1779, and then without the publisher's name (Steinberg, 1977).

Hume's Goal

According to Hume, "It is evident, that all the sciences have a relation, greater or less, to human nature; and that, however wide any of them may seem to run from it, they still return back by one passage or another" (Flew, 1962, p. 172). Under the heading of science, Hume included such topics as mathematics, natural philosophy (physical science), religion, logic, morals, criticism, and politics. In other words, as with Locke before him, Hume held that all important matters reflect human nature and that understanding that nature is therefore essential. In developing his science of man, Hume followed in the empirical tradition of Occam, Bacon, Hobbes, Locke, and Berkeley: "As the science of man is the only solid foundation for the other sciences, so, the only solid foundation we can give to this science itself must be laid on experience and observation" (Flew, 1962, p. 173).
Hume, however, was very impressed by the achievements of Newton, and he wanted to do for "moral philosophy" what Newton had done for "natural philosophy." Hume believed that he could bring about a reform in moral philosophy comparable to the Newtonian revolution in physics by following the very method of inquiry that Newton had followed. He aspired to be the Newton of the moral sciences. His achievement would in fact surpass Newton's. The science of man is not only the indispensable foundation of natural philosophy, but is also of "greater importance" and "much superior in utility." (E. F. Miller, 1971, p. 156) In Hume's day, moral philosophy referred roughly to what we now call the social sciences and natural philosophy referred to what we now call the physical sciences. Besides being an empirical science, the science of man would also be an "experimental" science. However, Hume did not employ experiments in his science of man the same way that they were employed by physical scientists. For the physical scientists, an experiment involved purposely manipulating some environmental variable and noting the effect of that manipulation on another variable. Both variables were observable and measurable. As we will see, the major determinants of behavior in Hume's system were cognitive and not directly observable. For Hume, the term experience meant mental experience. What, then, could the term experiment mean to Hume? By experiment, Hume meant careful observation of how experiences are related to one another and how experience is related to behavior. Hume noted that his experimental science of human nature would be different from the physical sciences, but different did not mean "inferior." Hume's goal, then, was to combine the empirical philosophy of his predecessors with the principles of Newtonian science and, in the process, create a science of human nature. 
It is ironic that, for all of Hume's admiration for Newton, he tended to use the Baconian inductive method more than the Newtonian deductive method. The major thrust of Hume's approach was to make careful observations and then carefully generalize from those observations. Hume occasionally did formulate a hypothesis and test it against experience, but his emphasis was clearly on induction rather than deduction.

Impressions and Ideas

Like the other empiricists who preceded him, Hume believed that the contents of the mind came only from experience. Also, like his predecessors, he believed that experience (perception) could be stimulated by either internal or external events. Hume agreed with Berkeley that we never experience the physical world directly and can have only perceptions of it: It is a question of fact, whether the perceptions of the senses be produced by external objects, resembling them: How shall this question be determined? By experience surely; as all other questions of a like nature. But here experience is, and must be entirely silent. The mind has never any thing present to it but the perceptions, and cannot possibly reach any experience of their connexion with objects. The supposition of such a connexion is, therefore, without any foundation in reasoning. (Steinberg, 1977, p. 105) Hume did not deny the existence of physical reality; he denied only the possibility of knowing it directly. Although the ultimate nature of physical reality must necessarily remain obscure, its existence, according to Hume, must be assumed in all rational deliberations: "Tis in vain to ask, Whether there be body or not? That is a point, which we must take for granted in all our reasonings" (Mossner, 1969, p. 238).
Hume distinguished between impressions, which were strong, vivid perceptions, and ideas, which were relatively weak perceptions: All the perceptions of the human mind resolve themselves into two distinct kinds, which I shall call impressions and ideas. The difference betwixt these consists in the degrees of force and liveliness, with which they strike upon the mind, and make their way into our thought or consciousness. Those perceptions which enter with most force and violence, we may name impressions; and, under this name, I comprehend all our sensations, passions, and emotions, as they make their first appearance in the soul. By ideas, I mean the faint images of these in thinking and reasoning. (Flew, 1962, p. 176)

Simple and Complex Ideas and the Imagination

Hume made the same distinction that Locke had made between simple ideas and complex ideas, although, according to Hume, all simple ideas were once impressions. Once ideas exist in the mind, they can be rearranged in an almost infinite number of ways by the imagination: Nothing is more free than the imagination of man; and though it cannot exceed that original stock of ideas, furnished by the internal and external senses, it has unlimited power of mixing, compounding, separating, and dividing these ideas, in all the varieties of fiction and vision. It can feign a train of events, with all the appearance of reality, ascribe to them a particular time and place, conceive them as existent, and paint them out to itself with every circumstance, that belongs to any historical fact, which it believes with the greatest certainty. Wherein, therefore, consists the difference between such a fiction and belief? It lies not merely in any peculiar idea, which is annexed to such a conception as commands our assent, and which is wanting to every known fiction.
For as the mind has authority over all its ideas, it could voluntarily annex this particular idea to any fiction, and consequently be able to believe whatever it pleases; contrary to what we find by daily experience. We can, in our conception, join the head of a man to the body of a horse; but it is not in our power to believe, that such an animal has ever really existed. (Steinberg, 1977, p. 31) It is interesting to note that, for Hume, ideas that have been consistently experienced together create the belief that one will follow the other. Such beliefs, for us, constitute reality. Ideas simply explored by the imagination do not have a history of concordance, and therefore they do not elicit a strong belief that one belongs with the other (like a blue banana). What distinguishes fact from fantasy, then, is the degree of belief that one idea belongs with another, and such belief is determined only by experience. Again, the contents of the mind come only from experience, but once in the mind, ideas can be rearranged at will. Therefore, we can ponder thoughts that do not necessarily correspond to reality. Hume gave the idea of God as an example: "The idea of God, as meaning an infinitely intelligent, wise, and good Being, arises from reflecting on the operations of our own mind, and augmenting, without limit, those qualities of goodness and wisdom" (Steinberg, 1977, p. 11). To understand Hume, it is important to remember that all human knowledge is based on simple impressions. Hume stated this fact in the form of a general proposition: "That all our simple ideas in their first appearance, are derived from simple impressions, which are correspondent to them, and which they exactly represent" (Flew, 1962, p. 178).

The Association of Ideas

If ideas were combined only by the imagination, they would be "loose and unconnected," and chance alone would join them together.
Also, the associations among ideas would be different for each person because there would be no reason for them to be similar. Hume, however, observed that this was not the case. Rather, a great deal of similarity exists among the associations of all humans, and this similarity must be explained. Hume considered his account of the association of ideas as one of his greatest achievements: "If anything can entitle the author to so glorious a name as that of an 'inventor,' it is the use he makes of the principle of the association of ideas, which enters into most of his philosophy" (Flew, 1962, p. 302). Hume seems to have downplayed the fact that the laws of association go back at least as far as Aristotle and were employed by Hobbes, to a lesser extent by Locke, and extensively by Berkeley. It is true, however, that Hume depended on the principles of association to the point where his philosophy can be said to exemplify associationism. For Hume, the laws of association do not cement ideas together so that their association becomes immutable. As we have already seen, the imagination can reform the ideas in the mind into almost any configuration. Rather, Hume saw the laws of association as a "gentle force," which creates certain relations as opposed to others. Hume discussed three laws of association that influence our thoughts. The law of resemblance states that our thoughts run easily from one idea to other similar ideas, such as when thinking of one friend stimulates the recollection of other friends. The law of contiguity states that when one thinks of an object, there is a tendency to recall other objects that were experienced at the same time and place as the object being pondered, such as when remembering a gift stimulates thoughts of the giver. The law of cause and effect states that when we think of an outcome (effect), we tend to also think of the events that typically precede that outcome, such as when we see lightning and consequently expect thunder. 
According to Hume, "There is no relation which produces a stronger connexion in the fancy, and makes one idea more readily recall another, than the relation of cause and effect betwixt their objects" (Mossner, 1969, pp. 58-59). Because Hume considered cause and effect to be the most important law of association, we will examine it in more detail.

Analysis of Causation

From the time of Aristotle through Scholasticism and to the science of Hume's day, it was believed that certain causes by their very nature produced certain effects. To make the statement "A causes B" was to state something of the essences of A and B; that is, there was assumed to be a natural relation between the two events such that knowing A would allow for the prediction of B. This prediction could be made from knowing the essences of A and B without having observed the two events together. Hume completely disagreed with this analysis of causation. For him, we can never know that two events occur together unless we have experienced them occurring together. In fact, for Hume, a causal relationship is a consistently observed relationship and nothing more. Causation, then, is not a logical necessity; it is a psychological experience. It was not Hume's intention to deny the existence of causal relationships and thereby undermine science, which searches for them. Rather, Hume attempted to specify what is meant by a causal relationship and how beliefs in such relationships develop. Hume described the observations that need to be made in order to conclude that two events are causally related: The cause and effect must be contiguous in space and time. The cause must be prior to the effect. There must be a constant union betwixt the cause and effect. It is chiefly this quality that constitutes the relation. The same cause always produces the same effect, and the same effect never arises but from the same cause. (Flew, 1962, p. 216) Thus, it is on the basis of consistent observations that causal inferences are drawn. Predictions based on such observations assume that what happened in the past will continue to happen in the future, but there is no guarantee of that being the case. What we operate with is the belief that relationships observed in the past will continue to exist in the future. Also, even if all the conditions listed above are met, we could still be incorrect in drawing a causal inference, as when we conclude that the sunset causes the sunrise because one always precedes the other and one never occurs without the other first occurring. According to Hume, then, it is not rationality that allows us to live effective lives; it is cumulative experience, or what Hume called custom: Custom, then, is the great guide of human life. It is that principle alone, which renders our experience useful to us, and makes us expect, for the future, a similar train of events with those which have appeared in the past. Without the influence of custom, we should be entirely ignorant of every matter of fact, beyond what is immediately present to the memory and senses. We should never know how to adjust means to ends, or to employ our natural powers in the production of any effect. There would be an end at once of all action, as well as of the chief part of speculation. (Steinberg, 1977, p. 29)

Analysis of the Mind and the Self

As mentioned in Chapter 1, a persistent problem throughout psychology's history has been to account for the unity of experience. Although we are confronted with a myriad of changing situations, our experience maintains a continuity over time and across conditions. The entities that most often have been postulated to explain the unity of experience are a mind or a self. All beliefs, according to Hume, result from recurring experiences and are explained by the laws of association.
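Hume's evidential rules for causation, discussed above, amount to a checkable procedure: the cause must precede the effect, and the two must be constantly conjoined in observation. As an aside, the sketch below frames that logic in Python; the helper name and the event histories are invented for illustration.

```python
# Illustrative check of Hume's "constant union": conclude that A causes B only
# if, in every observed sequence, B follows A and B never appears without A
# having occurred before it. The event histories below are hypothetical.

def constantly_conjoined(observations, cause, effect):
    """observations: sequences of events in temporal order, e.g. ('flame', 'heat')."""
    effect_follows = all(effect in seq[seq.index(cause) + 1:]
                         for seq in observations if cause in seq)
    cause_precedes = all(cause in seq[:seq.index(effect)]
                         for seq in observations if effect in seq)
    return effect_follows and cause_precedes

history = [
    ("spark", "flame", "heat"),
    ("spark", "flame", "heat"),
    ("flame", "heat"),
]

flame_heat = constantly_conjoined(history, "flame", "heat")  # constant union holds
heat_flame = constantly_conjoined(history, "heat", "flame")  # fails: wrong temporal order
```

As Hume's own example warns, such a test would just as happily "confirm" that the sunset causes the sunrise: constant conjunction grounds the habit of expectation (custom), not a logical necessity.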
All metaphysical entities, such as God, soul, and matter, are products of the imagination, as are the so-called laws of nature. Hume extended his skepticism to include the concept of mind that was so important to many philosophers, including Descartes, Locke, and Berkeley. According to Hume, the "mind" is no more than the perceptions we are having at any given moment: "We may observe, that what we call a mind, is nothing but a heap or collection of different perceptions, united together by certain relations, and suppos'd, tho' falsely, to be endow'd with a perfect simplicity and identity" (Mossner, 1969, p. 257). Just as there is no mind independent of perceptions, there is also no self independent of perceptions: For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe anything but the perception. When my perceptions are removed for any time, as by sound sleep, so long am I insensible of myself, and may truly be said not to exist. And were all my perceptions removed by death, and could I neither think, nor feel, nor see, nor love, nor hate, after the dissolution of my body, I should be entirely annihilated. (Flew, 1962, p. 259)

The Emotions and Behavior

Hume pointed out that throughout human history, humans have had the same passions (emotions) and that these passions have motivated similar behaviors: It is universally acknowledged, that there is a great uniformity among the actions of men, in all nations and ages, and that human nature remains still the same, in its principles and operations. The same motives always produce the same actions: The same events follow from the same causes.
Ambition, avarice, self-love, vanity, friendship, generosity, public spirit; these passions, mixed in various degrees, and distributed through society, have been, from the beginning of the world, and still are, the source of all the actions and enterprises, which have ever been observed among mankind. (Steinberg, 1977, p. 55) Hume noted that even though all humans possess the same passions, they do not do so in the same degree and, because different individuals possess different patterns of passions, they will respond differently to situations. The pattern of passions that a person possesses determines his or her character, and it is character that determines behavior. It is a person's character that allows for his or her consistent interactions with people. It is through individual experience that certain impressions and ideas become associated with certain emotions. It is the passions elicited by these impressions and ideas, however, that will determine one's behavior. This is another application of the laws of association, only in this case the associations are between various experiences and the passions and between passions and behavior. In general, we can say that individuals will seek experiences associated with pleasure and avoid experiences associated with pain. The fact that human behavior is at times inconsistent does not mean that it is free any more than the weather being sometimes unpredictable means that the weather is free: The internal principles and motives may operate in a uniform manner, notwithstanding these seeming irregularities; in the same manner as the winds, rain, clouds, and other variations of the weather are supposed to be governed by steady principles; though not easily discoverable by human sagacity and enquiry. (Steinberg, 1977, p. 58) Humans learn how to act in different circumstances the same way that nonhuman animals do—through the experience of reward and punishment. 
In both cases, reasoning ability has nothing to do with it: This is ... evident from the effects of discipline and education on animals, who, by the proper application of rewards and punishments, may be taught any course of action, the most contrary to their natural instincts and propensities. Is it not experience, which renders a dog apprehensive of pain, when you menace him, or lift up the whip to beat him? Is it not even experience, which makes him answer to his name, and infer, from such an arbitrary sound, that you mean him rather than any of his fellows, and intend to call him, when you pronounce it in a certain manner, and with a certain tone and accent? ... Animals, therefore, are not guided in these inferences by reasoning: Neither are children: Neither are the generality of mankind, in their ordinary actions and conclusions: Neither are philosophers themselves, who, in all the active parts of life, are, in the main, the same with the vulgar, and are governed by the same maxims. (Steinberg, 1977, pp. 70-71) It is not ideas or impressions that cause behavior but the passions associated with those ideas or impressions. It is for this reason that Hume said, "We speak not strictly and philosophically when we talk of the combat of passion and of reason. Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them" (Mossner, 1969, p. 462).

Hume's Influence

Like Locke, Hume vastly increased the importance of what we now call psychology. In fact, he reduced politics, philosophy, religion, and science to psychology. Everything that humans know is learned from experience. All beliefs are simply expectations that events that have been correlated in the past will remain correlated in the future. Such beliefs are not rationally determined, nor can they be rationally defended. They result from experience, and we can have faith only that what we learned from experience will be applicable to the future. 
According to Hume, then, humans can be certain of nothing. It is for this reason that Hume is sometimes referred to as the supreme Skeptic. Hume accepted only two types of knowledge: demonstrative and empirical. Demonstrative knowledge relates ideas to ideas, as in mathematics. Such knowledge is true only by accepted definitions and does not necessarily say anything about facts or objects outside the mind. Demonstrative knowledge is entirely abstract and entirely the product of the imagination. This is not to say that demonstrative knowledge is useless, because the relations gleaned in arithmetic, algebra, and geometry are of this type and represent clear and precise thinking. Such knowledge, however, is based entirely on deduction from one idea to another; therefore, it does not necessarily say anything about empirical events. Empirical knowledge, by contrast, is based on experience, and it alone can furnish knowledge that can effectively guide our conduct in the world. According to Hume, for knowledge to be useful, it must be either demonstrative or empirical; if it is neither, it is not real knowledge and therefore is useless: When we run over libraries, persuaded of these principles, what havoc must we make? If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: For it can contain nothing but sophistry and illusion. (Steinberg, 1977, p. 114)

David Hartley

David Hartley (1705-1757), the son of a Yorkshire clergyman, had completed his training as a minister at the University of Cambridge before an interest in biology caused him to seek a career as a physician. Hartley remained deeply religious all his life, believing that understanding natural phenomena increased one's faith in God. It took several years for Hartley to write his long and difficult Observations on Man, His Frame, His Duty, and His Expectations (1749). This ponderous book is divided into two parts; the first part (concerning the human frame) contains his contributions to psychology, and the second (concerning the duty and expectations of humans) is almost totally theological.

Hartley's Goal

Although Hartley's Observations appeared several years after Hume's A Treatise of Human Nature (1739-1740), Hartley had been working on his book for many years and appears not to have been influenced by Hume. His two major influences were Locke and Newton. Hartley accepted Newton's contention that nerves are solid (not hollow, as Descartes had believed) and that sensory experience caused vibrations in the nerves. These vibrations were called impressions. The impressions reach the brain and cause vibrations in the "infinitesimal, medullary particles," which cause sensations. Newton had also observed that vibrations in the brain show a certain inertia; that is, they continue vibrating after the impressions causing them cease. This, according to Newton, was why we see a whirling piece of coal as a circle of light. For Hartley, it was the lingering vibrations in the brain following a sensation that constituted ideas. Ideas, then, were faint replications of sensations. Hartley's goal was to synthesize Newton's conception of nerve transmission by vibration with previous versions of empiricism, especially Locke's. This union of the most pressing questions of philosophy and the most contemporary ideas of physiology would become a hallmark of psychology. 
Hartley's Explanation of Association

As we have seen, Hartley believed that sense impressions produced vibrations in the nerves, which traveled to the brain and caused similar vibrations in the "medullary substance" of the brain. The brain vibrations caused by sense impressions give rise to sensations. After sense impressions cease, there remain in the brain diminutive vibrations that Hartley called vibratiuncles. It is the vibratiuncles that correspond to ideas. Ideas, then, are weaker copies of sensations. Vibratiuncles are like the brain vibrations associated with sensations in every way except they (the vibratiuncles) are weaker. So much for how sense impressions cause ideas; now the question is, how do ideas become associated? Any Sensations A, B, C, [etc.] by being associated with one another a sufficient Number of Times, get such a Power over the corresponding Ideas a, b, c, [etc.] that any one of the Sensations A, when impressed alone, shall be able to excite in the Mind, b, c, [etc.] the Ideas of the rest. (Hartley, 1749/1834, p. 41) Hartley's notion that experiences consistently occurring together are recorded in the brain as an interrelated package and that experiencing one element in the package will make one conscious of the entire package is remarkably modern, as we will see with Donald Hebb in Chapter 18. Although Hartley distinguished between simultaneous and successive associations, both are examples of the law of contiguity. Successive experiences follow each other closely in time, and simultaneous events occur at the same time; both exemplify a type of contiguity. What made Hartley's account of association significantly different from previous accounts was his attempt to correlate all mental activity with neurophysiological activity. Unlike Locke, who believed that complex ideas are formed from simple ideas via reflection, Hartley believed that all complex ideas are formed automatically by the process of association. 
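Hartley's rule, that sensations A, B, and C associated together a sufficient number of times give any one of them the power to excite the ideas of the rest, amounts to a simple pattern-completion scheme. The following toy sketch illustrates the idea in modern terms (it is only an illustration of the rule, not Hartley's own formalism; the function names are invented for this example):

```python
from collections import defaultdict

# Each sensation is linked to every other sensation it has co-occurred with.
packages = defaultdict(set)

def experience(*sensations):
    """Record a package of co-occurring sensations (the law of contiguity)."""
    for s in sensations:
        packages[s].update(x for x in sensations if x != s)

def recall(sensation):
    """Presenting one member of a package excites the ideas of the rest."""
    return sorted(packages[sensation])

# Sensations A, B, C occur together; A alone now retrieves b and c.
experience("A", "B", "C")
print(recall("A"))  # ['B', 'C']
```

The dictionary here plays the role Hartley assigned to associated vibratiuncles: a trace left by co-occurrence that later lets a single element reinstate the whole package, anticipating the cell-assembly idea later developed by Hebb.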
For Hartley, there were no active mind processes involved at all. Simple ideas that are associated by contiguity form complex ideas. Similarly, complex ideas that are associated by contiguity combine into "decomplex" ideas. As simple ideas combine into complex ideas and complex ideas combine to form decomplex ideas, it may be difficult to remember the individual sensations that make up such ideas. However, for Hartley, all ideas, no matter how complex, are made up of sensations. Furthermore, association is the only process responsible for converting simple ideas into complex ones.

Laws of Association and Behavior

Hartley attempted to show that so-called voluntary behavior developed from involuntary, or reflexive, behavior. He used the law of association to explain how involuntary behavior gradually becomes voluntary and then becomes almost involuntary (automatic) again. Involuntary behavior occurs reflexively in response to sensory stimulation. Voluntary behavior occurs in response to one's ideas or to stimuli not originally associated with the behavior, and voluntary behavior itself can become so habitual that it too becomes automatic, not unlike involuntary behavior. The basic assumption in Hartley's explanation is that all behavior is at first involuntary and gradually becomes voluntary through the process of association. In the following example, we can see that Hartley's (1749/1834) account of the development of voluntary behavior comes very close to what was later called a conditioned reflex: The fingers of young children bend upon almost every impression which is made upon the palm of the hand, thus performing the action of grasping, in the original automatic manner. 
After a sufficient repetition of the motory vibrations which concur in this action, their vibratiuncles are generated, and associated strongly with other vibrations or vibratiuncles, the most common of which, I suppose, are those excited by the sight of a favourite plaything which the child uses to grasp, and hold in his hand. He ought, therefore, according to the doctrine of association, to perform and repeat the action of grasping, upon having such a plaything presented to his sight. But it is a known fact, that children do this. By pursuing the same method of reasoning, we may see how, after a sufficient repetition of the proper associations, the sound of the words grasp, take hold, [etc.] the sight of the nurse's hand in a state of contraction, the idea of a hand, and particularly of the child's own hand, in that state, and innumerable other associated circumstances, i.e. sensations, ideas, and motions, will put the child upon grasping, till, at last, that idea, or state of mind which we may call the will to grasp, is generated, and sufficiently associated with the action to produce it instantaneously. It is therefore perfectly voluntary in this case; and, by the innumerable repetitions of it in this perfectly voluntary state, it comes, at last, to obtain a sufficient connection with so many diminutive sensations, ideas, and motions, as to follow them in the same manner as originally automatic actions do the corresponding sensations, and consequently to be automatic secondarily. And, in the same manner, may all the actions performed with the hands be explained, all those that are very familiar in life passing from the original automatic state through the several degrees of voluntariness till they become perfectly voluntary, and then repassing through the same degrees in an inverted order, till they become secondarily automatic on many occasions, though still perfectly voluntary on some, viz. whensoever an express act of the will is exerted. (pp. 
66-67) Thus, behavior is first involuntary, and then it becomes increasingly voluntary as, through the process of association, more and more stimuli become capable of eliciting the behavior. Finally, when performing the voluntary action becomes habitual, it is said to be "secondarily automatic." It should be clear that Hartley did not employ the term voluntary to mean "freely chosen." For him, voluntary behavior is determined by the law of contiguity and, therefore, no free choice is involved. We see in Hartley's explanation much that would later become part of modern learning theory.

Hartley's Influence

It was Hartley's disciple Joseph Priestley (1733-1804), the famous chemist and codiscoverer of oxygen, who explored the implications of Hartley's analysis for education. Priestley also wrote Hartley's Theory of the Human Mind: On the Principle of the Association of Ideas (1775), which did much to promote the popularity of Hartley's work not only among scientists but among other intellectuals as well. Hartley took the speculations concerning neurophysiology of his time and used them in his analysis of association. His effort was the first major attempt to explain the neurophysiology of thought and behavior since Descartes. The neurophysiological mechanisms that Hartley postulated were largely wrong, but as more became known about neural transmission and brain mechanisms, the more accurate information replaced the older fictions. Thus, Hartley started the search for the biological correlates of mental events that has continued to the present. Earlier in this chapter, associationism was defined as any psychological theory that has association as its fundamental principle. Under this definition, neither Hobbes's nor Locke's philosophies qualify. Hume probably qualifies, but "Hartley ... was the first man to whom the term associationist can be applied without qualification" (Drever, 1968, p. 14). 
Hartley's brand of associationism became highly influential and was the authoritative psychological account for about 80 years, or until the time of James Mill.

Ernst Heinrich Weber

Ernst Heinrich Weber (1795-1878), a contemporary of Johannes Müller, was born in Wittenberg, the son of a theology professor. He was the third of 13 children. Weber obtained his doctorate from the University of Leipzig in 1815 and taught there until his retirement in 1871. Weber was a physiologist who was interested in the senses of touch and kinesthesis (muscle sense). Most of the research on sense perception before Weber had been confined to vision and audition. Weber's research consisted largely of exploring skin and muscle sensations. Weber was among the first to demonstrate that the sense of touch is not one but several senses. For example, what is ordinarily called the sense of touch includes the senses of pressure, temperature, and pain. Weber also provided convincing evidence that there is a muscle sense. It was in regard to the muscle sense that Weber performed his work on just noticeable differences, which we will consider shortly.

Touch and Kinesthesis

For the sensation of touch, Weber attempted to determine the least spatial separation at which two points of touch on the body could be discriminated. Using a compass-like device consisting of two points, he simultaneously applied two points of pressure to a subject's skin. The smallest distance between the two points at which the subject reported sensing two points instead of one was called the two-point threshold. In his famous book On Touch: Anatomical and Physiological Notes (1834), Weber provided charts of the entire body with regard to the two-point threshold. He found the smallest two-point threshold on the tongue (about 1 millimeter) and the largest in the middle of the back (about 60 millimeters). He assumed that the differences in thresholds at different places on the body resulted from the anatomical arrangement of the sense receptors for touch—the more receptors, the finer the discrimination. 
Within the history of psychology, Weber's research on the muscle sense, or kinesthesis, is even more important than his research on touch. It was while investigating kinesthesis that Weber ran his important weight-discrimination experiments. In general, he sought to determine the smallest difference between two weights that could be discriminated. To do this, he had his subjects lift one weight (the standard), which remained the same during a series of comparisons, and then lift other weights. The subject was to report whether the varying weights were heavier, lighter, or the same as the standard weight. He found that when the variable weights were only slightly different from the standard, they were judged to be the same as the standard. Through a series of such comparisons, Weber was able to determine the just noticeable difference (jnd) between the standard and the variable weight. It is important to note that, although Weber did not label them as such, jnds were psychological experiences (conscious sensations). Weber ran the basic weight-discrimination experiment under two conditions. In one condition, the weights were placed on the subject's hands while the hands were resting on a table. In this condition, the subject's judgments were made primarily on the basis of tactile sensations. In the second condition, the subject lifted the hands with the weights on them. In this condition, the subject's judgments were made on the basis of both tactile and kinesthetic sensations. It was found that subjects could detect much smaller weight differences when they lifted the weights than they could when the weights were simply placed on their hands. Weber thought that it was the involvement of kinesthesis in the lifted-weight condition that provided the greater sensitivity to weight differences.

Judgments Are Relative, Not Absolute

During his research on kinesthesis, Weber made the startling observation that the jnd is a constant fraction of the standard weight. 
For lifted weights, that fraction is 1/40; for nonlifted weights, it is 1/30. Using lifted weights as an example, if the standard weight is 40 grams, the variable weight would have to be 41 grams to be judged heavier or 39 grams to be judged lighter than the standard. If the standard weight is 160 grams, the variable weight would have to be 164 grams or 156 grams to be judged heavier or lighter, respectively, than the standard. Weber thus aligned himself with the large number of scientists and philosophers who found that there was not a simple one-to-one correspondence between what is present physically and what is experienced psychologically. Weber observed that discrimination does not depend on the absolute difference between two weights but on the relative difference between the two, or the ratio of one to the other. Weber extended his research to other sense modalities and found evidence suggesting that there is a constant fraction corresponding to jnds for each sense modality. The finding that jnds corresponded to a constant fraction of a standard stimulus was later called Weber's law, and it can be considered the first quantitative law in psychology's history. This was the first statement of a systematic relationship between physical stimulation and a psychological experience. But because Weber was a physiologist, psychology was not his primary concern. It was Fechner who realized the implications of Weber's work for psychology and who saw in it the possible resolution of the mind-body problem.
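In modern notation, Weber's law states that the just noticeable difference is a constant fraction of the standard stimulus intensity: delta-I = k * I, where k is the Weber fraction. The arithmetic in the weight examples above can be checked with a minimal sketch (the function name is invented for this illustration):

```python
def jnd(standard: float, weber_fraction: float) -> float:
    """Just noticeable difference under Weber's law: delta_I = k * I."""
    return weber_fraction * standard

# Lifted weights: Weber fraction of 1/40, as in the examples above.
for standard in (40.0, 160.0):
    delta = jnd(standard, 1 / 40)
    print(f"standard {standard} g: jnd {delta} g "
          f"(just lighter {standard - delta} g, just heavier {standard + delta} g)")
```

For a 40-gram standard this gives a jnd of 1 gram (39 g vs. 41 g), and for a 160-gram standard a jnd of 4 grams (156 g vs. 164 g), matching Weber's observation that discrimination depends on the ratio, not the absolute difference.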

Thomas Hobbes

Following in the tradition of William of Occam and Francis Bacon, Thomas Hobbes (1588-1679) is sometimes referred to as the founder of British empiricism. Hobbes was educated at Oxford and knew both Galileo and Descartes. He also served as Bacon's secretary for a short time. Hobbes was born in Malmesbury, Wiltshire, England. He often joked that he and fear were born twins because his mother attributed his premature birth to her learning of the approaching Spanish Armada. Hobbes's father, an Anglican vicar, got into a fight in the doorway of his church and thereafter disappeared. The care of his children was left to a prosperous brother who eventually provided Hobbes with an Oxford education, but Hobbes claimed that he learned little of value from that venture. Hobbes noted that Oxford had a strong Puritan tradition but also had an abundance of "drunkenness, wantonness, gaming, and other such vices" (Peters, 1962, p. 7). Hobbes lived a long, productive, and influential life. He played tennis until the age of 70, and at 84 he wrote his autobiography. At 86 he published a translation of The Iliad and The Odyssey just for something to do. Prior to his death, he amused himself by having his friends prepare epitaphs for him. Hobbes achieved great fame in his lifetime: "Indeed, like Bernard Shaw, by the time of his death he had become almost an English institution" (Peters, 1962, p. 16).

Humans as Machines

Hobbes did not become serious about philosophy until the age of 40, when he came across a copy of Euclid's Elements. This book convinced him that humans could be understood using the techniques of geometry. That is, starting with a few undeniable premises, a number of valid conclusions could be drawn. The question was what premises to begin with, and the answer came from Galileo. After visiting Galileo in 1635, Hobbes became convinced that the universe consisted only of matter and motion and that both could be understood in terms of mechanistic principles. 
Why, asked Hobbes, could not humans too be viewed as machines consisting of nothing but matter and motion? Galileo was able to explain the motion of physical objects in terms of the external forces acting on them—that is, without appealing to inner states or essences. Are not humans part of nature, wondered Hobbes, and if so, cannot their behavior also be explained likewise? It is interesting to note that although Hobbes was a close friend of Bacon and had himself a considerable reputation, Hobbes was never asked to join the prestigious British Royal Society (founded in 1660). The reason was that the society was dominated by Baconians, and Hobbes rejected Bacon's inductive method. He accused the Baconians of spending too much time on gadgets and experiments and of preferring their eyes, ears, and fingertips to their brains. Instead, Hobbes chose the deductive method of Galileo and Descartes. And so with Hobbes we have the first serious attempt to apply the ideas and techniques of Galileo to the study of humans.

Government and Human Instincts

Like many of the philosophers we will see in this chapter, Hobbes was primarily interested in politics. He was thoroughly convinced that the best form of government was an absolute monarchy. He believed that humans were naturally aggressive, selfish, and greedy; therefore, democracy was dangerous because it gives latitude to such negative natural tendencies. Only when people (and the church) are subservient to a monarch, he felt, could there be law and order. Without such regulation, human life would be "solitary, poor, nasty, brutish, and short" (Hobbes, 1651/1962, p. 100). Hobbes's infamous conclusion, Homo homini lupus (Man is a wolf to man), was later quoted sympathetically by Schopenhauer (see Chapter 7) and Freud (see Chapter 16). It is, according to Hobbes, fear of death that motivates humans to create social order. 
In other words, civilization is created as a matter of self-defense; each of us must be discouraged from committing crimes against the other. Unless controlled, humans would selfishly seek power over others so as to guarantee the satisfaction of their own personal needs: "I put for a general inclination of all mankind, a perpetual and restless desire of power after power, that ceaseth only in death" (1651/1962, p. 80). Hobbes's most famous work, Leviathan (1651), was mainly a political treatise, an attempt to explain and justify rule by an absolute monarch. Hobbes began Leviathan with his views on psychology because it was his belief that to govern effectively, a monarch needed to have an understanding of human nature. Leviathan came to be viewed as the work of an atheist, and in 1666 a motion was made in parliament to burn Hobbes as a heretic. The plague of 1665 and the great fire of London the following year were believed by many to be God's revenge on England for harboring Hobbes. King Charles II came to his rescue, however, and, as mentioned before, Hobbes went on to live a long life—to the age of 91.

Hobbes's Empiricism

Although Hobbes rejected Bacon's inductive method in favor of the deductive method, he did agree with Bacon on the importance of sensory experience: The [origin of all thoughts] is that which we can sense, for there is no conception in a man's mind, which hath not at first, totally, or by parts, been begotten upon the organs of sense. The rest are derived from that original. (Hobbes, 1651/1962, p. 21) Although Hobbes accepted Descartes's deductive method, he rejected his concept of innate ideas. For Hobbes, all ideas came from experience or, more specifically, from sensory experience. Following in the tradition of Democritus, Hobbes was also a materialist. Because all that exists is matter and motion, Hobbes thought it absurd to postulate a nonmaterial mind, as Descartes had done. 
All so-called mental phenomena could be explained by the sense experiences that result when the motion of external bodies stimulates the sense receptors, thereby causing internal motion. What others refer to as "mind," for Hobbes, was nothing more than the sum total of a person's thinking activities—that is, a series of motions within the individual. Concerning the mind-body problem, Hobbes was a physical monist; he denied the existence of a nonmaterial mind.

Explanation of Psychological Phenomena

Attention was explained by the fact that as long as sense organs retain the motion caused by certain external objects, they cannot respond to others. The availability of mental imagery, which Hobbes called imagination, was explained by the fact that sense impressions decay over time, as was memory: "So ... imagination and memory are but one thing which for divers considerations hath divers names" (1651/1962, p. 24). Dreams, then, have this same origin: "The imaginations of them that sleep are those we call dreams. And these also, as all other imaginations, have been before, either totally or by parcels, in the sense" (Hobbes, 1651/1962, p. 25). The reason that dreams are typically so vivid is that during sleep there are no new sensory impressions to compete with the imagination. Hobbes argued that external objects not only produce sense impressions but also influence the vital functions of the body. Those incoming impressions that facilitate vital functions are experienced as pleasurable, and the person seeks to preserve them or seek them out. Conversely, sense impressions incompatible with the vital functions are experienced as painful, and the person seeks to terminate or avoid them. Human behavior, then, is motivated by appetite (the seeking or maintaining of pleasurable experiences) and aversion (the avoidance or termination of painful experiences). In other words, Hobbes accepted a hedonistic theory of motivation. 
According to Hobbes, we use terms such as love and good to describe things that please us and terms such as hate and evil to describe things to which we have an aversion. By equating good with pleasure and evil (bad) with pain, Hobbes was taking a clear stand on moral issues: "Having insinuated this identity, Hobbes had both stated and explained moral relativism: there were no objective moral properties, but what seemed good was what pleased any individual or was good for him" (Tuck, 2002, p. 65). In Hobbes's deterministic view of human behavior, there was no place for free will. People may believe they are "choosing" because, at any given moment, one may be confronted with a number of appetites and aversions, and therefore, there may be conflicting tendencies to act. For Hobbes, will was defined as the action tendency that prevails when a number of such tendencies exist simultaneously. What appears to be choice is nothing more than a verbal label we use to describe the attractions and aversions we experience while interacting with the environment.

Complex Thought Processes

Hobbes also attempted to explain "trains of thought," by which he meant the tendency of one thought to follow another in some coherent manner. The question was how such a phenomenon occurs, and Hobbes's answer reintroduced the law of contiguity first proposed by Aristotle. That is, events that are experienced together are remembered together and are subsequently thought of together. All the British empiricists who followed Hobbes accepted this concept of association as their explanation of why mental events are experienced or remembered in a particular order. 
To summarize Hobbes's position, we can say that he was a materialist because he believed that all that existed was physical; he was a mechanist because he believed that the universe and everything in it (including humans) were machines; he was a determinist because he believed that all activity (including human behavior) is caused by forces acting on physical objects; he was an empiricist because he believed that all knowledge was derived from sensory experience; and he was a hedonist because he believed that human behavior (as well as the behavior of nonhuman animals) was motivated by the seeking of pleasure and the avoidance of pain. Although, as we will see, not all the empiricists that followed Hobbes were as materialistic or mechanistic as he was, they all joined him in denying the existence of innate ideas.

Julien De La Mettrie

Julien de La Mettrie (1709-1751) was born on December 25 in Brittany. His father intended him to become a priest until a local doctor pointed out that a mediocre physician would be better paid than a good priest. Upon receiving his medical degree, La Mettrie soon distinguished himself in the medical community by writing articles on such topics as venereal disease, vertigo, and smallpox. He was widely resented because of professional jealousy, his tendency to satirize the medical profession, and his quick temper. In 1742 he obtained a commission as physician to a regiment serving in the war between France and Austria. During a military campaign, La Mettrie contracted a violent fever; while convalescing, he began to ponder the relationship between the mind and the body. Upon recovery from his illness, La Mettrie wrote The Natural History of the Soul (1745), which stressed that the mind is much more intimately related to the body than Descartes had assumed. If the mind is completely separate from the body and influences the body only when it chooses to do so, how can the effects of such things as wine, coffee, opium, or even a good meal on one's thoughts be explained? In fact, La Mettrie was among the first modern philosophers to suggest that "you are what you eat." Raw meat makes animals fierce, and it would have the same effect on man. This is so true that the English who eat meat red and bloody ... seem to share more or less in the savagery due to this kind of food, and to other causes which can be rendered ineffective by education only. This savagery creates in the soul, pride, hatred, scorn of other nations, indocility, and other sentiments which degrade the character, just as heavy food makes a dull and heavy mind whose usual traits are laziness and indolence. (La Mettrie, 1748/1912, p. 94) La Mettrie was not the only French thinker of the era to consider the relationship between food and psychology. 
For example, Jean-Anthelme Brillat-Savarin's The Physiology of Taste (1825) remains the classic work on the topic, and his famed axiom "Tell me what you eat, and I will tell you what you are" was a revival of Epicurean philosophy. For Brillat-Savarin, anyone who became drunk, or overindulged, did not really understand the pleasures of fine dining. To La Mettrie, it was clear that whatever influences the body influences the so-called thought processes, but La Mettrie went further. He believed that there is nothing in the universe but matter and motion. Sensations and thoughts are also nothing but movements of particles in the brain. Thus, La Mettrie, like Hobbes and Gassendi, was a thoroughgoing materialist. La Mettrie's book The Natural History of the Soul (1745) was harshly criticized by the French clergy. The feelings against him were so intense that he was forced into exile in Holland. While in Holland, he wrote his most famous book, L'Homme Machine (Man a Machine, 1748). This book so upset the Dutch clergy that La Mettrie was also forced to leave Holland. Fortunately, Frederick the Great offered La Mettrie a pension and refuge in Berlin. There, La Mettrie continued writing on medical topics until his death at the age of just 41.

Man a Machine

La Mettrie was among those who believed that Descartes was a mechanist, even as far as humans were concerned, and that his published thoughts on God and the soul were designed to obscure his true feelings from the clergy and to save himself from persecution (La Mettrie, 1748/1912, p. 143). In any case, La Mettrie felt that if Descartes had followed his own method, he (Descartes) would have reached the conclusion that humans, like nonhuman animals, were automata. La Mettrie, then, set out either to correct Descartes's misunderstanding of humans or to do what Descartes wanted to do but refrained from doing because of his fear of the church.
La Mettrie concluded Man a Machine with the statement, "Let us then conclude boldly that man is a machine, and that in the whole universe there is but a single substance differently modified" (1748/1912, p. 148). The single substance, of course, was matter, and this belief that every existing thing, including humans, consists of matter and nothing else makes La Mettrie a physical monist. For La Mettrie, to believe in the existence of an immaterial soul (mind) was just plain silly. According to La Mettrie, only a philosopher who was not at the same time a physician could postulate the existence of an immaterial soul independent of the body. The overwhelming evidence available to physicians for the dependence of so-called mental events on bodily states would (or should) preclude them from embracing dualism.

Human and Nonhuman Animals

La Mettrie (1748/1912) equated intelligence and some personality characteristics with the size and quality of the brain: I shall draw the conclusions which follow clearly from ... incontestable observations: 1st, that the fiercer animals are, the less brain they have; 2nd, that this organ seems to increase in size in proportion to the gentleness of the animal; 3rd, that nature seems here eternally to impose a singular condition, that the more one gains in intelligence the more one loses in instinct. (pp. 98-99) If humans can be considered superior to nonhuman animals, it is because of education and the development of language. Because the primate brain is almost as large and as complex as ours, it follows that if primates could be taught language, they would resemble humans in almost all respects. The question is, can primates learn a language? Among animals, some learn to speak and sing; they remember tunes, and strike the notes as exactly as a musician. Others, for instance the ape, show more intelligence, and yet can not learn music. What is the reason for this, except some defect in the organs of speech?
In a word, would it be absolutely impossible to teach the ape a language? I do not think so. (La Mettrie, 1748/1912, p. 100) With proper training, humans and apes could be made remarkably similar. Such is the likeness of the structure and functions of the ape to ours that I have very little doubt that if this animal were properly trained he might at last be taught to pronounce, and consequently to know, a language. Then he would no longer be a wild man, nor a defective man, but he would be a perfect man, a little gentleman, with as much matter or muscle as we have, for thinking and profiting by his education. (La Mettrie, 1748/1912, p. 103) According to La Mettrie, intelligence was influenced by three factors: brain size, brain complexity, and education. Humans are typically superior in intelligence to other animals because we have bigger, more complex brains and because we are better educated. However, by education, La Mettrie did not mean only explicit instruction but also the effects of everyday experience—for example, our interactions with other people. In any case, humans differ from nonhuman animals only in degree, not in type: "Man is not molded from a costlier clay; nature has used but one dough, and has merely varied the leaven" (La Mettrie, 1748/1912, p. 117). And this observation was made over 100 years before Darwin published The Origin of Species (1859). According to La Mettrie, belief in the uniqueness of humans (dualism) and in God are not only incorrect but also responsible for widespread misery. Humans would be much better served by accepting their continuity with the animal world. That is, we should accept the fact that, like other animals, humans are machines—complex machines, but machines nonetheless. La Mettrie (1748/1912) described how life would be for the person accepting the materialistic-mechanistic philosophy: He who so thinks will be wise, just, tranquil about his fate, and therefore happy. 
He will await death without either fear or desire, and will cherish life (hardly understanding how disgust can corrupt a heart in this place of many delights); he will be filled with reverence, gratitude, affection, and tenderness for nature, in proportion to his feeling of the benefits he has received from nature; he will be happy, in short, in feeling nature, and in being present at the enchanting spectacle of the universe, and he will surely never destroy nature either in himself or in others. More than that! Full of humanity, this man will love human character even in his enemies. Judge how he will treat others. He will pity the wicked without hating them; in his eyes, they will be but mis-made men. But in pardoning the faults of the structure of mind and body, he will none the less admire the beauties and the virtues of both. ... In short, the materialist, convinced, in spite of the protests of his vanity, that he is but a machine or an animal, will not maltreat his kind, for he will know too well the nature of those actions ... and following the natural law given to all animals, he will not wish to do to others what he would not wish them to do to him. (pp. 147-148) La Mettrie dared to discuss openly those ideas that were held privately by many philosophers of the time. In doing so, he offended many powerful individuals. Although it is clear that he influenced many subsequent thinkers, his works were rarely cited or his name even mentioned. The fact that he died of "indigestion" following an overindulgence of pheasant and truffles was seen by many as a most fitting death.

Ewald Hering

Helmholtz, with his notion of unconscious inference, generally sided with those who said perceptions were learned. Ewald Hering (1834-1918) sided with the nativists. After receiving his medical degree from the University of Leipzig, Hering stayed there for several years before accepting a post as lecturer at the Vienna Military Medical Academy, where he worked with Josef Breuer (1842-1925), who was later to be instrumental in the founding of psychoanalysis (see Chapter 16). Working together, Hering and Breuer showed that respiration was, in part, caused by receptors in the lungs—a finding called the Hering-Breuer reflex. In 1870 Hering was called to the University of Prague, where he succeeded the great physiologist Jan E. Purkinje (1787-1869). Like Goethe, to whom he dedicated one of his major works, Purkinje was a phenomenologist. He believed that the phenomena of the mind, arrived at by careful introspective analysis, should be what physiologists attempt to explain. According to Purkinje, the physiologist is obliged to explain not only "normal" sensations and perceptions but "abnormal" ones as well, such as illusions and afterimages. Among the many phenomena that Purkinje observed was that the relative vividness of colors differs in faint light and in bright light. More specifically, as twilight approaches, hues that correspond to short wavelengths, such as violet and blue, appear brighter than hues corresponding to longer wavelengths, such as yellow and red. This change in relative vividness as a function of luminance level is known as the Purkinje shift. Hering also was a phenomenologist, and his theory of color vision was based to a large extent on the phenomenon of negative afterimages.

French Sensationalism: Man as Machine

French philosophers also aspired to be Newtonians of the mind, and they had much in common with their British counterparts. The goal for both the French and the British was to explain the mind as Newton had explained the physical world—that is, in a way that stressed the mind's mechanical nature, that reduced mental activity to its basic elements, that used only a few basic principles, and that minimized or eliminated metaphysical speculation. All the French and British philosophers considered in this chapter had these goals in common. We refer to the French philosophers as sensationalists because some of them intentionally stressed the importance of sensations in explaining all conscious experience and because the label provides a convenient way of distinguishing between the British and the French. In general, however, all these philosophers were more similar than they were different, and they strongly opposed the rationalism of Descartes, especially his belief in innate ideas. All ideas, said both the British empiricists and the French sensationalists, came from experience, and most, if not all, mental activity could be explained by the laws of association acting on those ideas. The question asked by both the British empiricists and the French sensationalists was, if everything else in the universe can be explained in terms of mechanical laws, why should not humans, too, obey those laws? Although the metaphor of human beings as machines was suggested by the work of Copernicus, Kepler, Galileo, and Newton, it was best articulated by Descartes. Descartes's dualistic conception of humans meant that our bodies act according to mechanical principles (our bodies are machines) but our minds do not. Without the autonomous mind that Descartes had postulated, however, humans could be equated with mechanical automata or nonhuman animals, that is, with biological machines. It was this metaphor of humans as machines that especially appealed to the French sensationalists.
In fact, many believed that Descartes himself saw the possibility of viewing humans as machines but that he avoided revealing this belief because of what happened to Galileo and a number of other natural philosophers (scientists) of his time. There was still reason to fear the church in France in the mid-18th century, but the French sensationalists pursued their metaphor of man as a machine with courage and boldness.

Hermann von Helmholtz

Many consider Hermann von Helmholtz (1821-1894) to be the premier scientist of the 19th century. As we will see, he made significant contributions in physics, physiology, and psychology. Helmholtz, born in Potsdam, Germany, was a frail child and a mediocre student in foreign languages and poetry. This apparent mediocrity may have been due more to his teachers than to Helmholtz himself, because he spent his spare time reading scientific books and working out the geometric principles that described the various configurations of his play blocks. His father was a teacher who did not have enough money to pay for the scientific training that his son desired. Fortunately, the government had a program under which talented students could go to medical school free if they agreed to serve for eight years as army surgeons following graduation. Helmholtz took advantage of this program and enrolled in the Berlin Royal Friedrich-Wilhelm Institute for Medicine and Surgery when he was 17 years old. While in his second year of medical school, he began his studies with Johannes Müller.

rate of nerve conduction

Müller maintained that nerve conduction was almost instantaneous, making it too fast to measure. This view reflected his belief that there was a vital, nonmaterial agent that moved instantaneously and determined the behavior of living organisms. Those believing in such a vital force never considered measuring the speed of nerve conduction. Helmholtz, however, excluded nothing from the realm of science, not even the rate of nerve conduction. To measure the rate, Helmholtz isolated the nerve fiber leading to a frog's leg muscle. He then stimulated the nerve fiber at various distances from the muscle and noted how long it took the muscle to respond. He found that the muscular response followed more quickly when the motor nerve was stimulated closer to the muscle than when it was stimulated farther away. By subtracting one reaction time from the other, he concluded that the nerve impulse travels at a rate of about 90 feet per second (27.4 meters per second). Helmholtz then turned to humans, asking his subjects to respond by pushing a button when they felt their leg being stimulated. He found that reaction time was slower when the toe was stimulated than when the thigh was stimulated; he concluded, again by subtraction, that the rate of nerve conduction in humans was between 165 and 330 feet per second (50.3-100.6 meters per second). This aspect of Helmholtz's research was significant because it showed that nerve impulses are indeed measurable—and, in fact, relatively slow. This was taken as further evidence that physical-chemical processes, rather than some mysterious vital force immune to scientific scrutiny, are involved in our interactions with the environment. Although the measure of reaction time was extremely useful to Helmholtz in measuring the speed of nerve conduction, he found that it varied considerably among subjects and even for the same subject at different times. He concluded that reaction time was too unreliable to be used as a valid measure and abandoned it. Support for his doubts came later when more precise measurements by Du Bois-Reymond indicated that the conduction speeds Helmholtz had reported were too slow. But this does not detract from the importance of Helmholtz's pioneering research.
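The logic of Helmholtz's subtraction method can be sketched in a few lines of code: stimulate the nerve at two points, time the response in each case, and divide the extra distance by the extra time. This is only an illustration; the distances and reaction times below are hypothetical values chosen so the result lands near the roughly 90 feet per second (27.4 meters per second) he reported for the frog, not his actual measurements.

```python
# Helmholtz's subtraction method: two stimulation points, two reaction
# times. The difference in times reflects only the extra nerve distance,
# so dividing extra distance by extra time yields conduction speed.

def conduction_speed(near_m, far_m, t_near_s, t_far_s):
    """Speed (m/s) = additional distance traveled / additional time taken."""
    return (far_m - near_m) / (t_far_s - t_near_s)

# Hypothetical values: stimulation 0.01 m and 0.06 m from the muscle.
speed = conduction_speed(0.01, 0.06, 0.0020, 0.003825)
print(round(speed, 1))  # 27.4 (m/s)
```

The point of the subtraction is that it cancels everything the two trials share (synaptic delay, muscle latency, measurement apparatus), isolating the conduction time over the extra stretch of nerve.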

phrenology

Not long after Reid and company (see Chapter 6) had listed what they thought were the faculties of the mind, others were to take faculty psychology into the realm of physiology. One was Franz Joseph Gall (1758-1828). Gall accepted the widely held belief that faculties of the mind acted on and transformed sensory information, but he made three additional claims that changed the history of faculty psychology: (1) the mental faculties do not exist to the same extent in all humans; (2) the faculties are housed in specific areas of the brain; and (3) if a faculty is well developed, a person will have a bump or protrusion on the corresponding part of the skull, whereas if a faculty is underdeveloped, a hollow or depression will appear there. Thus, Gall believed that the magnitude of one's faculties could be determined by examining the bumps and depressions on one's skull. Such an analysis was called phrenology. Gall's idea was not necessarily a bad one. In fact, Gall was among the first to attempt to relate certain personality traits and overt behavior patterns to specific brain functions. The problem was the type of evidence he accepted to demonstrate this relationship. He would observe that someone had a pronounced personality characteristic and a well-developed brain structure and then attribute one to the other. After observing such a relationship in one individual, he would generalize it to all individuals. In their research on the mental faculties, some of Gall's followers exceeded even his shoddiness: If Gall was cavalier in his interpretations of evidence, he attracted some followers who raised that tendency to an art form. When a cast of Napoleon's right skull predicted qualities markedly at variance with the emperor's known personality, one phrenologist replied that his dominant side had been the left—a cast of which was conveniently missing.
When Descartes's skull was examined and found deficient in the regions for reason and reflection, phrenologists retorted that the philosopher's rationality had always been overrated. (Fancher, 1990, p. 79) Although Gall is sometimes reviewed negatively in the history of psychology, he made several positive contributions to the study of brain functioning. For example, he studied the brains of several animal species, including humans, and was the first to suggest a relationship between cortical development and mental functioning. He found that larger, better-developed cortices were associated with more intelligent behavior. In addition, he was the first to distinguish the functions of gray matter and white matter in the brain. These discoveries alone qualify Gall for recognition in the history of psychology, but there is more. As the 19th century began, the idea that different cortical regions are associated with different functions was becoming popular. This, in large part, was due to Gall: "In the minds of most historians, Gall, more than any other scientist, put the concept of cortical localization into play" (Finger, 1994, p. 32).

The Popularity of Phrenology

The term phrenology was actually coined by Thomas Foster in 1815 (Bakan, 1966). Gall disliked the term (he preferred physiognomy), but it was accepted and made popular by his student and colleague Johann Kaspar Spurzheim (1776-1832). The dissemination of phrenology into English-speaking countries was facilitated by Spurzheim's The Physiognomical System of Drs. Gall and Spurzheim (1815) and by the translation of Gall's On the Functions of the Brain and Each of Its Parts: With Observations on the Possibility of Determining the Instincts, Propensities, and Talents, or the Moral and Intellectual Dispositions of Men and Animals, by the Configuration of the Brain and Head (1835). Phrenology became enormously popular and was embraced by some of the leading intellectuals in Europe (such as Bain and Comte).
One reason for the popularity of phrenology was Gall's considerable reputation. Another was that phrenology provided hope for an objective, materialistic analysis of the mind: "The central theme that runs through all of the phrenological writings is that man himself could be studied scientifically, and in particular that the phenomena of mind could be studied objectively and explained in terms of natural causes" (Bakan, 1966, p. 208). Phrenology was also popular because, unlike mental philosophy, it appeared to offer practical information. For these reasons phrenology was also embraced enthusiastically in the United States. For example, the Central Phrenological Society was founded in Philadelphia in 1822 by Charles Caldwell. In 1824 Caldwell published Elements of Phrenology, the first American textbook on phrenology. In 1827 a second edition of Elements was published. Because of the popularity of phrenology, when Spurzheim arrived in the United States on August 4, 1832, he was given a hero's welcome. He lectured at some of the nation's leading universities, such as Harvard and Yale, and his appreciative audiences included physicians, ministers, public educators, college professors, and asylum superintendents. O'Donnell (1985) points out that these and other individuals were looking to phrenology for the type of information that some would later seek in the school of behaviorism (see Chapter 12): With or without bumps, phrenology's theory of human nature and personality recommended itself to emerging professional groups searching for "positive knowledge." ... [They] found in phrenology an etiological explanation of aberrant human behavior; a predictive technology for assessing character, temperament, and intellect; and a biological blueprint for social reform. The social engineers of the twentieth century, together with their patrons and subscribers, would demand no less of modern experimental behaviorism. 
When the new psychology [behaviorism] arrived on the American stage an eager audience anticipated the role it was to play. Gall, Spurzheim ... and their followers had already written the script. (p. 78) Spurzheim died shortly after he came to the United States, and on the day of his funeral (November 17, 1832), the Boston Phrenological Society was formed. Such societies soon sprang up all over the nation, and numerous journals devoted to phrenology emerged in Europe and the United States. One, Phrenological Journal, started publishing in 1837 and continued until 1911. In New York, the Fowler brothers and then their extended family ran the Institute of Phrenology from the 1830s until 1912. They published popular texts and provided services akin to those offered by modern industrial/organizational (and counseling) psychologists (see Risse, 1976). A number of "phrenology charts" began to appear after the publication of Gall's and Spurzheim's books. Proposed numbers of faculties ranged from 27 (suggested by Gall) to as many as 43 suggested by later phrenologists. Figure 8.1 shows the chart Spurzheim proposed.

Figure 8.1 The phrenology chart suggested by Spurzheim (1834) showing the "powers and organs of the mind." The chart presents the numbered organs of the mind in frontal, side, and rear views of the head, separated into affective and intellectual faculties. The affective faculties comprise the propensities (? Desire to live; • Alimentiveness; 1 Destructiveness; 2 Amativeness; 3 Philoprogenitiveness; 4 Adhesiveness; 5 Inhabitiveness; 6 Combativeness; 7 Secretiveness; 8 Acquisitiveness; 9 Constructiveness) and the sentiments (10 Cautiousness; 11 Approbativeness; 12 Self-esteem; 13 Benevolence; 14 Reverence; 15 Firmness; 16 Conscientiousness; 17 Hope; 18 Marvelousness; 19 Ideality; 20 Mirthfulness; 21 Imitation). The intellectual faculties are divided into perceptive and reflective.
The perceptive faculties are: 22 Individuality; 23 Configuration; 24 Size; 25 Weight and Resistance; 26 Coloring; 27 Locality; 28 Order; 29 Calculation; 30 Eventuality; 31 Time; 32 Tune; 33 Language. The reflective faculties are: 34 Comparison; 35 Causality.

Formal Discipline

Phrenology also became highly influential in the realm of education. Several phrenologists made the additional claim that the faculties become stronger with practice, just as muscles do. This belief influenced a number of educators to take a "mental muscle" approach to education. For them education meant strengthening mental faculties by practicing the traits associated with them. One could improve one's reasoning ability, for example, by studying mathematics. The belief that educational experiences could be arranged so that they strengthen certain faculties was called formal discipline. Although Edward L. Thorndike systematically evaluated the educational claims of the phrenologists and found them to be false (see Chapter 11), the belief that educational experiences can be arranged to strengthen specific mental faculties persists to the present. In time, the specific claims of the phrenologists were rejected, but phrenology did influence subsequent psychology in a number of important ways: It argued effectively that the mind and brain are closely related; it stimulated intense research on the localization of brain functions; and it showed the importance of furnishing practical information. In other forms physiognomy itself endured well into the 20th century. William H. Sheldon (1898-1977) was the godson of William James, and following the completion of both his MD and PhD degrees, he studied with Jung and Freud. Later, at Harvard, he became famous for correlating personality with body form.
Although his findings were eventually reinterpreted by critics (for example, Eysenck, 1959), Sheldon reported significant personality differences among thin and angular ectomorphs, lean and muscular mesomorphs, and soft and round endomorphs.

Positivism

The British empiricists and the French sensationalists had in common the belief that all knowledge comes from experience; that is, that there are no innate ideas. All knowledge, they said, even moral knowledge, was derived from experience. If the denial of innate moral principles did not place the empiricists and the sensationalists in direct opposition to religion, it certainly placed them in direct opposition to religious dogma. As the successes of the physical and mental sciences spread throughout Europe, and as religious doctrine became increasingly suspect, a new belief emerged—the belief that science, not religion, was best suited to solve all human problems. Such a belief is called scientism. To those embracing scientism, scientific knowledge is the only valid knowledge; therefore, it provides the only information one can believe. For these individuals, science itself takes on some of the characteristics of a religion. One such individual was Auguste Comte.

Electrophysiology: Fritsch and Hitzig

The 18th century has often been called the Age of Electricity, and scientists' fascination with electricity soon extended into physiology. In the late 1700s Luigi Galvani (1737-1798) demonstrated that application of an electrical current caused a frog's leg to twitch. Emil Du Bois-Reymond, whom we have mentioned several times previously, was considered the "father of electrophysiology," in part for demonstrating the electrical basis of the action potential in nerves and muscles. Electrically stimulating the exposed cortex of a dog, Gustav Fritsch (1838-1927) and Eduard Hitzig (1838-1907) made two important discoveries. First, the cortex is not insensitive, as had been previously assumed. Second, they found that when a certain area of the cortex is stimulated, muscular movements are elicited on the opposite side of the body. Stimulating different points in this motor area of the brain elicited movements from different parts of the body. Thus, another function was localized on the cortex.

David Ferrier

David Ferrier (1843-1928) refined the cortical research performed by Fritsch and Hitzig. Using monkeys as subjects and finer electrical stimulation, he was able to produce a more articulated map of the motor cortex. He was able to elicit behaviors "as intricate as the twitch of an eyelid, the flick of an ear, and the movement of one digit" (Finger, 1994, p. 40). Ferrier then mapped cortical regions corresponding to the cutaneous senses, audition, olfaction, and, eventually, vision. He summarized his findings in The Functions of the Brain (1876), which had a substantial impact on the scientific community: "One outcome was that it opened the 'modern' era of neurosurgery. Neurosurgeons now turned to 'functional maps' of the brain for guidance" (Finger, 1994, p. 41). The evidence seemed clear: there was a great deal of localization of function on the cortex, just as the phrenologists had maintained. These findings, however, did not support traditional phrenology.
Seldom was a function (faculty) found where the phrenologists had said it was. Furthermore, the phrenologists had spoken of faculties such as vitality, firmness, love, and kindness, but the researchers instead found sensory and motor areas. These findings, however, did extend the Bell-Magendie law to the brain. That is, the sensation experienced seemed to be more a matter of the cortical area stimulated than a matter of the sensory nerve stimulated. It looked very much as if the brain is a complex switchboard where sensory information is projected and where it in turn stimulates appropriate motor responses. The brain research that was stimulated in an effort to evaluate the claims of the phrenologists made it clear that physical stimulation gives rise to various types of subjective experiences and that they are directly related to brain activity. The next step in psychology's development toward becoming an experimental science was to examine scientifically how sensory stimulation is systematically related to conscious experience.

consciousness, sensations, and reality

The most significant implication of Müller's doctrine for psychology was that the nature of the central nervous system, not the nature of the physical stimulus, determines our sensations. Müller's findings underscored that we are never conscious of objects in the physical world themselves but only of various sensory impulses in the brain linked to those objects. It follows, then, that our knowledge of the physical world must be limited by the types of sense receptors we possess. An ardent Kantian, Müller believed that he had found the physiological equivalent of Kant's categories of thought. According to Kant, sensory information is transformed by the innate categories of thought before it is experienced consciously. For Müller, the nervous system is the intermediary between physical objects and consciousness. Kant's nativism stressed mental categories, whereas Müller's stressed physiological mechanisms. In both cases, sensory information is modified, and therefore what we experience consciously is different from what is physically present. For Müller, however, sensations did not exhaust mental life. In his famous Handbuch der Physiologie des Menschen (Handbook of Human Physiology, 1833-1840), in a section titled "Of the Mind," he postulated a mind capable of attending to some sensations to the exclusion of others. Thus, even in his otherwise mechanistic system, Müller found room for an active mind, again in allegiance to Kant. Müller was one of the greatest experimental physiologists ever. His Handbuch summarized what was known about human physiology at the time. Müller also established the world's first Institute for Experimental Physiology at the University of Berlin. In addition, Müller understood the close relationship between physiology and psychology. He said, "Nobody can be a psychologist, unless he first becomes a physiologist" (Fitzek, 1997, p. 46).

christine ladd-franklin

Throughout Western history, neither philosophy nor science has been a common vocation for women. In both ancient Egypt and Greece, there were some famous women physicians, but we know little beyond their names. Educational opportunities were rare for almost everyone in medieval Europe, but especially for women. The extensive writings of Hildegard of Bingen (died 1179) are often cited as one notable exception. With the advent of printing in the Renaissance, education and literacy were again on the rise—even for women. By the 1700s and 1800s, a few women were gaining modest recognition in philosophy and science. For example, the Italian Laura Bassi (1711-1778) is thought to have been the first female university professor. As psychology finally found its place among the academic sciences, so too did women. Christine Ladd graduated from the then new Vassar College in 1869. She pursued her interest in mathematics at the also new Johns Hopkins University, and although she completed all the requirements for a doctorate in 1882, the degree was not granted because she was a woman. She was, however, given an honorary degree by Vassar in 1887. When the social climate became less discriminatory toward women, she was granted her doctorate from Johns Hopkins in 1926, 44 years after she had completed her graduate work (she was nearly 80 years old at the time). In 1882 Christine Ladd married Fabian Franklin, a mathematics professor at Johns Hopkins. During her husband's sabbatical leave in Germany, Christine Ladd-Franklin (1847-1930) was able to pursue an interest in psychology she had developed earlier (she had published a paper on vision in 1887). Although, at the time, women were generally excluded from German universities, she managed to be accepted for a year (1891-1892) in Georg E. Müller's laboratory at Göttingen, where Hering's theory of color vision was supported.
After her year under Müller's influence, she studied with Helmholtz at the University of Berlin, where she learned about his trichromatic theory of color vision. Before leaving Europe, Ladd-Franklin was ready to announce her own theory of color vision, which she believed improved upon those of Helmholtz and Hering. She presented her theory at the International Congress of Experimental Psychology in London in 1892. Upon returning to the United States, Ladd-Franklin lectured on logic and psychology at Johns Hopkins until she and her husband moved to New York, where she lectured and promoted her theory of color vision at Columbia University from 1910 until her death in 1930. Ladd-Franklin's theory of color vision was based on evolutionary theory. She noted that some animals are color blind and assumed that achromatic vision appeared first in evolution and color vision came later. She assumed further that the human eye carries vestiges of its earlier evolutionary development. She observed that the most highly evolved part of the eye is the fovea, where, at least in daylight, visual acuity and color sensitivity are greatest. Moving from the fovea to the periphery of the retina, acuity is reduced and the ability to distinguish colors is lost. However, in the periphery of the retina, night vision and movement perception are better than in the fovea. Ladd-Franklin assumed that peripheral vision (provided by the rods of the retina) was more primitive than foveal vision (provided by the cones of the retina) because night vision and movement detection are crucial for survival. But if color vision evolved later than achromatic vision, was it not possible that color vision itself evolved in progressive stages? 
After carefully studying the established color zones on the retina and the facts of color blindness, Ladd-Franklin concluded that color vision evolved in three stages. Achromatic vision came first, then blue-yellow sensitivity, and finally red-green sensitivity. The assumption that the last to evolve would be the most fragile explains the prevalence of red-green color blindness. Blue-yellow color blindness is less frequent because it evolved earlier and is less likely to be defective. Achromatic vision is the oldest and, therefore, the most difficult to disrupt. Ladd-Franklin, of course, was aware of Helmholtz's and Hering's theories, and, although she preferred Hering's theory, her view was not offered in opposition to either. Rather, she attempted to explain in evolutionary terms the origins of the anatomy of the eye and its visual abilities. After initial popularity, Ladd-Franklin's theory fell into neglect, perhaps because she did not have adequate research facilities available to her. Some believe, however, that her analysis of color vision still has validity (see, for example, Hurvich, 1971). For interesting biographical sketches of Ladd-Franklin, see Furumoto (1992) and Scarborough and Furumoto (1987).

Helmholtz's Stand against Vitalism

Although Helmholtz accepted many of Müller's conclusions, the two men still had basic disagreements, one of them over Müller's belief in vitalism. In biology and physiology, the vitalism-materialism problem was much like the mind-body problem in philosophy. The vitalists maintained that life could not be explained by the interactions of physical and chemical processes alone. For the vitalists, life was more than a physical process and could not be reduced to such a process. Furthermore, because it was not physical, the "life force" was forever beyond the scope of scientific analysis. Müller was a vitalist. Conversely, the materialists saw nothing mysterious about life and assumed that it could be explained in terms of physical and chemical processes. Therefore, there was no reason to exclude the study of life or of anything else from the realm of science. Helmholtz sided with the materialists, who believed that the same laws apply to living and nonliving things, as well as to mental and nonmental events. So strongly did Helmholtz and several of his fellow students believe in materialism that they signed the following oath (some say in their own blood): No other forces than the common physical-chemical ones are active within the organism. In those cases which cannot at the time be explained by these forces one has either to find the specific way or form of their action by means of the physical mathematical method, or to assume new forces equal in dignity to the physical-chemical forces inherent in matter, reducible to the force of attraction and repulsion. (Bernfeld, 1949, p. 
171) In addition to Helmholtz, others who signed the oath were Du Bois-Reymond (who became the professor of physiology at the University of Berlin when Müller died), Karl Ludwig (who became a professor of physiology at the University of Leipzig, where he influenced a young Ivan Pavlov), and Ernst Brücke (who became a professor of physiology at the University of Vienna, where he taught and befriended Sigmund Freud). What this group accepted when they rejected vitalism were the beliefs that living organisms, including humans, were complex machines (mechanism) and that these machines consist of nothing but material substances (materialism). The mechanistic-materialistic philosophy embraced by these individuals profoundly influenced physiology, medicine, and psychology.

Principle of Conservation of Energy

Helmholtz obtained his medical degree at the age of 21 and was inducted into the army. While in the army, he was able to build a small laboratory and to continue his early research, which concerned metabolic processes in the frog. Helmholtz demonstrated that food and oxygen consumption were able to account for the total energy that an organism expended. He was thus able to apply the already popular principle of conservation of energy to living organisms. According to this principle, which previously had been applied to physical phenomena, energy is never created or lost in a system but is only transformed from one form to another. When applied to living organisms, the principle was clearly in accordance with the materialist philosophy because it brought physics, chemistry, and physiology closer together. In 1847 Helmholtz published a paper titled "The Conservation of Force," and it was so influential that he was released from the remainder of his tour of duty in the army. In 1848 Helmholtz was appointed lecturer of anatomy at the Academy of Arts in Berlin. 
The following year, he was appointed professor of physiology at Königsberg, where Kant had spent his entire academic life. It was at Königsberg that Helmholtz conducted his now famous research on the speed of nerve conduction.

British Empiricism

An empiricist is anyone who believes that knowledge is derived from experience. Empiricism, then, is a philosophy that stresses the importance of experience in the attainment of knowledge. The term experience, in the definition of empiricism, complicates matters because there are many types of experience. There are "inner" experiences such as dreams, imaginings, fantasies, and a variety of emotions. Also, when one thinks logically, such as during mathematical deduction, one is having mental experiences. There is general agreement, however, to exclude such inner experiences from a definition of empiricism and refer exclusively to sensory experience. Yet, even after focusing on sensory experience, there is still a problem because the implication is that any philosopher who claims sensory experience to be vital in attaining knowledge is an empiricist. If this were true, Descartes could be called an empiricist because, for him, many ideas came from sensory experience. Thus, acknowledging the importance of sensory experience alone does not qualify one as an empiricist. What then is an empiricist? In this text, we will use the following definition of empiricism: Empiricism ... is the epistemology that asserts that the evidence of sense constitutes the primary data of all knowledge; that knowledge cannot exist unless this evidence has first been gathered; and that all subsequent intellectual processes must use this evidence and only this evidence in framing valid propositions about the real world. (D. N. Robinson, 1986, p. 205) It is important to highlight a number of terms in Robinson's definition. First, this definition asserts that sensory experience constitutes the primary data of all knowledge; it does not say that such experience alone constitutes knowledge. Second, it asserts that knowledge cannot exist until sensory evidence has first been gathered; so for the empiricist, attaining knowledge begins with sensory experience. 
Third, all subsequent intellectual processes must focus on sensory experience in formulating propositions about the world. Thus, it is not the recognition of mental processes that distinguishes the empiricist from the rationalist; rather, it is what those thought processes are focused on. Again, most epistemological approaches use sensory experience as part of their explanation of the origins of knowledge; for the empiricist, however, sensory experience is of supreme importance.

doctrine of specific nerve energies

Born in Koblenz, Germany, the famed physiologist Johannes Müller (1801-1858) expanded the Bell-Magendie law by devising the doctrine of specific nerve energies. After receiving his doctorate from the University of Bonn in 1822, Müller remained there as professor until 1833, when he accepted the newly created chair of physiology at the University of Berlin. Following Bell's suggestion, Müller demonstrated that there are different types of sensory nerves, each containing a characteristic energy, and that when they are stimulated, a characteristic sensation results. In other words, each nerve responds in its own way no matter how it is stimulated. For example, stimulating the eye with light waves, electricity, pressure, or by a blow to the head will all cause visual sensations. Emil Du Bois-Reymond (1818-1896), one of Müller's students, went so far as to say that if we could cut and cross the visual and auditory nerves, we would hear with our eyes and see with our ears (Boring, 1950, p. 93).

pierre flourens

By the turn of the 19th century, it was generally conceded that the brain is the organ of the mind. Under the influence of Gall and the other phrenologists, the brain-mind relationship was articulated into a number of faculties housed in specific locations in the brain. Thus, the phrenologists fueled interest in the localization of functions in the brain. Although popular (even among scientists), phrenology was far from universally accepted. A number of prominent physicians questioned the claims of the phrenologists. It was not enough, however, to claim that the phrenologists were wrong in their assumptions; the claim had to be substantiated scientifically. This was the goal of Pierre Flourens (1794-1867), who used the method of extirpation, or ablation, in brain research. Ablation involves destroying part of the brain and then noting the behavioral consequences of the loss. As did Gall, Flourens assumed that the brains of lower animals were similar in many ways to human brains, so he used organisms such as dogs and pigeons as his research subjects. He found that removal of the cerebellum disturbed an organism's coordination and equilibrium, that ablation of the cerebrum resulted in passivity, and that destruction of the semicircular canals resulted in loss of balance. When he examined the entire brain, Flourens concluded that there is some localization, but that contrary to what the phrenologists believed, the cortical hemispheres function as a unit. Seeking further evidence of the brain's interrelatedness, Flourens observed that animals sometimes regained functions that they had lost following ablation. Thus, at least some parts of the brain had the capacity to take over the function for other parts. Flourens's fame as a scientist, and his conclusion that the cortex functioned as a unit, effectively silenced the phrenologists within the medical community. 
Subsequent research, however, would show that they had been silenced too quickly.

A Second Type of Positivism

Comte insisted that we accept only that of which we can be certain, and for him, that was publicly observable data. Another brand of positivism emerged later, however, under the leadership of the physicist Ernst Mach (1838-1916). Mach, like Comte, insisted that science concentrate only on what could be known with certainty. Neither Comte nor Mach allowed metaphysical speculation in their views of science. The two men differed radically, however, in what they thought scientists could be certain about. For Comte, it was physical events that could be experienced by any interested observer. Mach, however, agreed with the contention of Berkeley and Hume—that we can never experience the physical world directly. We experience only sensations or mental phenomena. For Mach, the job of the scientist was to note which sensations typically cluster together and to describe in precise mathematical terms the relationships among them. According to Mach, "There can be no a priori knowledge of the world, only experiences that, when systematically organized, can lay claim to the status of scientific knowledge" (D. N. Robinson, 2000, p. 1020). In agreement with Hume, Mach concluded that so-called cause-and-effect relationships are nothing more than functional relationships among mental phenomena. Although for Mach the ultimate subject matter of any science was necessarily cognitive, this fact need not prevent scientists from doing their work objectively and without engaging in metaphysical speculation. In his influential book The Science of Mechanics (1883/1960), Mach insisted that scientific concepts be defined in terms of the procedures used to measure them rather than in terms of their "ultimate reality" or "essence." Thus, both Comte and Mach were positivistic, but what they were positive about differed. 
Mach went beyond Comte's assertion of the primacy of science and wrote about the methods that should govern the work of scientists. In doing so, Mach anticipated Bridgman's concept of the operational definition (see Chapter 13) and provided a template for the proper conduct of science. Indeed, Albert Einstein often referred to Mach as one of the most important influences on his life and work. Following Mach, positivism was revised through the years as the prevailing philosophy of science itself and was eventually transformed into logical positivism. It was through logical positivism that positivistic philosophy had its greatest impact on psychology. We will discuss logical positivism and its relationship to psychology in Chapter 13.

theory of auditory perception

For audition, as he had done for color vision, Helmholtz further refined Müller's doctrine of specific nerve energies. He found that the ear is not a single sense receptor but a highly complex system of many receptors. Whereas the visual system consists of three types of nerve fibers, each with its own specific nerve energy, the auditory system contains thousands of types of nerve fibers, each with its own specific nerve energy. Helmholtz found that when the main membrane of the inner ear, the basilar membrane, was removed and uncoiled, it was shaped much like a harp. Assuming that this membrane is to hearing what the retina is to seeing, Helmholtz speculated that the different fibers along the basilar membrane are sensitive to differences in the frequency of sound waves. The short fibers respond to the higher frequencies, the longer fibers to the lower frequencies. A wave of a certain frequency causes the appropriate fiber of the basilar membrane to vibrate, thus causing the sensation of sound corresponding to that frequency. This process was called sympathetic vibration, and it can be demonstrated by stimulating a tuning fork of a certain frequency and noting that the string on a piano corresponding to that frequency also begins to vibrate. Helmholtz assumed that a similar process occurs in the inner ear and that, through various combinations of fiber stimulation, one could explain the wide variety of auditory experiences we have. This theory is referred to as the resonance place theory of auditory perception. Variations of Helmholtz's place theory persist today.

Gustav Theodor Fechner

Gustav Theodor Fechner (1801-1887) was a brilliant, complex, and unusual individual. Fechner's father had succeeded his grandfather as village pastor and created a stir when he placed a lightning rod atop his church. After his father died, Fechner, his brother, and his mother spent the next nine years with Fechner's uncle, who was also a pastor. At the age of 16, Fechner began his studies in medicine at the University of Leipzig (where Weber was) and obtained his medical degree in 1822 at the age of 21. Upon receiving his medical degree, Fechner's interest shifted from biological science to physics and mathematics. At this time, he made a meager living by translating into German certain French handbooks of physics and chemistry, by tutoring, and by lecturing occasionally. Fechner was interested in the properties of electric currents and in 1831 published a significant article on the topic, which established his reputation as a physicist. In 1834, when he was 33 years old, Fechner was appointed professor of physics at Leipzig. Soon his interests began to turn to the problems of sensation, and by 1840 he had published articles on color vision and afterimages. Around 1840, Fechner had a "nervous breakdown," resigned his position at Leipzig, and became a recluse. Additionally, Fechner had been almost blinded, presumably while looking at the sun through colored glasses during his research on afterimages. At this time, Fechner entered a state of severe depression that was to last several years and that resulted in his interests turning from physics to philosophy. The shift was in emphasis only, however, because throughout his adult life he was uncomfortable with materialism, which he called the "nightview"; it contrasted with the "dayview," which emphasized mind, spirit, and consciousness. He accepted Spinoza's double-aspect view of mind and body, and therefore believed that consciousness is as prevalent in the universe as is matter. 
Because he believed that consciousness cannot be separated from physical things, his position represents panpsychism; that is, all things that are physical are also conscious. In his lifetime, Fechner wrote 183 articles and 81 books and edited many others (Bringmann, Bringmann, & Balance, 1992). He died in his sleep late in 1887, at the age of 86, a few days after suffering a stroke. He was eulogized by his friend and colleague Wilhelm Wundt.

The Adventures of Dr. Mises

Although Fechner was an outstanding scientist, there was a side of him that science could not satisfy. In addition to Fechner the materialistic scientist, there was Fechner the satirist, philosopher, and mystic. For a young scientist to express so many viewpoints, especially because many of them were seemingly incompatible with science, would have been professional suicide. So, Fechner invented a person to speak for his other half, and thus was born "Dr. Mises." The pseudonym Dr. Mises first appeared while Fechner was still a medical student. Under this pseudonym, Fechner wrote Proof That the Moon Is Made of Iodine (1821), a satire on the medical profession's tendency to view iodine as a panacea. In 1825 Dr. Mises published The Comparative Anatomy of Angels, in which it is reasoned, tongue firmly in cheek, that because the sphere is the most perfect shape and angels are perfect, angels must be spherical and cannot have legs. Marshall (1969) summarizes this argument: Centipedes have God-knows-how-many legs; butterflies and beetles have six, mammals only four; birds, who of all earthly creatures rise closest to the angels, have just two. With each developmental step another pair of legs is lost, and "Since the final observable category of creatures possesses only two legs, it is impossible that angels should have any at all." (p. 
51) There followed The Little Book of Life After Death (1836), Nanna, or Concerning the Mental Life of Plants (1848), and Zend-Avesta, or Concerning Matters of Heaven and the Hereafter (1851). In all, Dr. Mises was heard from 14 times from 1821 to 1879. Fechner always used Dr. Mises to express the dayview, the view that the universe is alive and conscious. Implicit within Fechner's satire or humor was the message that the dayview should be taken seriously. Marshall (1969) makes this point concerning Zend-Avesta: Indeed, in Zoroastrian dogma, Zend-Avesta meant the "living word," and Fechner was to intend that his own Zend-Avesta should be the word which would reveal all nature to be alive. In this work Fechner argues that the earth is ensouled, just as the human being is; but the earth possesses a spirituality which surpasses that of her creatures. (p. 54) In fact, it was in Zend-Avesta that Fechner first described what would later become psychophysics: [Fechner] laid down the general outlines of his program [psychophysics] in Zend-Avesta, the book about heaven and the future life. Imagine sending a graduate student of psychology nowadays to the Divinity School for a course in immortality as preparation for advanced experimental work in psychophysics! How narrow we have become! (Boring, 1963, p. 128) In The Little Book of Life After Death, written to console a friend who had just lost a loved one, Dr. Mises described human existence as occurring in three stages. The first stage is spent alone in continuous sleep in the darkness of the mother's womb. The second stage, after birth, is spent alternating between sleeping and waking and in the company of other people. During this second stage, people often have glimpses into the third stage. These glimpses include moments of intense faith or of intuitions that cannot be explained by one's life experiences. Dr. 
Mises tells us that we enter the third stage by dying: "The passing from the first to the second stage is called birth; the transition from the second to the third is called death" (Fechner, 1836/1992, p. 7). Just as unborn children cannot foresee their forthcoming experiences in stage two, people cannot foresee their forthcoming experiences in stage three. In the third stage, one's soul merges with other souls and becomes part of the "Supreme Spirit." It is only during this stage that the ultimate nature of reality can be discerned. Whether as Dr. Mises or not, Fechner was always interested in spiritual phenomena. He was also interested in parapsychology and even attended several seances in which he experienced the anomalous movements of a bed, a table, and even himself. His belief and involvement in parapsychology are clearly seen in the last book he wrote as Dr. Mises, The Dayview as Compared to the Nightview (1879).

Psychophysics

From Fechner's philosophical interest in the relationship between the mind and the body sprang his interest in psychophysics. He wanted desperately to solve the mind-body problem in a way that would satisfy the materialistic scientists of his day. Fechner's mystical philosophy taught him that the physical and mental were simply two aspects of the same fundamental reality. Thus, as we have seen, he accepted the double aspectism that Spinoza had postulated. But to say that there is a demonstrable relationship between the mind and the body is one thing; proving it is another matter. According to Fechner, the solution to the problem occurred to him on the morning of October 22, 1850, as he was lying in bed (H. E. Adler, 1996). His insight was that a systematic relationship between bodily and mental experience could be demonstrated if a person were asked to report changes in sensations as a physical stimulus was systematically varied. 
Fechner speculated that for mental sensations to change arithmetically, the physical stimulus would have to change geometrically. In testing these ideas, Fechner created the area of psychology that he called psychophysics. As was mentioned, Fechner's insight concerning the relationship between stimuli and sensations was first reported in Zend-Avesta (1851). Fechner spent the next few years experimentally verifying his insight and published two short papers on psychophysics in 1858 and 1859. Then in 1860 he published his famous Elements of Psychophysics, a book that arguably launched psychology as an experimental science. As the name suggests, psychophysics is the study of the relationship between physical and psychological events. Fechner's first step in studying this relationship was to state mathematically what Weber had found and to label the expression Weber's law:

ΔR/R = k

where R = Reiz (the German word for "stimulus"); in Weber's research, this was the standard stimulus. ΔR = the minimum change in R that could be detected; that is, the minimum change in physical stimulation necessary to cause a person to experience a jnd. k = a constant. As we have seen, Weber found this constant to be 1/40 of R for lifted weights. Weber's law concerns the amount that a physical stimulus must change before it results in the awareness of a difference or in a change of sensation (S). Through a series of mathematical calculations, Fechner arrived at his famous formula, which he believed showed the relationship between the mental and the physical (the mind and the body):

S = k log R

This formula mathematically states Fechner's earlier insight. That is, for sensations to rise arithmetically (the left side of the equation), the magnitude of the physical stimulus must rise geometrically (the right side of the equation). This means that as a stimulus gets larger, the magnitude of the change must become greater and greater if the change is to be detected. 
For example, if the stimulus (R) is 40 grams, a difference of only 1 gram can be detected; whereas if the stimulus is 200 grams, it takes a difference of 5 grams to cause a jnd. In everyday terms, this means that sensations are always relative to the level of background stimulation. If a room is dark, for example, turning on a dim light will be immediately noticed, as would a whisper in a quiet room. If a room is already illuminated, however, the addition of a dim light would go unnoticed, as would a whisper in a noisy room. However, Fechner did not believe his formula applied only to the evaluation of simple stimuli. He believed it applied to the more complex realm of human values as well: Our physical possessions ... have no value or meaning for us as inert material, but constitute only a means for arousing within us a sum of psychic values. In this respect they take the place of stimuli. A dollar has, in this connection, much less value to a rich man than to a poor man. It can make a beggar happy for a whole day, but it is not even noticed when added to the fortune of a millionaire. (1860/1966, p. 197)

The JND as the Unit of Sensation

Fechner assumed that as the magnitude of a stimulus increased from zero, a point would be reached where the stimulus could be consciously detected. The lowest intensity at which a stimulus can be detected is called the absolute threshold. That is, the absolute threshold is the intensity of a stimulus at or above which a sensation results and below which no detectable sensation occurs. According to Fechner, intensity levels below the absolute threshold do cause reactions, but those reactions are unconscious. In that it allowed for these negative sensations, Fechner's position was very much like those of Leibniz (petites perceptions) and Herbart (threshold of consciousness). For all three, the effects of stimulation cumulated and, at some point (the absolute threshold), were capable of causing a conscious sensation. 
Fechner's analysis of sensation started with the absolute threshold, but because that threshold provided only one measure, it was of limited usefulness. What Fechner needed was a continuous scale that showed how sensations above the absolute threshold varied as a function of level of stimulation. This was provided by the differential threshold, which is defined by how much a stimulus magnitude needs to be increased or decreased before a person can detect a difference. It was in regard to the differential threshold that Fechner found that stimulus intensities must change geometrically in order for sensation to change arithmetically. Given a geometric increase in the intensity of a stimulus, Fechner assumed that sensations increased in equal increments (jnds). With this assumption it was possible, using Fechner's equation, to deduce how many jnds above absolute threshold a particular sensation was at any given level of stimulus intensity. In other words, Fechner's law assumed that sensations increased in equal units (jnds) as the stimulus intensity increased geometrically beyond the absolute threshold. With his equation, Fechner believed that he had found the bridge between the physical and the psychical that he sought—a bridge that was scientifically respectable. Subsequent research demonstrated that the predictions generated by Fechner's equation were accurate primarily for the middle ranges of sensory intensities. Predictions were found to be less accurate for extremely high or low levels of physical intensity. 
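The arithmetic of the two laws can be sketched in a few lines of Python (a rough illustration only; the function names, the threshold value R0, and the constant c are our own choices, not Fechner's notation):

```python
import math

# Weber's law: the just noticeable difference (jnd) is a constant
# fraction k of the standard stimulus R; Weber found k = 1/40 for
# lifted weights.
def weber_jnd(R, k=1/40):
    return k * R

# Fechner's law in the form S = c * log(R / R0), where R0 stands in
# for the absolute threshold: sensation rises arithmetically while
# the stimulus rises geometrically.
def fechner_sensation(R, R0=1.0, c=1.0):
    return c * math.log(R / R0)

# The text's example: about 1 gram is detectable against a 40-gram
# standard, about 5 grams against a 200-gram standard.
jnd_40, jnd_200 = weber_jnd(40), weber_jnd(200)

# Each doubling of the stimulus adds the same increment of sensation.
sensations = [fechner_sensation(R) for R in (10, 20, 40, 80)]
```

The second list shows why geometric stimulus growth yields equal sensation steps: the differences between successive values of S are all the same.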
Psychophysical Methods

After establishing that mental and physical events varied systematically, and thus showing that a science of the mind is indeed possible (contrary to the beliefs of such individuals as Galileo, Comte, and Kant), Fechner employed several methods to further explore the mind-body relationship:

The method of limits (also called the method of just noticeable differences): With this method, one stimulus is varied and is compared to a standard. To begin with, the variable stimulus can be equal to the standard and then varied, or it can be much stronger or weaker than the standard. The goal here is to determine the range of stimuli that the subject considers to be equal to the standard.

The method of constant stimuli (also called the method of right and wrong cases): Here, pairs of stimuli are presented to the subject. One member of the pair is the standard and remains the same, and the other varies in magnitude from one presentation to another. The subject reports whether the variable stimulus appears greater than, less than, or equal to the standard.

The method of adjustment (also called the method of average error): Here, the subject has control over the variable stimulus and is instructed to adjust its magnitude so that the stimulus appears equal to the standard stimulus. After the adjustment, the average difference between the variable stimulus and the standard stimulus is measured.

These methods were another of Fechner's legacies to psychology, and they are still used today.

Fechner's Contributions

In addition to creating psychophysics, Fechner also created the field of experimental aesthetics. Between 1865 and 1876, Fechner wrote several articles attempting to quantify reactions to works of art. For example, in an effort to discover the variables that made some works of art more pleasing than others, Fechner analyzed 20,000 paintings from 22 museums (Fechner, 1871). 
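The method of limits can be sketched as a simulation with an idealized, noise-free observer (a minimal sketch of a single descending series for an absolute threshold; the threshold, starting intensity, and step size are invented for illustration, and real procedures run repeated ascending and descending series):

```python
# Idealized observer: detects any intensity at or above a fixed threshold.
def detects(intensity, true_threshold):
    return intensity >= true_threshold

# Descending series: lower the stimulus in equal steps until it is no
# longer detected, then estimate the threshold as the midpoint between
# the last detected and first undetected intensities.
def descending_limits(true_threshold, start, step):
    intensity = start
    while detects(intensity, true_threshold):
        intensity -= step
    return intensity + step / 2

estimate = descending_limits(true_threshold=5.0, start=10.0, step=1.0)
```

With a noiseless observer a single series lands within half a step of the true threshold; averaging many series is what handles the variability of real judgments.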
After publishing his major work on aesthetics (1876), Fechner spent the remainder of his professional life responding to criticisms of psychophysics. For further discussion of Fechner's experimental aesthetics, see Arnheim (1985). Fechner did not solve the mind-body problem; it is still alive and well in modern psychology. Like Weber, however, he did show that it was possible to measure mental events and relate them to physical ones. Some historians have suggested that the beginning of experimental psychology is the 1860 publication of Fechner's Elements. Although a good case can be made, most agree that another important step had to be taken before psychology could emerge as a full-fledged science: Psychology needed to be formalized as a separate discipline apart from both philosophy and physiology. As we will see in Chapter 9, it was Wilhelm Wundt who took that step.

objective and subjective differences

In 1795, astronomer Nevil Maskelyne and his assistant David Kinnebrook were setting ships' clocks according to when a particular star crossed a hairline in a telescope. Maskelyne noticed that Kinnebrook's observations were about a half-second slower than his. Kinnebrook was warned of his "error" and attempted to correct it. Instead, however, the discrepancy between his observations and Maskelyne's increased, and Kinnebrook was relieved of his duty. Twenty years later, the incident came to the attention of the German astronomer Friedrich Bessel (1784-1846), who speculated that the error had been due not to incompetence but to individual differences among observers. Bessel set out to compare his observations with those of his colleagues and indeed found systematic differences among them. This was the first reaction-time study, and its results were used to correct differences among observers by calculating personal equations. For example, if 8/10ths of a second were added to Kinnebrook's reaction time, his observations could reliably be equated with Maskelyne's. Bessel found systematic differences among individuals and a way to compensate for those differences, but his findings did not have much impact on the early development of psychology. As we will see, the early experimental psychologists were interested in learning what was true about human consciousness in general; therefore, individual differences found among experimental subjects were often attributed to sloppy methodology. Later in psychology's history (after Darwin), the study of individual differences was to be of supreme importance. Of course, the demonstration of any discrepancy between a physical event and a person's perception of that event was of great concern to the natural scientists, who viewed their jobs as accurately describing and explaining the physical world.
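Bessel's personal equation was, in effect, a simple additive correction. A minimal sketch (with made-up transit times, not Bessel's data) shows the idea: compute the mean difference between two observers' readings of the same events, then add it to one observer's times to bring the two into agreement.

```python
# Hypothetical star-transit times (seconds) recorded by two observers
# for the same four events; the second observer runs consistently slow.
maskelyne = [10.20, 15.60, 21.10, 27.90]
kinnebrook = [11.00, 16.40, 21.90, 28.70]

# The personal equation is the mean difference between the observers.
diffs = [m - k for m, k in zip(maskelyne, kinnebrook)]
personal_equation = sum(diffs) / len(diffs)  # about -0.80 s in this example

# Applying the correction equates the two observers on average.
corrected = [round(k + personal_equation, 2) for k in kinnebrook]
print(round(personal_equation, 2))
print(corrected)
```

Here the correction is a constant offset because the fabricated discrepancy is perfectly systematic; with real observations the corrected times would agree only on average.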
The problem created by Galileo's and Locke's distinction between primary and secondary qualities could be avoided by simply concentrating on primary qualities—that is, concentrating on events for which there was a match between their physical qualities and the sensations that they create. It was becoming increasingly clear, however, that the mismatch between physical events and the perceptions of those events was widespread. Newton (1704/1952) had observed that the experience of white light is really a composite of all colors of the spectrum, although the individual colors themselves are not perceived. In 1760 Van Musschenbroek discovered that if complementary colors such as yellow and blue are presented in proper proportions on a rapidly rotating disc, an observer sees neither yellow nor blue but gray. It was evident that often there was not a point-to-point correspondence between physical reality and the psychological experience of that reality. Because the most likely source of the discrepancy was the responding organism, physical scientists had reason to be interested in the biological processes by which organisms interact with the physical world. Physiologists studied the nature of nerves, neural conduction, reflexive behavior, sensory perception, brain functioning, and, eventually, the systematic relationship between sensory stimulation and sensation. It was the work of physiologists that provided the link between the questions of philosophy and the soon-to-be science of psychology. Thus, to a large extent, both the content of what was to become psychology and the methodologies used to explore that content were furnished by physiology.

James Mill

James Mill (1773-1836), a Scotsman, was educated for the ministry at the University of Edinburgh. In 1802 he moved to London to start a literary career, becoming the editor of the Literary Journal and writing for various periodicals. With the publication of perhaps his greatest tome, The History of British India, which he began writing in 1806 and finished in 1817, Mill entered a successful career with the East India Company. Mill's most significant contribution to psychology was Analysis of the Phenomena of the Human Mind, which originally appeared in 1829 and was revised under the editorship of his son John Stuart Mill in 1869. We use the 1869 edition of Analysis as our primary source of Mill's ideas. Mill's Analysis is regarded as the most complete summary of associationism ever offered.

Mill's Analysis of Association

Following Hartley, Mill attempted to show that the mind consisted of only sensations and ideas held together by contiguity. Also following Hartley, Mill said that complex ideas are composed of simple ideas. However, when ideas are continuously experienced together, the association among them becomes so strong that they appear in consciousness as one idea: The word gold, for example, or the word iron, appears to express as simple an idea, as the word colour, or the word sound. Yet it is immediately seen, that the idea of each of those metals is made up of the separate ideas of several sensations; colour, hardness, extension, weight. Those ideas, however, present themselves in such intimate union, that they are constantly spoken of as one, not many. We say, our idea of iron, our idea of gold; and it is only with an effort that reflecting men perform the decomposition. ... It is to this great law of association, that we trace the formation of our ideas of what we call external objects; that is, the ideas of a certain number of sensations, received together so frequently that they coalesce as it were, and are spoken of under the idea of unity.
Hence, what we call the idea of a tree, the idea of a stone, the idea of a horse, the idea of a man. (J. S. Mill, 1869/1967, pp. 91-93)

In fact, all things we refer to as external objects are clusters of sensations that have been consistently experienced together. In other words, they are complex ideas and, as such, are reducible to simple ideas. Mill explicitly pointed out what was more implicit in the philosophies of other "Newtonians of the mind," like Locke, Berkeley, Hume, and Hartley. That is, no matter how complex an idea becomes, it can always be reduced to the simple ideas of which it is constructed. Simple ideas can be added to other simple ideas, making a complex idea; complex ideas can be added to complex ideas, making a still more complex idea; and so forth. Still, at the base of all mental experience are sensations and the ideas they initiate. Mill believed that two factors caused variation in strengths of associations: vividness and frequency. That is, the more vivid sensations or ideas form stronger associations than less vivid ones do; and more frequently paired sensations and ideas form stronger associations than do those paired less frequently. Mill referred to frequency or repetition as "the most remarkable and important cause of the strength of our associations" (J. S. Mill, 1869/1967, p. 87). As far as vividness is concerned, Mill said that (1) sensations are more vivid than ideas, and therefore, the associations between sensations are stronger than those between ideas; (2) sensations and ideas associated with pleasure or pain are more vivid and therefore form stronger associations than sensations and ideas not related to pleasure or pain; and (3) recent ideas are more vivid and therefore form stronger associations than more remote ideas.

Utilitarianism and Associationism

In 1808, James Mill met Jeremy Bentham (1748-1832), and the two became close, lifelong friends.
Bentham was the major spokesman for the British political and ethical movement called utilitarianism. Bentham rejected all metaphysical and theological arguments for government, morality, and social institutions and instead took the ancient concept of hedonism (from the Greek word hedone, meaning "pleasure") and made it the cornerstone of his political and ethical theory: Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do. On the one hand the standard of right and wrong, on the other the chain of causes and effects, are fastened to their throne. They govern us in all we do, in all we say, in all we think: every effort we can make to throw off their subjection will serve but to demonstrate and confirm it. (Bentham, 1781/1988, p. 1)

Thus, Bentham defined human happiness entirely in terms of the ability to obtain pleasure and avoid pain. One could approach ethical matters using a sort of hedonic calculus—that is, by calculating the pleasures and pains involved in order to determine the correct action. Similarly, the best government was defined as one that brought the greatest amount of happiness to the greatest number of people. Although utilitarianism was implicit in the philosophies of a number of earlier thinkers, it was Bentham who applied hedonism to society as a whole. Bentham's efforts were highly influential and resulted in a number of reforms in legal and social institutions. Bentham was a fascinating fellow who entered Queen's College, Oxford, at age 12, earned his bachelor's degree at 15, and completed a master's by 18. If you have had a philosophy course, you may recall that his utilitarian approach, along with the axiomatic approach of Kant (whom we will consider in the next chapter), forms the basis of almost all modern approaches to ethics.
A lifelong eccentric, Bentham called for his own public dissection and subsequent mummification in his will. Both directly and indirectly, Bentham is also associated with the creation of University College London, where his remains are on display in a special cabinet called the Auto-Icon. On select occasions, his Auto-Icon joins meetings of the ruling College Council, where he is listed as "present but not voting."

James Mill's Influence

Mill's Analysis is regarded as the most complete summary of associationism. As we have seen, he attempted to show that the mind consisted of only sensations and ideas held together by contiguity. He insisted that any mental experience could be reduced to the simple ideas that made it up. Thus, he gave us a conception of the mind based on Newtonian physics. For Newton, the universe could be understood as consisting of material elements held together by physical forces and behaving in a predictable manner. For Mill, the mind consisted of mental elements held together by the laws of association; therefore, mental experience was as predictable as physical events. James Mill's professed goal was to provide the details of associationism that were lacking in Hartley's account. This he did, and in so doing, he carried associationism to its logical conclusion. In any case, the mind as viewed by Mill (and by Hartley) was completely passive; that is, it had no creative abilities. Association was the only process that organized ideas, and it did so automatically. This conception of the mind essentially ended with James Mill. In fact, James Mill's son John Stuart Mill was among the first to revise the purely mechanistic, elementistic view of his father.

John Stuart Mill

James Mill's interest in psychology was only secondary. He was a social reformer and, like the earlier empiricists, he believed social, political, and educational change is best facilitated by an understanding of human nature. He believed that utilitarianism, coupled with associationism, justified a radical, libertarian political philosophy. James Mill and his followers were quite successful in bringing about substantial social change. He also tried his theory of human nature on a smaller, more personal scale by using it as a guide in rearing his son John Stuart Mill (1806-1873). James Mill's attempt at using associative principles in raising his son must have been at least partially successful because John Stuart had learned Greek by age 3, Latin and algebra by age 8, and formal logic by age 12. Father and son began each morning with a walk in the country and a discussion of the previous day's assigned readings. Perhaps as a result of his father's intense educational practices, J. S. Mill suffered several bouts of depression during his lifetime. Or perhaps it was because, as he noted in his autobiography (1873/1969), his parents lacked tenderness toward each other and their children. However, J. S. Mill himself was able to have at least one loving relationship. He met Harriet Taylor when he was 25 and she was 23. At the time, Harriet was married with two children, and for more than 20 years J. S. Mill's relationship with Harriet was close but platonic. In 1851, two years after Harriet was widowed, she and J. S. Mill were married. Alas, Harriet died just seven years later at the age of 50. J. S. Mill's most famous work was A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence, and the Methods of Scientific Investigation (1843). This book was an immediate success, went through eight editions in Mill's lifetime, and remained a best seller throughout the 19th century.
Mill's book was considered must reading for any late-19th-century scientist. The following summary of Mill's work uses the eighth edition of his System of Logic, which appeared in 1874. In his An Examination of Sir William Hamilton's Philosophy (1865), J. S. Mill responded to criticisms of his philosophy and elaborated and defended the views of human nature he had presented in his System of Logic. In 1869 he published a new edition of his father's Analysis, adding numerous footnotes of his own that extended and clarified his father's views on associationistic psychology and sometimes criticized his father's ideas. J. S. Mill did as much as anyone at the time to facilitate the development of psychology as a science. This he did by describing the methodology that could be used in a science of human nature. In fact, he believed that the lawfulness of human thought, feeling, and action was entirely conducive to scientific inquiry.

Mental Chemistry versus Mental Physics

In most important respects, J. S. Mill accepted his father's brand of associationism. J. S. Mill believed that (1) every sensation leaves in the mind an idea that resembles the sensation but is weaker in intensity (J. S. Mill called ideas secondary mental states, sensations being primary); (2) similar ideas tend to excite one another (James Mill had reduced the law of similarity to the law of frequency, but J. S. Mill accepted it as a separate law); (3) when sensations or ideas are frequently experienced together, either simultaneously or successively, they become associated (law of contiguity); (4) more vivid sensations or ideas form stronger associations than do less vivid ones; and (5) strength of association varies with frequency of occurrence. With only the minor exception of the law of similarity, this list summarizes James Mill's notion of "mental physics" or "mental mechanics." John Stuart took issue with his father on one important point, however.
Instead of agreeing that complex ideas are always aggregates of simple ideas, he proposed a type of mental chemistry. He was impressed by the fact that chemicals often combine and produce something entirely different from the elements that made them up, such as when hydrogen and oxygen combine to produce water. Also, Newton had shown that when all the colors of the spectrum were combined, white light was produced. J. S. Mill believed that the same kind of thing happens in the mind. That is, it was possible for elementary ideas to fuse and to produce an idea that was different from the elements that made it up. J. S. Mill's contention that an entirely new idea, one not reducible to simple ideas or sensations, could emerge from contiguous experiences, emancipated associationistic psychology from the rigid confines of mental mechanics. However, if one is seeking an active, autonomous mind, one must look elsewhere. When a new idea does emerge from the synthesis of contiguous ideas or sensations, it does so automatically. Just as the proper combination of hydrogen and oxygen cannot help but become water, a person experiencing the rapid, successive presentation of the primary colors cannot help but experience white.

Toward a Science of Human Nature

Others before him (such as Locke, Hume, and Hartley) had as their goal the creation of a mental science on par with the natural sciences. It was J. S. Mill, however, speaking from the vantage point of perhaps the most respected philosopher of science of his day, who contributed most to this development of psychology as a science. J. S. Mill began his analysis by attacking the common belief that human thoughts, feelings, and actions are not subject to scientific investigation in the same way that physical nature is. He stressed the point that any system governed by laws is subject to scientific scrutiny, and this is true even if those laws are not presently understood. Mill gave the example of meteorology.
He indicated that no one would disagree that meteorological phenomena are governed by natural laws, and yet such phenomena cannot be predicted with certainty, only probabilistically. Even though a number of the basic laws governing weather are known (such as those concerning heat, electricity, vaporization, and elastic fluids), a number are still unknown. Also, observing how all causes of weather interact to cause a meteorological phenomenon at any given time is extremely difficult, if not impossible. Thus, meteorology is a science because its phenomena are governed by natural laws, but it is an inexact science because knowledge of those laws is incomplete and measurement of particular manifestations of those laws is difficult. Sciences, then, can range from those whose laws are known and the manifestations of those laws easily and precisely measured to those whose laws are only partially understood and the manifestations of those laws measured only with great difficulty. In the latter category, Mill placed sciences whose primary laws are known and, if no other causes intervene, whose phenomena can be observed, measured, and predicted precisely. However, secondary laws often interact with primary laws, making precise understanding and prediction impossible. Because the primary laws are still operating, the overall, principal effects will still be observable, but the secondary laws create variations and modifications that cause predictions to be probabilistic rather than certain. Mill (1843/1874) gave the example of tidology: It is thus, for example, with the theory of the tides. ... As much of the phenomena as depends on the attraction of the sun and moon is completely understood, and may, in any, even unknown, part of the earth's surface, be foretold with certainty; and the far greater part of the phenomena depends on those causes. 
But circumstances of a local or casual nature, such as the configuration of the bottom of the ocean, the degree of confinement from shores, the direction of the wind, etc., influence, in many or in all places, the height and time of the tide; and a portion of these circumstances being either not accurately knowable, not precisely measurable, or not capable of being certainly foreseen, the tide in known places commonly varies from the calculated result of general principles by some difference. ... Nevertheless, not only is it certain that these variations depend on causes, and follow their causes by laws of unerring uniformity. ... General laws may be laid down respecting the tides, predictions may be founded on those laws, and the result will in the main, though often not with complete accuracy, correspond to the predictions. (p. 587)

Thus, meteorology and tidology are sciences, but they are not exact sciences. An inexact science, however, might become an exact science. For example, astronomy became an exact science when the laws governing the motions of astronomical bodies became sufficiently understood to allow prediction of not only the general courses of such bodies but also apparent aberrations. It is the inability of a science to deal with secondary causation that makes it inexact. Mill viewed the science of human nature (psychology) as roughly in the same position as tidology. The thoughts, feelings, and actions of individuals cannot be predicted with great accuracy because we cannot foresee the circumstances in which individuals will be placed. This in no way means that human thoughts, feelings, and actions are not caused; it means that the primary causes of thoughts, feelings, and actions interact with a large number of secondary causes, making accurate prediction extremely difficult.
However, the difficulty is understanding and predicting the details of human behavior and thought, not predicting its more global features. Just as with the tides, human behavior is governed by a few primary laws, and that fact allows for the understanding and prediction of general human behavior, feeling, and thought. What the science of human nature has, then, is a set of primary laws that apply to all humans and that can be used to predict general tendencies in human thought, feeling, and action. What the science of human behavior does not have is knowledge of how its primary laws interact with secondary laws (individual characters and circumstances) to result in specific thoughts, feelings, and actions. Mill believed that it would just be a matter of time before "corollaries" would be deduced from the primary (universal) laws of human nature, which would allow for more refined understanding and prediction of human thought, feeling, and action. What are these primary (universal) laws of human nature from which a more exact science of human nature will be deduced? They are the laws of the mind by which sensations cause ideas and by which ideas become associated. In other words, they are the laws established by the British empiricists in general, but more specifically by Hume, Hartley, James Mill, and, of course, J. S. Mill with his notion of mental chemistry.

J. S. Mill's Proposed Science of Ethology

In Chapter 5, Book VI, of his System of Logic, Mill argued for the development of a "science of the formation of character," and he called this science ethology. It should be noted that Mill's proposed science of ethology bore little resemblance to modern ethology, which studies animal behavior in the animal's natural habitat and then attempts to explain that behavior in evolutionary terms (as we will see in Chapter 18). As Mill saw it, ethology would be derived from a more basic science of human nature.
That is, first the science of human nature (psychology) would discover the universal laws according to which all human minds operate, and then ethology would explain how individual minds or characters form under specific circumstances. Putting the matter another way, we can say that the science of human nature provides information concerning what all humans have in common (human nature), and ethology explains individual personalities (individual differences). What Mill was seeking, then, was the information necessary to convert psychology from an inexact science, like tidology, into an exact science. In other words, he wanted to explain more than general tendencies; he also wanted to explain the subtleties of individual behavior in specific circumstances. It is interesting that Mill did little more than outline his ideas for ethology. He never attempted to develop such a science himself, and although most other sections of his System of Logic were substantially revised during its many editions, the section on ethology was never developed further or substantially modified. According to Leary (1982), Mill's attempt to develop a science of ethology failed because the science of human nature from which it was to be deduced was itself inadequate. Mill's theory of human nature was excessively intellectual. That is, it stressed how ideas become associated. It is difficult to imagine how something like character (personality), which to a large extent is emotional, could be deduced from a philosophy stressing the association of ideas. Eventually, Mill's ethology reemerged in France as the study of individual character. The French approach placed greater emphasis on emotional factors than Mill and his followers had, and their approach was somewhat more successful.

Social Reform

Like his father, J. S. Mill was a dedicated social reformer. His causes included freedom of speech, representative government, and the emancipation of women.
He began his book The Subjection of Women (1861/1986) with the following statement: The object of this Essay is to explain, as clearly as I am able, the grounds of an opinion which I have held from the very earliest period when I had formed any opinions at all on social or political matters, and which, instead of being weakened or modified, has been constantly growing stronger by the progress of reflection and the experience of life: That the principle which regulates the existing social relations between the two sexes—the legal subordination of one sex to the other—is wrong in itself, and now one of the chief hindrances to human improvement; and that it ought to be replaced by a principle of perfect equality, admitting no power or privilege on the one side, nor disability on the other. (p. 7)

J. S. Mill went on to note that male chauvinism was often defended on the basis of natural law (females are biologically inferior to males) or on the basis of some religious belief. Mill considered both defenses invalid and believed that a sound science of human nature (psychology) would provide the basis for social equality. Sexism, he said, would fall "before a sound psychology, laying bare the real root of much that is bowed down to as the intention of nature and the ordinance of God" (1861/1986, p. 10). As might be expected, Mill's book was met with considerable (male) hostility. Like his father, J. S. Mill embraced Bentham's utilitarianism: One should always act in a way that brings the greatest amount of pleasure (happiness) to the greatest number of people. This principle should consider both short- and long-term pleasure and treat the happiness of others as equal in value to our own. Societies can be judged by the extent to which they allow the utilitarian principle to operate. Although J. S. Mill accepted Bentham's general principle of utilitarianism, his version of it differed significantly from Bentham's.
In Bentham's calculation of happiness, all forms of pleasure counted equally. For example, sublime intellectual pleasures counted no more than eating a good meal. J. S. Mill disagreed, saying that, for most humans, intellectual pleasures were far more important than the biological pleasures we share with nonhuman animals. J. S. Mill said, "It is better to be a human dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied" (1861/1979, p. 10).

the rise of experimental psychology

The very important difference between what is physically present in the world and what is experienced psychologically had been recognized and agonized over for centuries. This was the distinction that had caused Galileo to conclude that a science of psychology was impossible and Hume to conclude that we could know nothing about the physical world with certainty. Kant amplified this distinction when he claimed that the mind embellished sensory experience, and Helmholtz reached the same conclusion with his concept of unconscious inference. With advances in science, much had been learned about the physical world—including about the physical stimulation of the sense receptors, which convert that stimulation into nerve impulses, and about the brain structures where those impulses terminate. There was never much doubt about the existence of consciousness; the problem was in determining what we were conscious of and what caused that consciousness. By now it was widely believed that conscious sensations were triggered by brain processes, which themselves were initiated by sense reception. But the question remained: How are the two domains (conscious mental events and the physiological processes of our sensory system) related?

paul broca

On September 13, 1848, Phineas Gage was working as a railroad construction supervisor when an explosion blew an iron tamping rod through his skull. As shown in Figure 8.2, the sizeable rod entered just below his left eye and exited through the top of his head. Amazingly, not only did Gage survive the accident, but he also recovered fully in physical terms. What changed, however, was Gage's personality. Dr. John Harlow (1868) observed: He is fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operation, which are no sooner arranged than they are abandoned. ... His friends and acquaintances said he was "no longer Gage." (pp. 339-340)

Figure 8.2 The skull of Phineas Gage. Courtesy of the National Library of Medicine

Modern work based on Gage's skull and Harlow's observations (Damasio, Grabowski, Frank, Galaburda, & Damasio, 1994) has linked the damaged areas of the brain with corresponding expected behavioral changes. Another person to make such postmortem correlations was Paul Broca (1824-1880). Using the clinical method, Broca cast doubt on Flourens's conclusion that the cortex acted as a whole. Boring (1950) described Broca's observation: Broca's famous observation was in itself very simple. There had in 1831 been admitted at the Bicêtre, an insane hospital near Paris, a man whose sole defect seemed to be that he could not talk. He communicated intelligently by signs and was otherwise mentally normal.
He remained at the Bicêtre for thirty years with this defect and on April 12, 1861, was put under the care of Broca, the surgeon, because of a gangrenous infection. Broca for five days subjected him to a careful examination, in which he satisfied himself that the musculature of the larynx and articulatory organs was not hindered in normal movements, that there was no other paralysis that could interfere with speech, and that the man was intelligent enough to speak. On April 17 the patient—fortunately, it must have seemed, for science—died; and within a day Broca had performed an autopsy, discovering a lesion in the third frontal convolution of the left cerebral hemisphere, and had presented the brain in alcohol to the Société d'Anthropologie. (p. 71) Actually, even Broca was not the first to suggest that clinical observations be made and then to use autopsy examinations to locate a brain area responsible for a disorder. For example, the French scientist Jean-Baptiste Bouillaud (1796-1881) had done so as early as 1825. Using the clinical method on a large number of cases, Bouillaud reached essentially the same conclusion concerning the localization of a speech area on the cortex that Broca was to reach later using the same technique. Why, then, do we credit Broca with providing the first credible evidence for cortical localization and not Bouillaud? It is primarily because Bouillaud had been closely associated with phrenology and, by the time that Broca made his observations, "The scientific community [was] overly cautious about anything or anyone associated in any way with Gall or phrenology" (Finger, 1994, p. 37). In any case, subsequent research confirmed Broca's observation that a portion of the left cortical hemisphere is implicated in speech articulation or production, and this area has been named Broca's area.
In 1874, just over a decade after Broca's discovery, the German neurologist Carl Wernicke (1848-1905) discovered a cortical area, near Broca's area, responsible for speech comprehension. This area, on the left temporal lobe of the cortex, has been named Wernicke's area.

Broca's localizing of a function on the cortex supported the phrenologists and damaged Flourens's contention that the cortex acted as a unit. Unfortunately for the phrenologists, however, Broca did not find the speech area to be where the phrenologists had said it would be.

Other aspects of Broca's work were less impressive. Reflecting the Zeitgeist, he engaged in craniometry (the measurement of the skull and its characteristics) in order to determine the relationship between brain size and intelligence. He began his research with a strong conviction that such a relationship existed, and, not surprisingly, he found evidence for it. In 1861 Broca summarized his findings:

In general, the brain is larger in mature adults than in the elderly, in men than in women, in eminent men than in men of mediocre talent, in superior races than in inferior races. ... Other things equal, there is a remarkable relationship between the development of intelligence and the volume of the brain. (Gould, 1981, p. 83)

Paul Broca. Courtesy of the National Library of Medicine.

Broca was aware of several facts that contradicted his theory: there existed an abundance of large-brained criminals, highly intelligent women, and small-brained people of eminence; and Asians, despite their smaller average brain size, were generally more intelligent than ethnic groups with larger brains. In spite of these contradictions, and in the absence of reliable, supportive evidence, Broca continued to believe in the relationship between brain size and intelligence until his death. It was then discovered that his own brain weighed 1,424 grams: "A bit above average to be sure, but nothing to crow about" (Gould, 1981, p. 92).
So what is the relationship between brain size and intelligence? Deary (2001) first reviews the contemporary research on the topic and then concludes, "There is a modest association between brain size and ... intelligence. People with bigger brains tend to have higher mental test scores. We do not know yet why this association occurs" (p. 45). Thus, it appears that Broca and other craniometricians were not totally wrong. However, their claims far exceeded their evidence. As we will see in Chapter 10, the tendency to "scientifically" confirm personal beliefs concerning intelligence continued even when measures of intelligence became more sophisticated.

space perception and color vision

On the matter of space perception, we have seen that Helmholtz believed that it develops slowly from experience as physiological and psychological events are correlated. Hering, however, believed that, when stimulated, each point on the retina automatically provides three types of information about the stimulus: height, left-right position, and depth. Following Kant, Hering believed that space perception exists a priori. For Kant, space perception was an innate category of the mind; for Hering, it was an innate characteristic of the eye. This controversy became known as the Helmholtz-Hering debate, and in various guises it continues even today.

After working on the problem of space perception for about 10 years, Hering turned to color vision. Hering observed a number of phenomena that he believed either were incompatible with the Young-Helmholtz theory or could not be explained by it. He noted that certain pairs of colors, when mixed together, give the sensation of gray. This was true for red and green, blue and yellow, and black and white. He also observed that a person who stares at red and then looks away experiences a green afterimage; similarly, blue gives a yellow afterimage. Hering also noted that individuals who have difficulty distinguishing red from green can still see yellow, and that it is typical for a color-blind person to lose the sensation of both red and green, not just one or the other. All these observations at least posed problems for the Young-Helmholtz theory, if they did not contradict it.

To account for these phenomena, Hering theorized that there are three types of receptors on the retina but that each can respond in two ways. One type of receptor responds to red-green, one type to yellow-blue, and one type to black-white. Red, yellow, and white cause a "tearing down," or catabolic, process in their respective receptors. Green, blue, and black cause a "building up," or anabolic, process in their respective receptors.
If both colors to which a receptor is sensitive are experienced simultaneously, the catabolic and anabolic processes cancel out, and the sensation of gray results. If only one color to which a receptor is sensitive is experienced, its corresponding process is depleted, leaving only its opposite to produce an afterimage. Finally, Hering's theory explained why individuals who cannot respond to red or green can still see yellow and why the inability to see red is usually accompanied by an inability to see green.

Ewald Hering. Reprinted by permission of Open Court Publishing Company, a division of Carus Publishing Company, from Philosophical Portrait Series, © 1898 by Open Court Publishing Company.

For nearly 50 years, lively debate ensued between those accepting the Young-Helmholtz theory and those accepting Hering's, and the matter is still far from settled. The current view is that the Young-Helmholtz theory is correct in that there are retinal cells sensitive to red, green, and blue, but that there are neural processes beyond the retina that are more in accordance with Hering's proposed metabolic processes.

early research on brain functioning

Toward the end of the 18th century, it was widely believed that a person's character could be determined by analyzing his or her facial features, body structure, and habitual patterns of posture and movement. Such an analysis was called physiognomy (Jahnke, 1997). One version of physiognomy that became extremely popular was phrenology.

