Exam 4/Final Exam


cognitive dissonance theory The concept that when a prophecy or some other belief fails to be true, a believer will experience an intense state of discomfort until the discrepancy with the belief is resolved.

emotion A mental state elicited by an event that a person has appraised as relevant to fulfilling his or her needs, and that motivates behavior to fulfill those needs and reach the individual's goals.

appraisal The interpretation or evaluation of a situation that leads one to experience an affective state, such as a mood or emotion.

affect The many different shades of mood, feeling, and evaluative experience related to emotion.

An emotion is a mental state elicited by an event that a person has appraised as relevant to fulfilling his or her needs, and that motivates behavior to fulfill those needs and reach the individual's goals (Schirmer, 2015). To illustrate, suppose you have been dating someone for two years and learn that your partner, with whom you have shared personal information, has divulged some of that information to someone else. Your goal is to maintain trust and closeness with your partner, yet he or she has betrayed your trust. In such a case, you are likely to experience the emotion of anger.

Emotions are often marked by expressive behaviors, subjective experience, motivated dispositions to behave a certain way, and changes in physiological arousal—that is, changes in bodily functions (Buck, 1988). For example, if you responded to this situation with anger, your lips might be compressed together tightly, or your mouth might be set in a square pattern showing your teeth (a common facial expression of anger). Your muscles and entire body may feel tense, and a hot feeling may pour over your body (subjective experience). You may want to yell at or hit the person who betrayed you (behavioral disposition), and your heart may beat faster (a physiological change, showing increased arousal).

If you interpreted the situation another way, however, you might experience a different emotion. What if you thought about how this betrayal could end your relationship? Now, you might feel sad at your anticipated loss. Again, your goal of maintaining a close relationship is thwarted, but now, focusing on the imminent loss of love and affection, you feel sad. How you interpret or appraise a situation is an important factor contributing to which emotion you experience, but it does not necessarily cause or determine that emotion. Although the process of experiencing an emotion often begins with an appraisal, it almost simultaneously involves other components, such as physiological, motor, and psychological changes (Ellsworth, 2013). Appraisal theories generally assume that emotional experience has many variations that differ by degree. Thus, a person who is betrayed might feel sadness, anger, or something in between. The many different shades of mood, feeling, and evaluative experience related to emotion are generally referred to as affect.

In contrast to supporters of appraisal theory, proponents of basic or discrete emotions theories assume that we have a small set of basic emotions that fall into distinct categories. One commonly defined set of discrete emotions includes happiness, interest, anger, sadness, disgust, and fear (Izard, 2007). These discrete emotions are natural kinds, in that they have a biological or evolutionary basis. As such, each emotion is thought to have its own form of facial and bodily expression, its own pattern of physiological response, and its own action tendency. One type of evidence for discrete emotions comes from research suggesting that people from different cultures share the same discrete facial expressions for the six basic emotions shown in Figure 9.2. Can you recognize the emotion displayed in each? Not only can people from one culture typically recognize the facial expressions of people from other cultures, but each discrete emotion is also associated with a specific pattern of facial expression (Ekman, 1994). Other evidence supporting discrete emotions comes from research on the action tendencies that people report as part of specific emotions (Roseman, Wiest, & Swartz, 1994).
If emotions evolved through natural selection, then we would expect them to serve adaptive functions. Roseman and colleagues found that the dominant action tendency for fear was the urge to run away and that the motivational goal was to get to a safe place. The distinction between positive and negative emotions also helps us understand their adaptive functions. For example, the two positive emotions of interest and joy function early in life to help the infant learn and explore the environment (Izard, 2007). In contrast, negative emotions such as fear, anger, disgust, and sadness, which occur less frequently, function to interrupt some behavior or to signal that action is needed in response to an aversive or threatening event.

The practices discussed in this section have several of the "marks" of pseudoscience or have not been adequately studied to determine their effectiveness. Some meet many of the criteria outlined in Table 5.1 indicating that a field or practice is pseudoscientific. At the very least, both professionals and clients should be skeptical about using such practices and treatments. Even if a treatment is harmless, a client who receives it may get less help than he or she would from a therapist who employs a more effective, evidence-based approach. Devoting resources to pseudoscientific therapies certainly wastes time and money that could have been better spent on more effective treatment.

Freudian psychoanalysis, developed by Sigmund Freud, was perhaps the first psychotherapy to be underpinned by an elaborate theory of how it worked. It may surprise you to learn that some who have examined psychoanalysis have persuasively argued that Freud's famous therapeutic approach is pseudoscientific (e.g., Blitz, 1991; Popper, 1959; Van Rillaers, 1991). The philosopher of science Karl Popper found in discussions with Alfred Adler, one of Freud's students, that Adler could use psychoanalytic theory to account for observations from any case, even when data contradicted the original prediction. Recall that true scientists make specific predictions that are testable and falsifiable. In contrast, Adler and other psychoanalysts sometimes expanded the meaning of predictions or hypotheses to account for data that did not fit their predictions. Adler considered the ability of psychoanalysts to explain any observation to be a strength of the theory, but Popper argued that it was just the opposite. According to Popper, this sort of "after-the-fact" explanation made psychoanalysis incapable of being falsified or refuted. By shielding itself from falsification, he claimed, psychoanalysis was a pseudoscience.

Closer examination of psychoanalytic theory shows that it is the theory itself that tends to make it hard to test and falsify. One key assumption is that psychological problems are due to unconscious motives resulting from traumas, conflicts, and other problems that may have occurred in a person's childhood. The psychoanalyst is charged with helping the patient become aware of these unconscious motives so that the patient can gain insight and overcome the obstacles they impose. This process is difficult because the therapist knows about the unconscious motives and conflicts only through interpretation of the statements and actions of the patient, who is not consciously aware of them. Freud also interpreted the dreams that patients reported as a way to uncover unconscious material, applying a psychoanalytic interpretation to what he thought were symbolic elements of the dream. Subsequent research on dreams has shown that they are often about mundane, everyday events that lack any particular symbolic meaning. Sometimes patients would object to Freud's interpretations, but Freud argued that a patient's resistance to his interpretation indicated that he was getting closer to uncovering the unconscious origin of the problem. Of course, an alternative explanation is that Freud was showing confirmation bias and using the rather general ideas of psychoanalytic theory to impose his own symbolic interpretations on the statements and dreams that he assumed reflected unconscious motives of which neither he nor his client was aware.
Another problem with psychoanalysis is that it has been mostly supported by case study data, a relatively low-quality kind of evidence—and sometimes not even by good case study data. For example, one of Freud's important cases was that of Dora; by his own admission, this case was based largely on his memory of their exchanges (Eisner, 2000). Moreover, Freud was not particularly interested in verifying the history of his patients; this failure to confirm the facts of a case further weakened his claims that some unconscious event from his patient's past was causing the problem. It is not even clear whether Freud accurately diagnosed Dora, given modern ideas about psychological disorders (Eisner, 2000). Despite the fact that psychoanalysis has been supported primarily by case study data and almost no higher-quality data from experiments, followers of Freud continue to maintain that psychoanalytic theory is correct and that psychoanalysis is effective.

Freudian psychoanalysis The psychodynamic approach to the mind and psychotherapy developed by Sigmund Freud.

hallucination A cognitive error in which a person experiences something as if it were perceived, even though nothing in the environment directly corresponds to it.

Have you ever heard someone call your name, only to discover no one was there? Although this is a mild form of hallucination, it is still a hallucination. Hallucinations are a kind of cognitive error in which a person experiences something so real that it is as if it were perceived—but the stimulus is not truly there. These are not simply perceptual errors, though: Unlike perception, hallucinations often do not correspond directly to anything in the environment at the time and are more like things imagined. Like imagined stimuli, they are internally generated, because no external stimulation of a sensory organ corresponds to what has initiated the experience (Aleman & Laroi, 2008). Yet, unlike mental images that you can willfully change, hallucinations happen to you as if you are perceiving something. Although you could imagine a face and then change it in your mind, you could not willfully change your hallucination of a face.

Hearing voices that are not really there may seem particularly worrisome because people with serious mental disorders, such as schizophrenia, often experience auditory hallucinations, and people who take hallucinogenic drugs tend to have visual hallucinations. But hallucinations occur in normal, nondrugged people, especially those who report experiencing stress and poor sleep quality. In one study, 71% of college students reported having had at least one such experience (Posey & Losch, 1983). In another study, almost half of the students who reported having a verbal hallucination said they experienced at least one per month (Barrett & Etheridge, 1992). Another normal condition in which hallucinations occur is when people begin to fall asleep. As they do, they may enter a hypnagogic state, a state of reverie in which people sometimes experience wild imaginings that seem real.

Sometimes normal, nondrugged individuals hallucinate when faced with physically demanding conditions. Shermer (2011) reported accounts of competitors hallucinating during the grueling "Ride Across America" bicycle race. Competitors in this race ride as many as 350 miles per day, sometimes under extreme weather conditions and with little sleep. Some riders reported seeing hieroglyphics spread across the road and mythical creatures in splotches on the pavement.

Hallucinations are associated with a variety of medical and other conditions, too (Sacks, 2012). Some people who lose their hearing experience auditory hallucinations. Those who become blind often have visual hallucinations. Some people who are deprived of sensory stimulation also hallucinate. Temporal lobe epilepsy can produce hallucinations and religious visions (Aleman & Laroi, 2008). Migraines, too, can produce an aura—a kind of zigzag, wavy pattern in the visual field that usually lasts a few minutes and may signal the onset of a migraine headache. For example, I once had an "optical migraine" during class. As I went over a quiz with students, a wavy, kaleidoscopic pattern progressively filled my visual field until I could not see the quiz. I described the experience I was having to students, who were surprised when I discontinued class (I hardly ever call off class). Fortunately, the hallucination of light patterns subsided after a few minutes without the migraine headache.

Tracing the development of the idea of hallucinations can help explain why they are important to critical and scientific thinking. It was not until the eighteenth century that hallucinations became associated with errors of the senses or with disease.
Before this time, they were usually referred to as "apparitions" (Aleman & Laroi, 2008). The word apparition has long been associated with ghosts and the sensed presence of unseen beings. People who hallucinate unusual, but seemingly real, things may mistakenly interpret their experience as evidence for the existence of ghosts and strange beings. Like illusions, normal hallucinations raise doubts about the quality of personal experience as evidence.

Another normal hallucination that often occurs under stressful conditions is the "sensed presence," in which someone senses that another person, either living or dead, is present, usually for a limited period of time. Sometimes a sensed presence has served as a "rational" voice to help mountain climbers overcome their fear and survive a catastrophe on the cliffs. Similarly, Charles Lindbergh, the first person to fly nonstop across the Atlantic, came to believe, after hours of sleep deprivation, that there was a presence in his plane guiding him to his destination. At other times, a sensed presence is not helpful. Under the extremely stressful conditions of the 1,000-mile Iditarod dogsled race in Alaska, a competitor named Joe Garnie believed he saw a man riding in his sled. After failing to persuade the presence to get out of the sled, Garnie reported that he swatted at it to get it to leave (Shermer, 2011).

Hallucinations, such as the appearance of the man in the dogsled, are internally generated experiences with little input from stimuli in the environment. This suggests that information stored in memory is likely being retrieved and somehow transformed and constructed into experiences that seem real. From this perspective, hallucinations seem like an extreme form of memory error. As such, hallucinations can serve as the basis of strange experiences that people later remember. For example, a hypnotist may suggest to someone who is especially susceptible to hypnosis that she hears or sees something that is not there. Later, when no longer hypnotized, the person may recall the suggested experience of the hallucination as real. Even outside of hypnosis, a person's background knowledge and perceptual expectations may sometimes dominate mental processing and result in hallucination (Aleman & Vercammen, 2013).

Dreamlike states can also produce strange experiences that are later remembered as real. In the cases of recalled alien abduction, discussed earlier, those people likely remembered what was actually an experience of sleep paralysis and the contents of a dreamlike state that occurs when sleeping and waking cycles fall out of sync (Clancy, 2005). Normally, sleep paralysis prevents us from moving so that we do not act out our irrational dream content, but occasionally people wake up before the sleep paralysis and dream have subsided. Failing to realize that they are still dreaming, they interpret the experience as actually being paralyzed while aliens draw them up off their beds and abduct them. These examples suggest the need to examine the contribution of memory and memory errors to our experience and thinking.

Even with only minimal information at your disposal, you may have quickly formed opinions about the mental status of the two people in the opening vignettes, just as I did. How are we able to do this so quickly? The answer is that we use our commonsense or folk theories about abnormal behavior and the availability heuristic to judge whether behaviors are abnormal (Haslam, 2005). Recall from Chapter 11 that heuristics are cognitive shortcuts that underlie rapid Type 1 thinking. If a behavior seems unfamiliar (unavailable), then we may judge it to be abnormal. But judging whether a person has a mental disorder is not so easy. Making the correct judgment requires careful, deliberate thinking based on a larger, more representative sample of a person's behavior, rather than the simple snapshots of behavior provided in the chapter-opening scenarios.

Of course, judging whether a behavior is abnormal also depends on which norms (criteria) you use to judge. Are people's informal judgments of abnormal behavior based on the same norms as those used by professional clinicians? This question would be easier to answer if experts agreed on how to distinguish normal from abnormal behavior—but they do not (Bartlett, 2011; McNally, 2011; Smoller, 2012; Szasz, 1974). One expert view is that abnormal behavior differs from average or typical behavior, perhaps based on the folk theory view that abnormal behavior is infrequent. Other experts have argued that using conformance with what is commonly considered "normal" as a standard is setting the bar too low (Bartlett, 2011). "Normal" people lie, start fights, bully others, overeat while others starve, and generally do not reach their full potential as human beings. Still others point out that what is considered "normal" is defined differently in different cultures (Smoller, 2012). In China, men sometimes seek help for a condition known as koro, or "shrinking penis," which involves panic and the impression that the penis is shrinking into the abdomen. Koro has been recognized in Chinese traditional medicine for centuries but not in the West.

Despite disagreements about how to define abnormal behavior, clinicians generally consider this standard to be important in deciding whether a person suffers from a mental disorder. Table 13.1 contains preliminary questions to help determine the severity of a problem and the possible presence of a mental disorder. If the answer to each question is "yes," then the troubled individual is more likely to have a psychological problem warranting a diagnosis. To help you remember the main word in each of the three questions (maladaptive, abnormal, and distressing), think of the first letter of each—m, a, d—to form the word mad, an outdated term that is still sometimes used to describe someone with a mental disorder. When you think of the word mad, you can decode it into the first letter of each main word.

Preliminary Questions to Help Decide Whether a Serious Psychological Problem Exists

Is the person's behavior:

1. Maladaptive? Does the person's behavior bother other people, interfere with his or her functioning, or get in the way of his or her effectiveness in adaptively responding to stimuli in the environment?
2. Abnormal? Is the behavior very unusual, excessive, and of long duration in ways that cannot be attributed to the context in which it appears?
3. Distressing? Does the person find his or her behavior bothersome, making life unpleasant or troubled?

Scientists apply good theories, such as modern evolutionary theory, to help them make predictions and explain phenomena. A theory is a set of general principles that attempts to explain and predict behavior or other phenomena (Myers & Hansen, 2012). Predictions from a scientific theory tend to be confirmed more often when they are based on many prior, carefully made observations. No other theory in biology (or in psychology, for that matter) has been so well supported and had so many of its predictions confirmed as modern evolutionary theory (Coyne, 2009). Still, its predictions are not perfect, and it continues to be improved (Gould, 2002).

theory A set of general principles that attempts to explain and predict behavior or other phenomena.

Isaac Asimov made a similar point when he explained how the theory that the earth is round is superior to the theory that it is flat (Asimov, 1989). The round-earth theory is not completely true: The earth is not perfectly spherical but bulges outward at the equator. However, the round-earth theory is clearly better than the flat-earth theory because predictions made from it are more accurate than those made from the flat-earth theory. Oddly, despite thousands of satellite images revealing the curvature of the earth, some "flat-earthers" still do not accept the better theory. They are demonstrating belief perseverance, the refusal to reject or revise one's faulty belief when confronted with evidence that clearly refutes that belief (Anderson, 2007). Another good example is how "birthers," or people who continued to believe that former U.S. President Barack Obama was not born in the United States (thus making him ineligible to be president), maintained that belief even after Obama produced his birth certificate showing he was born in Hawaii.

belief perseverance The refusal to reject or revise one's faulty belief when confronted with evidence that clearly refutes that belief.

That one theory can be shown to be better than another is important because even people who have not studied psychology use their own informal, often inaccurate, commonsense "theories" to explain behavior and other events (Rips & Conrad, 1989). These popular beliefs are not really scientific theories, but they are sometimes called commonsense psychology (Heider, 1958; Myers & Hansen, 2012). For example, common folk wisdom says "opposites attract," suggesting that people who are very different from each other may be attracted to each other. On the other hand, you have probably also heard it said that "birds of a feather flock together," suggesting that people who are more similar will tend to be attracted to each other (Stanovich, 2010). Many popular Internet dating sites endorse the idea that similarity in couples promotes successful relationships. Other people would argue that being with someone who is different from you keeps the relationship interesting. Both sides can't be right. Psychological research, in general, supports the idea that people who are similar are more likely to be attracted to each other (e.g., Byrne, 1971; Lewak, Wakefield, & Briggs, 1985; Wade & Tavris, 1993). Therefore, the idea that "opposites attract" in interpersonal attraction is a misconception to be abandoned in favor of the notion that people tend to be more attracted to people like themselves.

commonsense psychology The use of personal, informal, and often inaccurate commonsense or folk "theories" to explain behavior and mental events.
Sometimes, however, commonsense psychological ideas are correct, such as the idea that studying more leads to better learning and memory. It does (Baddeley, Eysenck, & Anderson, 2015). Critical thinking and good scientific research can help determine which of your ideas are right and which are wrong. The important thing is that learning how to think critically can help you decide what to think, so that you are not stuck with wrong and sometimes dangerous ideas. Unfortunately, considerable evidence suggests that people often do not think critically.

Suppose you are accused of committing a crime and your fate depends on the judgment of 12 jurors. You would hope that the jury carefully and objectively evaluates all the evidence relevant to your case. Unfortunately, Kuhn, Weinstock, and Flaton (1994) found that many participants asked to evaluate the evidence from an actual murder trial constructed only one story or theory to account for the many facts and details of the case. Jury reasoning is an especially demanding task for citizens because they must consider multiple perspectives and coordinate various kinds of evidence with more than one theory. In general, the success of a democracy depends on its citizens making good judgments about the claims made by the media, politicians, and other sources (Glaser, 1985; Paul, 1984).

The world of finance provides many examples of CT failures. Many U.S. citizens have massive credit card debt and do not realize that using a credit card is essentially taking out a high-interest loan (Stanovich, 1994). Likewise, the wishful thinking of investors and the uncritical evaluation of loan applications by lenders led to the savings and loan fiasco of the late 1980s, which cost American taxpayers hundreds of billions of dollars. Another example was the Ponzi scheme of Bernard Madoff, discovered in 2008. In a Ponzi scheme, the schemer makes investment promises that are too good to be true, pockets investors' money, and repays earlier investors with the money from new investors (Greenspan, 2009). Madoff was said to have "made off" with $65 billion of investors' money, including that of actor Kevin Bacon, director Steven Spielberg, and many less well-known individuals.

The failure to think critically is often associated with thinking errors. Thinking errors are mistakes in judgment and reasoning that take a variety of forms, as when people fail to follow the rules of logic or misunderstand probability in making judgments. Other thinking errors occur when people use a rule that in some cases leads to a good judgment, but in other cases does not. In each chapter, we will examine different thinking errors and how to handle them.

Thinking errors and the failure to think critically in general often result in misconceptions. A misconception is a persistent, mistaken idea or belief that is contradicted by established scientific evidence (Taylor & Kowalski, 2004). Psychological misconceptions are sometimes called "myths" about the mind because they are firmly held beliefs about behavior and mental processes that are not supported by psychological research (Lilienfeld, Lynn, Ruscio, & Beyerstein, 2010). Many commonsense psychological theories are misconceptions, such as the ideas that the moon causes people to behave abnormally and that opposites attract in romantic relationships. Another common misconception among students is that staying with your first answer on a test is better than changing an answer. About three-fourths of students agreed that changing one's first response would tend to lower the test score (Balance, 1977; Benjamin, Cavell, & Shallenberger, 1984). Research clearly shows this is a misconception and that students are statistically better off changing their answers (Benjamin et al., 1984; Geiger, 1996). Of course, changing an answer does not work if you are simply guessing. Changing an answer is more effective when you have good reason to think the initial answer might be wrong (Shatz & Best, 1987).
Throughout this book, we examine psychological misconceptions like these, so you can revise your incorrect ideas. Sometimes, the failure to think critically about scientific questions leads a person to accept pseudoscientific claims. The prefix pseudo- means "false." Pseudoscience, discussed at length in Chapter 5, is an approach that makes false claims and masquerades as real science. A good example of a pseudoscience is astrology; it might appear to be scientific, but it is neither scientific in approach nor supported by scientific research (Crowe, 1990). The horoscopes that astrologers cast appear to be scientific because they require complicated calculations based on the position of the stars and planets at the time of a person's birth, but the predictions astrologers make seldom come true and are based on ideas inconsistent with the facts of astronomy, a true science.

You might say, "Why worry about belief in astrology—it's just a harmless pastime." But what if the president of the United States, the most powerful person on the planet, regularly consulted an astrologer to help make important decisions? According to one of his top advisers, the late President Ronald Reagan and his wife Nancy regularly consulted with their personal astrologer to help decide when to make appearances and important speeches, when to have meetings with world leaders, and even what to discuss at those meetings (Regan, 1988). The Reagans' interest in astrology dated back at least to his time as governor of California. He is reported to have chosen to be sworn in for his first term as governor just after midnight because an astrological reading said it was a propitious time (Donaldson, 1988).

The failure to think critically is related to other mistaken ideas, such as superstitions, conspiracy theories, and urban legends, that are commonly found in everyday thinking and experience. Another example is belief in paranormal claims such as channeling, the new-age method of contacting spirits. Channeling, like its predecessor spiritualism, is a lucrative business, netting millions of dollars per year for those who serve as channels to communicate with the dead (Kyle, 1995). The confessions of a lifelong medium, M. Lamar Keene, who was on the board of directors of the Universalist Spiritualist Association (the largest of the American spiritualist groups), reveal the extent of fraud involved in such groups. Keene described how a medium with sexual designs on a female client approached her with the promise of special spirit ministrations, and then led her away to a darkened room where he had sexual intercourse with her. Thrilled by the experience, the gullible woman rushed to tell her husband how the spirits had chosen her for this special experience (Leahey & Leahey, 1983).

Examples like this suggest the need for CT, but what does the research tell us? Considerable research evidence suggests that students' thinking skills are not adequate to meet the challenges they face. Many educational reports, including the National Assessment of Educational Progress (NAEP), referred to as the "Nation's Report Card," have argued that the American educational system is failing to teach many of its students how to think effectively and that many students cannot solve scientific problems (National Center for Educational Statistics, 2003, 2009). Langer and Applebee (1987) found that students often have difficulty with persuasive and analytic writing, two kinds of writing that require CT.
Using developmental tests of reasoning, McKinnon and Renner (1971) found that only one-quarter of the first-year college students they tested showed the ability to reason logically and abstractly. Although a college education seems to improve CT somewhat, this improvement, while statistically significant, is not substantial (Keeley, Browne, & Kreutzer, 1982; Pascarella & Terenzini, 1991). Moreover, college education may not improve CT as much as it did in previous years (Pascarella & Terenzini, 2005). Similarly, Perkins (1985) found that schooling had some impact on students' abilities to reason about everyday questions, but not as much as we would hope for if students had become proficient at CT. Let's examine what it takes to improve thinking.

Are scientists immune to confirmation bias? No, but they have strategies for countering it, such as the peer review of research. When a scientist submits his or her research for publication, an editor sends the manuscript to experts on the question (the scientist's peers), who evaluate the quality of the research and look for problems and ways the researcher's conclusions could be false. The peer-review process is not foolproof, however, and can itself be subject to confirmation bias. Mahoney (1977) asked scientific journal reviewers to evaluate manuscript submissions of studies that were identical except for their results. The reviewers gave higher ratings to the manuscripts with results supporting their own favored theories than to manuscripts with results that challenged their favored views. This, of course, is problematic. Fortunately, peer review still works because others in the scientific community may find fault with a study or be unable to replicate its findings.

Probability is another tool scientists use to decide whether relationships are real or illusory. Probability is "the likelihood that a particular event or relation will occur" (Vogt, 1993, p. 178). For instance, to determine how likely it is that a group rated as selfish and a group rated as unselfish will engage in helping behavior, a researcher conducts statistical analyses on some measure of helping behavior. This involves using probability to estimate the likelihood of obtaining some difference between the two groups simply by chance. If the researcher finds a very low probability that the observed difference between the selfish and unselfish groups was due simply to chance (e.g., if the difference would occur by chance fewer than 5 times out of 100), then the researcher concludes that a real difference between the two groups likely exists. The researcher declares this difference to be statistically significant, or just significant. This significant difference suggests that a real relationship exists between ratings of people's selfishness and their willingness to help, supporting the hypothesis that unselfish people help more. It also reduces the likelihood that the observed difference between the groups was merely a random one observed by chance (as in an illusory correlation).
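To make the logic of a significance test concrete, here is a minimal Python sketch of the kind of comparison described above. The helping scores and group sizes are invented for illustration, and the sketch assumes SciPy is available; an independent-samples t-test is one common way to estimate the probability that a difference this large would arise by chance.

```python
# A minimal sketch of a significance test, using invented helping scores.
from scipy import stats

# Hypothetical number of helping acts observed per participant.
unselfish_group = [7, 9, 6, 8, 10, 7, 9, 8]
selfish_group = [4, 6, 5, 3, 5, 6, 4, 5]

# The t-test estimates the probability (p-value) of a difference this
# large between group means if chance alone were operating.
t_stat, p_value = stats.ttest_ind(unselfish_group, selfish_group)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# By convention, p < .05 (fewer than 5 times out of 100 by chance)
# is declared statistically significant.
print("Statistically significant" if p_value < 0.05 else "Not significant")
```

Here a very small p-value would lead the researcher to conclude that the difference between the groups is unlikely to be due to chance alone.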

peer review of research The process in which research submitted for publication is reviewed by experts on the subject, who evaluate the quality of the research and look for problems and ways the researcher's conclusions could be false.

probability The likelihood that some event of interest will occur, ranging from 0.00 (not at all) to 1.00 (certain to occur).

statistically significant The minimal criterion that a result would occur by chance fewer than 5 times out of 100.

If your response to the first "What Do You Think?" question was "Yes, people are basically selfish," many people would disagree with you. How do you know you are right? Although we may not want to admit it, we are all sometimes wrong. People often have differing views on fundamental questions about human nature and behavior. These differences often depend on a person's perspective in examining the question, which involves making assumptions that guide the approach taken.

To understand or see an issue from someone else's vantage point, we must engage in perspective taking, an active process of trying to look at a question from another person's viewpoint, making the same assumptions he or she has made. This practice can help us avoid missing important information in an argument and can help clarify our own assumptions and perspective. Perspective taking is important when we are seeking and evaluating evidence because critical thinking (CT) involves evaluating evidence from all sides of an argument—not just the evidence that favors our own view. Unfortunately, people often show myside bias, a thinking error similar to confirmation bias in which thinking is one-sided, neglecting the information and evidence that support the other side of a question (Baron, 1995; Stanovich, West, & Toplak, 2013). Confirmation bias often goes one step further, however, in that information for the other side may be disparaged or given less weight; in that case, the information is considered, but inadequately. Nevertheless, both myside bias and confirmation bias involve a failure, or at least a reluctance, to adopt the perspective of another person and consider other views. To avoid myside bias, we must examine the evidence on all sides of the question, including both the evidence that supports altruism and the evidence that supports egoism.

A useful strategy for broadening one's understanding and perspective on a complex question is to consider what fields other than one's own might have to offer. The differing perspectives and approaches used in these fields provide information about the various aspects of a complex question and about systems that operate at different levels. For example, the field of philosophy has a lot to say about the second "What Do You Think?" question: "In which ways is selfishness beneficial? In which ways is it harmful? Are you always helpful?" This is a "value" question, posed from an ethical perspective. The usual position on this question is that helping is the right thing to do, and a person has an ethical obligation to help others in need. Many philosophers and supporters of religion have extolled the virtues of altruism, epitomized in the Christian story of the good Samaritan who stops on the road to selflessly help the victim of a robbery—one who is not even from his own group. Other philosophers have argued that enlightened self-interest is the best approach. For instance, Ayn Rand, the Russian-born philosopher and author of the novels The Fountainhead and Atlas Shrugged, argued that selfishness is a virtue. According to Rand, when people are altruistic, they sacrifice their own individuality and achieve less than when they promote their own self-interest. Rand further argued that society's emphasis on altruism saps the initiative of individuals, reduces their self-esteem, and generally works against their welfare. She proposed that people should pursue their own interests and goals while not obstructing the interests of others (Rand & Branden, 1964).
Philosophers propose ethical principles about how people "should" behave and make arguments about what they "should" value. In contrast, although many psychologists assume that helping is a good thing, they are more concerned with how people actually behave (Batson, 2011). Psychologists are more likely to observe the extent to which people are influenced by and follow normative rules for helping. Indeed, research shows that people are influenced by the social norm that one should help someone in need, regardless of whether one will receive something in return (Berkowitz, 1972). From a psychological perspective, then, the question becomes "What are the conditions that produce or motivate helping behavior or the lack of it?"

The field of psychology has developed into specialized subfields, often influenced by and borrowing from biology and other fields. Each subfield—such as social psychology, biopsychology, and developmental psychology—offers a different perspective and approach to the complex question of whether people are basically selfish. For example, social psychologists might investigate whether the man lying on the ground in Figure 10.1 would be more likely to be helped if several people saw him than if just one person encountered him. Social psychologists often seek to understand the motives and personality traits of individuals who help others. Taking a different perspective, developmental psychologists might examine whether people of different ages would be more likely to help, such as very young children versus adults.

perspective taking An active process of trying to look at a question from another person's viewpoint.

myside bias A thinking error similar to confirmation bias in which thinking is one-sided and neglectful of the information and evidence that support the other side of a question.

The problem is that, for instance, males and females might already differ from each other at the beginning of our study on a number of variables related to their willingness to help. Because we have merely selected males and females, and cannot randomly assign participants to be male or female in a quasi-experiment, we are unable to control differences in our subjects that could be controlled through random assignment in a true experiment. Moreover, without truly manipulating an independent variable, we cannot establish time order, so we are not able to draw causal inferences from quasi-experiments as we can with true experiments. Like true experiments, quasi-experiments can sometimes allow for control of extraneous variables—as when we test our groups under similar conditions in the laboratory—but possible preexisting differences in participants related to the sex variable remain uncontrolled.

In summary, the manipulation of independent variables allows the experimenter to meet the criteria of covariation and time order, and the control of extraneous variables allows for meeting the criterion of the elimination of plausible alternatives. At least with regard to making a causal inference, therefore, the experimental method provides better-quality data than the case study, correlational study, or quasi-experimental study. Table 4.3 summarizes the strengths and weaknesses of the various research designs we have discussed, with implications for the quality of data and evidence each provides.

Nevertheless, the conclusions based on scientific research are only as good as the quality of the evidence on which they are based. If a scientist conducted a study but did not really measure what was intended, or made errors in measurement, then conclusions based on those data could be erroneous. Fortunately, science is self-correcting, and the erroneous conclusion of the first scientist could be discovered by other scientists seeking to replicate and make sense of the observations of the first research study.

Table 4.3 also implies that we should be particularly persuaded when high-quality scientific research studies are used as evidence. When using scientific research as evidence in arguments, authors often cite the source author(s) and year of publication and mention the kind of research study that was done. Table 4.3 makes it clear that results of certain types of studies, such as true experiments, generally provide stronger support for a claim than do other types, such as case studies or other nonexperimental designs. Scientific research is also used to support claims when a scientific authority or expert is cited, often someone who has written a literature review summarizing the results of several studies supporting some hypothesis or theory. These two citation methods demonstrate good practices in using scientific research as evidence, but we often hear arguments in the media and everyday life that do not specifically cite the study or research being discussed. For example, news reports will say, "Research shows . . ." or "Studies show . . ." without documenting the scientific research evidence being referred to. Although such reports make a basic argument, the failure to cite specific research weakens it. It also discourages critical thinking (CT) because it is harder to examine the quality of the evidence when no source is cited.
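To see concretely why random assignment matters, here is a minimal simulation sketch in Python. The participants, the preexisting "empathy" score, and the built-in difference between the preexisting groups are all invented for illustration; the point is that randomly assigned groups tend to start out balanced on preexisting variables, while preexisting (selected) groups, as in a quasi-experiment, need not.

```python
# A minimal sketch, with invented data, contrasting preexisting groups
# (quasi-experiment) with random assignment (true experiment).
import random

random.seed(1)

# Invented participants: group "A" is built to average slightly higher
# on a preexisting empathy score, as real preexisting groups often do.
labels = ["A"] * 100 + ["B"] * 100
participants = [
    {"label": lab, "empathy": random.gauss(53 if lab == "A" else 50, 10)}
    for lab in labels
]

def mean_empathy(people):
    return sum(p["empathy"] for p in people) / len(people)

# Quasi-experiment: compare the preexisting groups as they come.
a = [p for p in participants if p["label"] == "A"]
b = [p for p in participants if p["label"] == "B"]
print(f"Preexisting groups: {mean_empathy(a):.1f} vs {mean_empathy(b):.1f}")

# True experiment: ignore the labels and randomly assign everyone.
random.shuffle(participants)
r1, r2 = participants[:100], participants[100:]
print(f"Random assignment:  {mean_empathy(r1):.1f} vs {mean_empathy(r2):.1f}")
# Random assignment tends to equalize preexisting differences, so a
# later difference between conditions can be attributed to the
# manipulated variable rather than to how the groups started out.
```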

quasi-experiments A type of experiment in which there is no true manipulation of an independent variable through random assignment to treatment conditions.

An Extended Inductive Argument on Whether Instruction Can Improve Critical Thinking

1. A large study of a thinking skills program for schools in Venezuela showed that students improved their abilities to present oral arguments and to answer open-ended essay questions (Herrnstein, Nickerson, de Sanchez, & Swets, 1986).
2. Children instructed in a program called Philosophy for Children have shown improvement in their reasoning (Institute for the Advancement of Philosophy for Children, n.d.).
3. After instruction in a thinking-skills program, students self-reported that their thinking improved (Dansereau et al., 1979).
4. Some studies found that specific instructional variables thought to be related to CT have produced better test performance on the Watson-Glaser Critical Thinking Appraisal (Bailey, 1979; Suksringarm, 1976) and on the Cornell Test of Critical Thinking (Nieto & Saiz, 2008; Solon, 2007).
5. Studies of specific instructional variables thought to be related to CT have shown no increase on the Watson-Glaser test (Beckman, 1956; Coscarelli & Schwen, 1979).
6. After instruction in thinking skills, students have shown significant gains on tests of cognitive growth and development (Fox, Marsh, & Crandall, 1983).
7. Specific programs designed to teach CT to college students have produced significant gains on tests of CT (Mentkowski & Strait, 1983; Tomlinson-Keasey & Eisert, 1977).
8. Specific programs for teaching CT in college showed that students made no significant gains on CT tests (Tomlinson-Keasey, Williams, & Eisert, 1977).
9. Studies of high school students' reasoning skills (Marin & Halpern, 2010) and of college students who received explicit CT instruction (e.g., Bensley, Crowe, Bernhardt, Buckner, & Allman, 2010; Bensley & Haynes, 1995; Bensley & Spero, 2014; Nieto & Saiz, 2008; Solon, 2007) have found that students who received explicit instruction showed more improvement in their CT than did students who did not receive explicit CT instruction.
10. Authors of literature reviews on teaching CT have concluded that explicit CT instruction is effective (e.g., Bensley, 2011; Halpern, 1993, 1998); explicit instruction shows the largest effect size (Abrami et al., 2008).

Summary of Thinking and Argumentation Errors

Asserting the consequent. A fallacy in conditional deductive reasoning that typically occurs when the "then" part is asserted in the second premise. Correction: Assert the antecedent, or the "if" part of the major premise, in specific terms in the second premise. (A short sketch of this fallacy appears after this table.)

Unwarranted assumption. Taking for granted that a premise or statement is true when it has not been justified or supported by a reason. Correction: Find assumptions and make them explicit so that they can be examined and tested; make sure all are supported.

Shifting the burden of proof. Shifting responsibility to the other side to show that your position is wrong when you have not yet provided sufficient evidence to support your side. Correction: Assume responsibility for providing support for your unsupported claim to avoid making an argument from ignorance.

Circular reasoning. Reasoning to a conclusion that is simply a restatement of the information in the premises. Correction: Offer reasons that actually support and do not simply restate the conclusion.
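As an illustration of why asserting the consequent fails, here is a small Python sketch (hypothetical, not from the text) that brute-forces the truth table for a conditional. The premises "If P then Q" and "Q" leave open the case where P is false, so concluding "P" is invalid, whereas modus ponens (asserting the antecedent) is valid.

```python
# Brute-force check of two conditional argument forms over all truth values.
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Modus ponens: premises "P -> Q" and "P"; does "Q" follow in every case?
valid_mp = all(q for p, q in product([True, False], repeat=2)
               if implies(p, q) and p)

# Asserting the consequent: premises "P -> Q" and "Q"; does "P" follow?
valid_ac = all(p for p, q in product([True, False], repeat=2)
               if implies(p, q) and q)

print(f"Modus ponens valid: {valid_mp}")              # True
print(f"Asserting the consequent valid: {valid_ac}")  # False: P false, Q true is a counterexample
```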

INDUCTIVE REASONING IN SCIENCE

In both everyday arguments and scientific reasoning, induction leads to uncertain conclusions, but induction in science has the important advantage of being able to manage the limits of inductive reasoning. Unlike the personal experience and informal observation commonly used in everyday reasoning, scientists make careful, systematic observations. Scientists are also more careful in their reasoning about such observations.

Why do people believe in ESP if the scientific evidence fails to support it? One reason is personal experience. Glickson (1990) found that people who have had personal experiences involving the paranormal believed more strongly in ESP than those who did not have such experiences. Another reason is the media's uncritical, even favorable coverage of so-called psychic events, as in the case of the Lee Fried-Duke story. This unbalanced coverage can be problematic because some people accept media sources as credible and authoritative. Television and movies can make these psychic events seem even more real and authentic (Hill, 2011).

As we have seen with the Lee Fried example, plausibility provides a good initial standard for evaluating claims, such as the likelihood that someone really has precognition. But this question can be addressed better through more direct scientific research that investigates claims of ESP. Recently, Bem (2011) provided experimental evidence of precognition, using a new backward prediction technique—but the quality of the studies that produced these findings has been severely criticized (Alcock, 2011), and Bem's original findings were not replicated in a subsequent study (Galak, LeBoeuf, Nelson, & Simmons, 2012). Also, some parapsychologists have used fakery and fraudulent methods in studies that support ESP's existence. In general, better-quality research has not supported the existence of ESP (Alcock, 2011; Milton & Wiseman, 1999).
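One way to see why a single striking result, such as Bem's, demands replication is a small simulation, sketched below with invented numbers and assuming SciPy is available: even when no effect exists at all, roughly 5 percent of experiments reach the conventional p < .05 threshold by chance alone.

```python
# A minimal sketch of chance "significant" findings when no effect exists.
import random
from scipy import stats

random.seed(7)

false_positives = 0
n_experiments = 1000
for _ in range(n_experiments):
    # Two groups drawn from the SAME distribution: there is no real effect.
    group1 = [random.gauss(0, 1) for _ in range(30)]
    group2 = [random.gauss(0, 1) for _ in range(30)]
    _, p = stats.ttest_ind(group1, group2)
    if p < 0.05:
        false_positives += 1

# About 5% of these null experiments come out "significant" by chance,
# which is why replication matters before accepting a surprising claim.
print(f"False positives: {false_positives}/{n_experiments}")
```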

Common Essay Question Types and Related Prompts

Let's illustrate how the prompts in Table 14.3 might signal different types of writing responses for questions about dyslexia. Dyslexia is characterized as a problem in learning how to read fluently. Suppose a question asked, "What is the definition of dyslexia?" The what prompt and the word definition signal that this is a knowledge question that could be mostly answered by listing the symptoms of dyslexia—such as slow reading in a person with no other deficits in basic cognitive skills and intelligence, but often with other problems, such as poorer-than-expected spelling, difficulty sounding out the sounds that make up words, and, to a lesser extent, reversal of letters.

Now suppose a different question asked, "Compare and contrast dyslexia with expressive language disorder." This analysis question requests a breakdown of the similarities and differences between the two disorders. Suppose another question asked, "Decide whether reversing letters is the principal sign of dyslexia." To adequately answer this evaluation question, you would have to do much more than simply recall and understand the facts about dyslexia and other communication problems. You would need to evaluate the evidence that agrees with, as well as the evidence that disagrees with, the claim to decide which position is better supported—that is, examine the arguments and counterarguments. Other evaluation prompts such as justify, defend, or argue for imply the goal of marshaling the evidence that supports a particular side of the argument.

Knowledge. Asks what you remember pertaining to the question. Prompts: List, Describe, Name, Define, Identify, Who, What, When.

Comprehension. Asks you to show you understand the relevant terms and concepts you remember, using your own words. Prompts: Paraphrase, Summarize, Explain, Review, Discuss, Interpret, How, Why.

Application. Use your knowledge to go beyond simple recall and understanding to solve a problem or make something. Prompts: Apply, Construct, Simulate, Employ, Predict, Show how.

Analysis. Break something down into its component parts so that it can be understood. Prompts: Classify, Distinguish, Differentiate, Compare, Contrast, Categorize, Break down.

Synthesis. Bring together different knowledge or concepts in a unified response. Prompts: Combine, Relate, Put together, Integrate.

Evaluation. Judge whether something is good or bad, true or false, or reaches some criterion. Prompts: Judge, Argue, Assess, Appraise, Decide, Defend, Debate, Evaluate, Choose, Justify.

sweeping generalization A conclusion that is too broad or goes beyond an appropriate conclusion based on the evidence presented.

Table 3.3 summarizes the various thinking errors discussed thus far. A final note of caution related to the limits of induction is that the scientific research literature shows a "publication bias" concerning which studies get published. Scientists look for real relationships between variables, not for effects that do not exist. Consequently, failures to show an effect often do not get published, and literature reviews will be biased to contain studies in which an effect was found. This could raise doubt about the existence of negative evidence, or evidence not supporting an effect, such as evidence not supporting the conclusion that CT can be taught.
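A minimal simulation sketch, with invented numbers and assuming SciPy is available, shows how this publication bias distorts a literature: if only "significant" studies are published, the published record overestimates the true effect.

```python
# A minimal sketch of publication bias using simulated studies.
import random
from scipy import stats

random.seed(42)

true_effect = 0.2  # a small real difference between group means
published, all_effects = [], []

for _ in range(500):
    control = [random.gauss(0.0, 1.0) for _ in range(25)]
    treatment = [random.gauss(true_effect, 1.0) for _ in range(25)]
    observed = sum(treatment) / 25 - sum(control) / 25
    _, p = stats.ttest_ind(treatment, control)
    all_effects.append(observed)
    if p < 0.05:  # suppose only significant results get published
        published.append(observed)

print(f"True effect: {true_effect}")
print(f"Mean effect across all studies: {sum(all_effects) / len(all_effects):.2f}")
# The published subset is inflated because small studies reach
# significance mostly when they happen to overestimate the effect.
print(f"Mean effect in published studies: {sum(published) / len(published):.2f}")
```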

Naive realism. Believing that what you perceive is really what is there and that what you experience is necessarily very accurate. Correction: Look for objective verification of what you perceive, such as from other observers or from data gathered with scientific instrumentation.

Hasty generalization. Drawing a general conclusion before sufficiently considering all the relevant evidence. Correction: Look for all of the relevant evidence and determine whether the samples are representative and adequate.

Sweeping generalization. Drawing a conclusion that is too broad or that goes well beyond what the evidence implies; sometimes this involves overinterpreting a study's findings. Correction: Look at the range within which most of the cases and evidence are consistent with the conclusion, and do not extend the conclusion beyond those cases to a different situation or group.

Red herring fallacy. Deliberately sidetracking an argument away from the issue at hand by presenting irrelevant information. Correction: Follow the line of reasoning; when irrelevant information is presented, point this out and ask the person to return to the issue.

web-based Information that is based on the Internet, such as databases delivered via the Internet and updated and maintained by professional scholars and librarians; it tends to be more reliable and of higher quality than web-placed information.

web-placed Information placed on various websites for differing purposes, typically by individuals, groups, or organizations to promote their own ideas, products, and/or services.

Other web resources are web-placed—that is, individuals and groups place information at various websites for differing purposes. These resources are maintained by companies, social organizations, individuals, academic institutions, and other parties, who often post the information to promote their own ideas, products, and services. Because web-placed sites tend to vary more in quality and reliability than academic databases, they place greater demands on users' CT skills. Using a web-based site like PsycINFO does not eliminate the need to use CT to assess the studies found at that site. In fact, even some indexed research is of lower quality than other research. However, accessing web-placed information requires particular care in your analysis because the source may not even use scientific research evidence to support its claims. Specifically, the source may rely on lower-quality evidence such as commonsense belief, statements of authority, anecdotes, and testimonials. How should you evaluate the information found at websites that vary so broadly in terms of their quality and accuracy?

thinking errors Mistakes in judgment, decision making, and reasoning, including cognitive errors, logical fallacies, and incorrect use of rules for estimating probabilities.

psychological misconceptions Firmly held, commonsense beliefs about behavior and mental processes that are not supported by psychological research.

Notice that the first fundamental question in Table 13.3 asks whether the person is showing signs of psychosis. Psychosis refers to a serious thought disturbance suggesting disconnection from the conventional experience of reality. It typically manifests in the form of hallucinations and delusions. The hallucinations experienced by people with schizophrenia often involve hearing voices that are not there. Delusions are fixed, false beliefs that persist even though they are readily contradicted by evidence, such as the false belief that one is being observed, followed, or persecuted by some malevolent entity. All the disorders within the schizophrenia spectrum and psychotic disorders group list psychotic features as a main criterion for diagnosis.

Notice also that criteria 4 and 5 in Table 13.3 help rule out depressive and communication disorders, respectively. Sometimes a client fits the criteria for more than one disorder, making it difficult for a clinician to rule out one disorder or the other and suggesting that diagnosis of more than one disorder is indicated—a situation called comorbidity. For example, the most common secondary diagnosis to major depression is substance abuse disorder. Schizophrenia and depression often occur together as well (Buckley, Miller, Lehrer, & Castle, 2009). The DSM-5 contains comorbidity information for most disorders (APA, 2013).

To further appreciate the complexity of diagnosis and to illustrate the two major diagnostic categories listed in Tables 13.2 and 13.3, let's examine an actual case of depression complicated by other symptoms. Andrea Yates (Figure 13.3) is notorious for having murdered all five of her children in 2001. Yates had previously attempted suicide more than once and was repeatedly hospitalized for depression beginning in 1999 (O'Malley, 2004). She was diagnosed with major depression with severe psychotic features that same year, ruling out schizophrenia. Following the birth of her fifth child at the end of November 2000 and the death of her father soon thereafter, she was hospitalized again and diagnosed with postpartum depression and recurrent major depression at the end of March 2001. In April 2001, Yates's new psychiatrist, Mohammad Saeed, diagnosed her with major depression with psychotic features. The next month, Yates was hospitalized again, diagnosed with severe postpartum depression, and given Haldol, a drug to relieve her psychotic symptoms. In June 2001, Saeed ordered that she discontinue Haldol and did not prescribe any other antipsychotic drug as a replacement, but he kept her on Remeron, an antidepressant. On June 20, 2001, Yates drowned all five of her children, one by one, in the bathtub of her Houston, Texas, home.

Yates was charged with first-degree murder, and her defense team decided to plead "not guilty by reason of insanity." In Texas, as in many other states, a person is considered not guilty by reason of insanity if that person can demonstrate he or she did not know the difference between right and wrong at the time of the crime. Appointed as an expert witness for the defense, forensic psychiatrist Dr. Phillip Resnick interviewed Yates in jail three weeks after the murders. During the interview, Yates said she believed that Satan was inside of her and that she heard him tell her in a growling voice to kill her son Noah (O'Malley, 2004). She believed she was failing as a mother, that her children were not developing the way they should be in an academic and righteous sense, and that "maybe in their innocent years God would take them up."
In other words, Yates believed that she could save her children from hell by killing them before they strayed further from the righteous path and that God would allow them into heaven (O'Malley, 2004, p. 152). She also believed that after killing her children, she would be executed and then Satan would die with her.

The expert psychiatrists consulting on her trial agreed that Yates had suffered a psychotic episode, but they disagreed about her specific diagnosis. Although psychosis is often associated with schizophrenia, it is also present in approximately 20% of depression cases. Lucy Puryear, another witness for the defense, thought Yates suffered from schizophrenia, while Resnick thought she had schizoaffective disorder, a mental disorder included in the same category as schizophrenia, in which the individual suffers from severe depression or mania at the same time as psychosis. Yet Yates's psychiatrist had taken her off antipsychotics and prescribed Remeron, a powerful antidepressant, two weeks before the murders. Another view was that Yates had bipolar disorder. Although she was sometimes catatonic, hardly speaking or moving at all, at other times she showed great surges of energy compared with her depressed state. According to psychiatrist Deborah Sichel, Yates's competitive swimming in high school might be one example of her behavior while in a manic state—her husband said Yates once even swam around an entire island (O'Malley, 2004). If she had been misdiagnosed and instead had psychosis with bipolar disorder, then discontinuing her antipsychotic medication could have led to greater disconnection from reality, and prescribing Remeron could have pushed her into a manic phase.

psychosis A serious thought disturbance indicating a disconnection from conventional reality. comorbidity A situation in which a client fits the criteria for more than one disorder, suggesting diagnosis of more than one disorder in the same person.

Strengths and Weaknesses of Nonscientific Sources and Kinds of Evidence

Personal experience. Reports of one's own experience (first person), often in the form of testimonials and introspective self-reports. Strengths: tells what a person may have been feeling, experiencing, or aware of at the time; is often compelling, vivid, and easily identified with. Weaknesses: is subjective, often biased, and prone to perceptual, memory, and other cognitive errors; may be unreliable because people are often unaware of the real reasons for their behaviors and experiences.

Anecdote. Story or example, often biographical, used to support a claim (third person). Strengths: can vividly illustrate an ability, trait, behavior, or situation; provides a real-world example. Weaknesses: is not based on careful, systematic observation; may be unique, not repeatable, and not generalizable to many people.

Commonsense belief. Informal beliefs and folk theories commonly assumed to be true that are used as evidence. Strengths: is a view shared by many, not just a few people; is familiar and appeals to everyday experience. Weaknesses: is not based on careful, systematic observation; may be biased by cultural and social influences; often goes untested.

Statement of authority. Statement made by a person or group assumed to have special knowledge or expertise. Strengths: can provide good support when the authority has relevant knowledge or expertise; is convenient because acquiring one's own knowledge and expertise takes a lot of time. Weaknesses: is misleading when a presumed authority lacks relevant knowledge and expertise or only pretends to have it; may be biased by personal experience and beliefs.

Descriptions of Clinical Thinking Errors and Recommendations for Correcting Them

Reification. Assuming that a hypothetical construct, such as a mental disorder, is a real, concrete entity. Correction: realize that a mental disorder is an operationally defined and created construct.

Mistaking diagnosis for a cause. Thinking that a diagnosis is a cause of behavior or a mental problem. Correction: realize that a diagnosis is a label for a classification.

Forer or Barnum effect. Assuming that a general description of a person is predictive and diagnostic. Correction: use more detailed, specific information about signs, symptoms, and behaviors.

Rapid diagnosis (hasty generalization). Quickly diagnosing a mental disorder without considering relevant criteria and all relevant signs and symptoms. Correction: conduct a more thorough assessment and a differential diagnosis.

Excessive backward reasoning. Relying too much on preconceptions, expectations, and stereotypes to guide reasoning. Correction: engage in more forward reasoning, paying more attention to the data.

Behavioral confirmation (self-fulfilling prophecy). Eliciting certain behaviors from a client that confirm the clinician's expectations about the client. Correction: remain open to other interpretations of a client's behavior and use objective measures to assess the behavior.

You may have heard the expression, "Correlation does not imply causation." Inferring that one of two simply correlated variables is a cause of the other is a thinking error called confusing correlation with causation. A good example, discussed in Chapter 2, is the misconception in the 1980s that improving self-esteem would improve academic performance. The many attempts to improve students' academic performance by raising their self-esteem were largely unsuccessful. Although self-esteem is modestly correlated with academic achievement, it does not cause it (Baumeister, Campbell, Krueger, & Vohs, 2003). Those who perform better in school are simply more likely to feel better about themselves.

Another misconception related to confusing correlation with causation is the popular belief that victims of sexual abuse will necessarily develop personality problems in adulthood and will become abusers themselves. Although sexual abuse is indeed too common and can be very harmful, the research generally does not support the claim that it causes people to develop a specific set of personality issues, such as low self-confidence and problems with intimacy and relationships, that victims carry for the rest of their lives (Lilienfeld, Lynn, Ruscio, & Beyerstein, 2010). Rather, the research shows that abused people are generally resilient and able to adjust to the early trauma. In a meta-analysis of many studies of college students, Rind, Bauserman, and Tromovitch (1998) found that although the students' experience of sexual abuse was related to some psychological problems later in life, the correlations were low. Moreover, Salter and colleagues (2003) found that less than 12% of men who had been sexually abused as children later became abusers themselves. Compared with the approximately 5% of men who were not abused but who later committed sexual abuse, the frequency for sexual abuse victims is certainly higher; but it also means that about 88% of sexual abuse victims do not become abusers themselves. Salter and colleagues also found that other risk factors were often present in the abusers as children, such as a lack of supervision and having witnessed serious violence among family members, which may have caused the later abusive behavior. This further suggests that although a correlation is present, causation should not be inferred.

Another kind of thinking error about causation, called post hoc reasoning, occurs when people incorrectly assume that something that merely happened to occur before an event was the actual cause of the event. The English translation of post hoc is "after this"; it comes from the longer Latin expression post hoc, ergo propter hoc, which means "after this, therefore because of this." Both expressions refer to making an unwarranted assumption about time order, specifically that an event was caused by whatever happened to precede it. To illustrate, suppose you begin taking vitamins and later notice that your concentration is better than before you started taking them. From this, you may mistakenly conclude that the vitamins caused the improvement in your concentration. Looking back, it may seem that taking the vitamins came first and led to better concentration, but the two events may be simply coincidental, and the criterion of time order has not been established. Nor have you met the other criterion of eliminating plausible, alternative explanations.
You may have expected the vitamins to improve your well-being in general (a placebo effect, discussed in Chapter 5), and you likely did not establish a controlled comparison against which to measure any effects. Or what if, during this time, you also exercised more or got more sleep? These could be the actual causes of the perceived change in your concentration. What is needed is a method that allows us to manipulate one variable so that it clearly occurs before the other variable while we also control other potential causes.
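The logic of such a method, the controlled experiment with random assignment, can be sketched in a few lines of code. The following Python simulation is not from the text; the participant numbers and score values are invented for illustration. It shows why random assignment plus a placebo control lets us separate a treatment's real effect from expectation, coincidence, and other confounds.

```python
import random

random.seed(42)  # make the illustration repeatable

# Hypothetical data: 100 simulated participants with baseline concentration
# scores. Random assignment spreads individual differences (sleep, exercise,
# motivation) evenly across the two groups, so the groups start out equivalent.
participants = [random.gauss(50, 10) for _ in range(100)]
random.shuffle(participants)
vitamin_group, placebo_group = participants[:50], participants[50:]

# Both groups believe they received a treatment, so any expectation (placebo)
# boost is the same for both; here we assume the vitamin itself does nothing.
vitamin_scores = [score + 2 for score in vitamin_group]
placebo_scores = [score + 2 for score in placebo_group]

def mean(scores):
    return sum(scores) / len(scores)

print(f"Vitamin group mean concentration: {mean(vitamin_scores):.1f}")
print(f"Placebo group mean concentration: {mean(placebo_scores):.1f}")
# The two means differ only by chance. If the vitamin group's mean were
# reliably higher, this design would let us attribute the difference to the
# vitamin itself, because the manipulation came first and other potential
# causes were controlled or equated.
```

In this sketch the two means come out nearly identical because the vitamin's true effect was set to zero; in a real experiment, a clear group difference under these controls would satisfy the criteria of covariation, time order, and elimination of alternative explanations.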

Summary of Thinking Errors Highlighted

Illusory correlation. Perceiving a correlation or association between two things when no correlation or association exists. Correction: pay attention to cells B and C in a fourfold table, because people tend to focus mostly on cell A.

Confirmation bias. The tendency to attend to, seek, and give more weight to evidence that supports one's favored position rather than evidence that could disconfirm it. Correction: consider the opposite or an alternate position—look for evidence that could disconfirm one's favored position.

Post hoc ("after this, therefore because of this") reasoning. Concluding after an event occurs that something that happened before it was the actual cause of the event. Correction: don't assume that some action or situation that preceded another event was the actual cause of it; conduct a well-controlled experiment to see whether manipulating the first variable actually causes changes in the second.

Confusing correlation with causation. Believing that a variable that is simply covarying or correlated with another variable is its cause; or, less commonly, failing to infer causation from results of a well-controlled true experiment. Correction: make sure the action or situation thought to be the cause of an event actually occurred first; look for other possible causes and see if they can be eliminated; conduct a true experiment and don't be fooled into thinking that correlations or quasi-experiments can show causation.

An individual's position on the M-B question can greatly affect his or her approach to understanding the world. M-B dualism can even be a dangerous position, as evidenced by the mass suicide of 39 members of the Heaven's Gate cult who, as they put it, "exited the vehicle" (i.e., killed the body) so that their spirits could be free to rendezvous with aliens in a spaceship approaching Earth (see Chapter 4). In this extreme dualistic view, the body is merely a conveyance for carrying the spirit around. If the spirit is assumed to be separable and the most important part of a person, then sacrificing the body does not seem to be much of a sacrifice at all.

Your own stance on the M-B question can influence which theory you endorse and which approach you take to psychological questions. For example, many scientists studying the brain who are physicalists may be less inclined to examine subjective experience and more inclined to focus on the brain. If they neglect subjective mental states, their explanation of the mind and brain may be incomplete.

Sometimes lawyers take extreme materialistic positions in defense of their clients, arguing as an excuse for a criminal's behavior that "his brain made him do it." As evidence, they may present images of damage in the client's brain or show a scan of abnormal activity in a brain area, comparing it with the scan of a normal brain (Thornton, 2011). This extreme materialistic approach is wrong-headed, in that it assumes that we are only our brains. The mind and the brain are closely related—they are part of the same person. Saying that a person's "brain made him do it" ignores the interconnected nature of the brain and mental processes that emerge as the brain operates in a complex environment—one in which some environmental events are simply random. Locating a brain area that becomes activated when a person behaves a certain way may seem to demonstrate the cause of the behavior, but this assumption is unwarranted. Brain-scanning technology is not yet able to predict who will commit a crime, and events often have multiple causes.

In contrast, radical behaviorism is a materialistic approach assuming that mental processes are unimportant. John Watson, the founder of behaviorism, argued that psychologists should not study mental processes because the mind is not directly observable. Instead, psychologists should study the relationships between stimuli and responses, which are observable. One consequence of the dominance of behaviorism in U.S. psychology was that many psychologists were discouraged for decades from studying memories, mental imagery, emotion, consciousness, and other mental events.

Finally, the assumptions that some M-B dualists make can prevent the scientific study of the relationship between mind and brain. Suppose someone believes that the mind is nonphysical and the brain is physical; then what kind of observation could show the action of the mind? In psychology, we make observations of external, physical events as indicators of the action of the mind. How can we do this if the mind is nonphysical?

The Out-of-Body Experience

At least initially, the OBE would seem to provide good evidence that the mind can separate from the body. We should note, however, that the term out-of-body experience is neutral (i.e., not definitive) with respect to whether the experiencing self actually leaves the body. This term simply asserts that a person has had the experience or impression of being outside of the body. OBEs do not seem to be associated with any psychological disorder (Tobacyk & Mitchell, 1987), although those who have OBEs tend to experience hallucinations more (Parra, 2009). In fact, OBEs are fairly common in the general population, with estimates of their reported incidence ranging from approximately 10 to 20%, depending on the survey (Rogo, 1984). In a review of the literature, Alvarado (2000) found that, on average, 25% of college students reported having experienced at least one OBE. Thus, the issue is not whether OBEs occur, but whether the mind or consciousness actually leaves the body as it appears to do.

The standard, waking OBE occurs as a spontaneous experience in which consciousness or the experiencing part of a person is perceived as located at a point outside of the physical body. OBEs may also occur as a part of other experiences, such as religious, drug-induced, near-death, meditational, or hypnotically induced experiences, as well as during dreams (Grosso, 1976). The fact that OBEs occur relatively often, to so many different people and under such different conditions, could suggest that they are due to an unusual type of brain functioning.

A famous incident of OBE was reported in 1903 by Mr. S. R. Wilmot, a British man taking an ocean voyage (Blackmore, 1992). In the cabin of their ship en route to New York, Wilmot and his roommate observed Wilmot's wife, who had stayed behind in Liverpool. Later, when reunited with her husband and apparently without prompting, Mrs. Wilmot asked him if he had received a "visit" from her. Mrs. Wilmot was reported to have accurately described the appearance of his cabin, even though she had not actually seen it—that is, unless she had gone on an out-of-body excursion.

OBEs often occur as part of near-death experiences, such as when a person undergoing surgery "flatlines" briefly; people have offered these reports as evidence that the mind actually leaves the body. Dr. Raymond Moody, a physician, conducted interviews with numerous people who described observing their body from above when medical personnel were working on them as they lay unconscious and near death. In many cases, the patients reported entering a dark tunnel as they moved toward death (Moody, 1976).

Some people's willingness to consistently help others without concern for getting something in return may indicate a stable individual difference in personality in those who tend to behave altruistically, compared with those who tend to behave more selfishly. To test this hypothesis, Romer, Gruder, and Lizardo (1986) conducted a study in which they first asked subjects to complete a personality test that classified them as altruistic, receptive-giving, or selfish. Altruistic subjects indicated that typically they were helpful to others with no expectation of something in return; receptive-giving subjects would help others when they got something in return; and selfish subjects wanted help from others but were not interested in giving any help. This initial testing suggested that some people tend to be basically selfish, others are receptive-giving, and still others are altruistic. In a second experimental phase of the same study, Romer and colleagues asked participants if they would be willing to help a graduate student experimenter complete her research project before the end of the semester. Participants in the three personality classifications were randomly assigned to one of two conditions—half of the participants in each group were promised course credit for further participation, while the other half were not promised any reward. Do you think getting a reward affected the groups differently in terms of how much they helped? Figure 10.2 shows the results in what is called a person-by-situation interaction. As shown in the figure, whether subjects volunteered depended on both the trait of the person (altruistic, selfish, or receptive-giving) and on details of the situation (whether they were offered a reward).

The event of interest could be almost anything, but let's say it is getting heads in the toss of a fair coin. The chance of getting heads on a particular fair coin toss is 1 out of 2, because there is only one way to get heads (the event of interest) in the two possible outcomes of a coin toss (heads or tails). Probabilities can be expressed in various ways. We may express the chance of getting heads as 1 out of 2 or as the fraction, ½. In gambling, the probability of getting heads versus tails might be expressed as the odds being 50-50. We more often express probabilities as a decimal proportion—for example, the probability of getting heads is .50 or p = .50. The decimal scale used to express the probability of an outcome ranges from 0.00, interpreted as "The event will not occur," to 1.00, interpreted as "The event is certain to occur." Expressing the probability of getting heads as a percentage, we are 50% certain of getting heads or, stated another way, we expect to get heads 50% of the time. Necessarily, we are 100% - 50% = 50% uncertain that we will get heads.

The probability of getting heads, p = .50, states the overall outcome we expect to obtain over the long run when we repeatedly toss a coin an infinite number of times—that is, in the population of all possible coin tosses. A population is all the possible observations on some variable (e.g., coin tosses), or some behavior or characteristic, that could be made. In real-world situations, a variable in a population often has so many possible values that it is impractical to observe them all. Consequently, we can only estimate the population's characteristics. To infer what the population is like, we observe or test a smaller subset of the population, called a sample, and estimate the characteristics of the population based on the characteristics of the sample.

Complicating our estimation is the fact that the values obtained from samples may vary. Repeatedly tossing a fair coin with only six tosses per set or sample will likely yield results that vary from the 50% heads (H) and 50% tails (T) expected over the long run. We might get T-H-T-H-T-T, T-H-H-H-H-T, H-T-T-T-T-T, H-H-T-T-H-H, or even occasionally H-H-H-H-H-H. The values that the samples take on tend to vary from sample to sample, a phenomenon called sampling variability.
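Sampling variability is easy to see in a quick simulation. The following Python sketch (illustrative only; not from the text) tosses a fair virtual coin in samples of 6 and then in samples of 10,000, showing that small samples stray widely from the population value of p = .50 while large samples cluster tightly around it.

```python
import random

random.seed(7)  # repeatable illustration

def proportion_heads(n_tosses):
    """Toss a fair coin n_tosses times; return the proportion of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# Five small samples of 6 tosses each: the proportions bounce around .50.
print([round(proportion_heads(6), 2) for _ in range(5)])

# Five large samples of 10,000 tosses each: the proportions hug .50 closely,
# which is why the long-run (population) value is stable even though any
# single small sample may look nothing like it.
print([round(proportion_heads(10_000), 2) for _ in range(5)])
```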

cognitive errors Errors the mind makes as it processes information (e.g., perceptual, attentional, and memory errors).

The failure of the train engineer to see the red stop signal, the mistaken red panda sightings, and the false memories of alien abduction are all examples of cognitive errors, or errors the mind makes as it processes information. Cognitive errors can occur as we attend to a stimulus, perceive it, and store information about it in memory, as well as when we retrieve information for use in reasoning, solving problems, and making decisions. Because reasoning is a cognitive activity that depends on attention, perception, and memory, problems associated with each of these cognitive processes can contribute to thinking errors. First, not paying enough attention to a stimulus, as when the train engineer was texting instead of attending to his driving, impedes our ability to sufficiently process or even see that stimulus. Missing the red light apparently led the engineer to the false conclusion that it was safe to enter the tunnel. Second, the red panda sightings demonstrate that background knowledge and expectation can affect both what we perceive and what we remember; thus, people who expected to see a red panda incorrectly interpreted the sighting of a cat or some other similar-looking animal as the red panda. Finally, the people who recalled alien abduction probably misinterpreted experiences associated with sleeping, such as sleep paralysis and dreaming. Exposure to stories of alien abduction and images of aliens in the media probably contributed to their memory of such unusual experiences. To think critically, we must avoid making not only logical errors, but also cognitive errors that can derail the reasoning process. Serious consequences can occur when we reason with inaccurate information obtained from perception or memory, as when an eyewitness inaccurately recalls what she saw at the scene of a crime. These errors can lead to wrongful convictions because jurors often find eyewitness testimony and identifications highly persuasive. As of June 2016, the Innocence Project had helped overturn 342 wrongful convictions; approximately 70% of those convictions were at least in part due to incorrect eyewitness testimony and identifications that resulted in innocent people spending years in prison (Innocence Project, 2017). In this chapter, we will examine cognitive errors associated with attention, perception, and memory in depth and then discuss how each can affect eyewitness memory.

population All the possible observations on some variable, behavior, or characteristic that could be made. sample A subset of a population used to represent that population. probability of an event The number of ways the event of interest could occur divided by the total number of outcomes that could occur in that situation. regression toward the mean A pattern of results in which extreme scores on a test, when measured again, tend to be less extreme, or closer to the mean. heuristics Rules of thumb that serve as cognitive shortcuts to simplify judgment and decision making but that do not guarantee a good judgment or decision. magical thinking A kind of thinking that makes supernatural and paranormal assumptions about the workings of the world, especially attributing paranormal powers to oneself or to others (e.g., apparent mental causation). inductive reasoning A type of reasoning in which one often argues from a specific case to a general principle, such as a theory or hypothesis; a generalization from bits of evidence. law of contagion The principle, based on sympathetic magic, that things that come into contact may change each other for a period of time, even after they are no longer in contact.

The idea of superstition is very old and has been associated with religion in various ways. The word superstition comes from the Latin word superstitio, meaning to "stand over in amazement," as with religious awe. Some superstitions even have their origins in specific religious traditions. For example, some view the number 13 and the date Friday the 13th as unlucky because 13 people (Jesus and his 12 disciples) attended the "last supper" before the Friday of Jesus's crucifixion. Sometimes, religious believers have viewed the beliefs of other religious groups as superstitious. For instance, the Catholic Church has for many years objected to superstition as adhering to magical and other practices that deny God's divine providence. In the sixteenth century, when the Protestant reformer Martin Luther rejected certain Roman Catholic practices, he said the office of the pope was a source of superstition. Ironically, Luther also believed in witches who had supernatural powers—a belief that is now considered a superstition.

Individuals who believe in magic may assume that certain practices and rituals can harness supernatural forces to achieve seemingly impossible feats, even controlling the forces of nature. People may also view magic as a means to counteract evil forces and the magical powers of witches and others. The belief in witches is quite old. The Hebrew Bible and the New Testament both mention witches, for example. In the late Middle Ages in Europe, fears of their supernatural powers led to the execution of many people, often women, for witchcraft. Using magic, these so-called witches were said to fly through the air and cause harmful events, such as disease, pestilence, and storms. During the Inquisition in Europe, when religious and political institutions collaborated to prosecute and execute people accused of witchcraft, between 60,000 and 600,000 people were sentenced to death by hanging, being burned at the stake, or in some other way for this supposed crime.

Since the dawn of the Scientific Revolution in the 1600s, science has increasingly been viewed as a way to harness the powers of nature without resorting to supernatural explanations such as witchcraft (Wootton, 2016). In fact, magic and superstition themselves became the subjects of scientific investigation. In the nineteenth century, scientists identified an important kind of magic, called sympathetic magic, found in many cultures (Frazer, 1996; Tylor, 1974). Two fundamental laws of sympathetic magic are the law of similarity and the law of contagion. According to the law of similarity, things that resemble each other share important properties, captured in the expression "like goes with like." An example is the Haitian ritual in which a practitioner of voodoo burns a doll that looks like the intended victim in an attempt to harm that person. According to the law of contagion, things that come into contact may change each other for a period of time, even after they are no longer in contact. For instance, a person might not want to handle an item that belonged to an evil individual, fearing that some evil essence may linger in the object and be transferred to the "uncontaminated" person.

Today, many scientific-minded people reject as superstition any magical and supernatural explanations of events in the natural world (Park, 2008). For example, most would reject the idea that witches fly through the air on broomsticks, despite the many images of these feats at Halloween and in Harry Potter movies.
Although most people today accept scientific ideas, they may at the same time accept some superstitions and magical thinking. But how can an individual maintain such superstitions and believe in scientific thinking at the same time (Gelman, 2011)?

literature review A summary and analysis of research studies on a particular subject, often organizing the studies according to the hypotheses or theories that they support or do not support.

The inductive argument that follows is based, in part, on literature reviews by various authors who address the question of whether instruction improves students' critical thinking (CT) (e.g., Abrami et al., 2008; Bensley, 2011; Halpern, 1993, 2014). A literature review summarizes and analyzes research studies on a particular subject, often organizing the studies according to the hypotheses or theories that they support or do not support. It contains many basic arguments linked together to make an extended argument supporting one side or the other. The reviewer evaluates the evidence and arguments made to generalize from them and draw an inductive conclusion. Unlike the typical literature review, the extended argument presented in Table 3.2 contains evidence written in the form of a series of numbered statements followed by a conclusion to help you see how an inductive argument works. Typical literature reviews, more like the ones found in later chapters of this book, are written in essay format with paragraphs. As you read through the following inductive argument in Table 3.2, ask yourself, "What general conclusion should I draw from the evidence?"

Conclusion: Certain types of instruction improve students' CT, especially when instruction explicitly targets CT skill development through specific lessons and feedback.

Inadequate comparison. Failing to make a sufficient or adequate comparison when a comparison is implied. Correction: complete a contrast or comparison with a group, person, or condition.

Weasel words. Words that qualify or hedge so much that statements lose their force or importance. Correction: use words to describe conclusions that qualify them only as much as the strength of the evidence supports.

Vagueness. Using words that are imprecise, abstract, or too general. Correction: use the most specific and precise words you can.

Ambiguity. Using a word or making a statement that has more than one meaning or interpretation. Correction: use words and statements with a single meaning in a particular context.

Equivocation. Shifting the meaning of an important word in an argument from its original meaning to a different one. Correction: define important terms and stick to the same meaning.

This book's purpose is to help you learn how to think more effectively about questions, both in psychology and in everyday life. This is very different from telling you what to think or believe. Learning how to think critically about psychological questions is important because it is often how psychologists produce their best thinking. It is also important for you, as a student and citizen, to learn how to draw well-reasoned conclusions because doing so will serve you well throughout life. We live in the "Information Age," wherein we are bombarded with vast amounts of information from the media, online sources, and new scientific research studies. We need critical thinking (CT) to help us sort through, analyze, and evaluate this ocean of information.

But what exactly is critical thinking? Experts have offered many definitions of CT (e.g., Ennis, 1987; Halpern, 1998; Lipman, 1991; Paul, 1993). To simplify, we will examine some of the most important and consistent ideas emerging from attempts to define it. Robert Ennis, a recognized authority on CT, has provided a commonly cited definition that emphasizes the practical aspects of CT: "Critical thinking is reasonable, reflective thinking that is focused on deciding what to believe or do" (Ennis, 1987, p. 9). If CT really is practical, then it should help a person decide what to believe or do on a wide range of questions—from personal decisions such as "Should I take this nutritional supplement to help my cold?" to more scholarly questions such as "What is the best theory of depression?" To decide what to believe or do, a person's thinking should be reasonable, showing a careful consideration of the relevant evidence and examining the reasons for believing or not believing some claim. The word reasonable also implies that thinking is sound, logical, and fair. To say that CT is "reflective" means that it involves thinking deeply about things, especially about the quality of one's own thinking. To help us think critically about both scientific and everyday questions, we will use a similar definition of CT: Reflective thinking involved in the evaluation of evidence relevant to a claim so that a well-reasoned conclusion can be drawn from the evidence.

Examining the word critical can help, too. In everyday language, being "critical" often means being judgmental or making overly negative comments. In contrast, referring to one's thinking as "critical" implies careful evaluation or judgment (Halpern, 2014). The latter sense of the word critical reflects its origins in the Greek word kriterion, derived from krites, meaning "judge" (Beyer, 1995). Notice that kriterion is very similar to the English word criterion, which refers to an accepted standard used in judging. Critical thinkers use criteria to make reasoned judgments (Lipman, 1991).

pseudoscience An approach that masquerades as real science, but makes false claims that are not supported by scientific research and does not conform to good scientific method.

counterargument A claim and any corresponding evidence that run counter to or disagree with a previous claim or argument. argument from ignorance or possibility A type of thinking error violating the basic rule that a person should argue from true statements, from knowledge the person possesses, and from high-quality evidence to draw well-reasoned conclusions.

variable A characteristic or event of interest that can take on different values. relationship The connection or correlation between two variables such that changes in one variable occur consistently in relation to the other variable. hypothesis The predicted relationship between two or more variables.

To answer the many questions posed in a particular field, the scientist looks for lawful relations among variables by using specific methods and techniques based on rules for reasoning effectively about data. For example, suppose a social psychologist wants to find out why a person is helpful in one situation but not in another. She would study the variable—helpfulness—under various conditions. A variable is a characteristic or event of interest that can take on different values. In everyday, nonscientific terms, you might imprecisely refer to someone as "selfish" or "helpful." The psychologist would more precisely define the variable of helpfulness, in terms of how many times the research participant was observed to help. Or the researcher might rate the participant on a helpfulness scale ranging from 1 = not at all helpful to 7 = extremely helpful. This illustrates two ways to operationalize, or represent, the variable in terms of methods, procedures, and measurements.

Assigning a number on a helpfulness scale is a useful first step in describing a single variable, but it is limited in what it tells us. To validly measure the construct underlying helpfulness, scientists would use other reliable gauges that could provide converging information about it (Grace, 2001). Understanding something complex, such as helping behavior, requires that we understand the relationships between variables, not just one variable. A relationship between variables indicates that the values of one variable change consistently in relation to the values of another variable. Scientists seek to express relationships in precise terms that can be observed. For instance, on average, participants who score low—say, a 2—on the helpfulness scale tend to score higher on the amount of time spent talking about themselves in a 10-minute conversation—say, 8 out of 10 minutes.

Scientists refer to the predicted relationship between two or more variables as a hypothesis. More formally, a hypothesis is often deduced from a theory in the form of a specific prediction, as discussed in Chapter 2. It is a claim about what will happen if we assume that some theory is true. For example, from the general theory that people are basically selfish and motivated by self-interest, we might predict that if participants have the opportunity to help someone else in a new situation, they will not help. Hypotheses can also originate from other sources, including personal experience.

A hypothesis typically makes a prediction about one of two types of relationships: (1) an association or (2) a cause-and-effect relation. In an association, sometimes referred to as a correlation, the values of two variables are simply related or change together in a consistent way. In a positive association, as one variable increases, the other variable increases along with it; or as one variable decreases, the other variable tends to decrease. For instance, the more helpful people are, the more likely they are to volunteer. Putting this hypothesis in terms of scores on a 7-point helpfulness rating scale, we might say, "We predict that the higher a person's helpfulness rating score, the more often that person will volunteer to help." It is also true in this positive association that the lower the score on the helpfulness scale, the lower the tendency to volunteer—which illustrates that in a positive association, the values of two variables change in the same direction.
In a negative association between variables, the values of the two variables consistently change together in the opposite direction. Stating this as a correlational hypothesis for a research study, we might say, "We expect that the higher a participant's score on the 7-point helpfulness scale, the less time that participant will spend doing something to benefit himself or herself."

Showing a cause-and-effect relationship takes more than simply showing an association between variables. To show causation, changes in one variable (the cause) must occur before changes occur in the other variable (the effect). Consider this causal hypothesis: "If one group studies a list of words for longer than another group does, then the group that studied longer will recall more on a test of the new words." In this hypothesis, how long people study (the cause) must happen before the effect (how many new words they learned). It makes no sense for a causal form of this hypothesis to predict that people will first recall the new list of 30 words and then will study the words for a longer or shorter time. They are not new words if one has already recalled them.

People find relationships between variables in their daily lives, not just in scientific research. But how good are people at assessing whether variables are correlated? Look at the data in Table 4.1, showing the frequency of cases in which an abnormal behavior is either present or not present in relation to the moon's phase (full or not full). Suppose, as shown in cell A, that people working at a mental health facility observed 12 cases in which the moon was full and people behaved abnormally. Do the data in the fourfold table show that the presence of a full moon is related to abnormal behavior? Examining the data in Table 4.1, many of the mental health facility staff mentioned in Chapter 1 would likely find a correlation between the full moon and abnormal behavior, concluding that people tend to behave abnormally during a full moon. But they would be mistaken. This thinking error occurs because people tend to notice co-occurrences of events, like those shown in the higher frequencies of cell A, and do not take into account the frequencies in other cells of the table. You are not likely to hear people say, "Hey, there's a full moon tonight, but nobody is behaving strangely!" (Kohn, 1990). This demonstrates how people often fail to take into account the six cases in cell C who are not behaving abnormally during a full moon. Nor are people likely to say, "Wow, the patients are behaving abnormally, but there's not even a full moon!" which demonstrates how they tend to ignore the six cases in cell B. We must take these cases into account because they provide evidence that no relation exists. What does the research say about people's ability to analyze data like these?
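A quick calculation makes the point. In the Python sketch below, the frequencies for cells A, B, and C (12, 6, and 6) come from the example above; the value for cell D (no full moon, no abnormal behavior) is not given here, so the 3 used below is a hypothetical stand-in for illustration. To judge whether a relationship exists, compare the conditional proportions rather than focusing on cell A.

```python
# Fourfold (2 x 2) table of moon phase by abnormal behavior.
# A and B: abnormal behavior present; C and D: abnormal behavior absent.
# A = 12, B = 6, C = 6 come from the example above; D = 3 is a
# hypothetical value, since cell D is not given in the text.
A, B, C, D = 12, 6, 6, 3

p_abnormal_given_full = A / (A + C)        # 12 / 18 = 0.67
p_abnormal_given_no_full = B / (B + D)     # 6 / 9   = 0.67

print(f"P(abnormal | full moon)    = {p_abnormal_given_full:.2f}")
print(f"P(abnormal | no full moon) = {p_abnormal_given_no_full:.2f}")
# The two proportions are equal, so these data show no association at all,
# even though the 12 co-occurrences in cell A feel compelling on their own.
```

With these numbers, abnormal behavior is exactly as likely when the moon is not full as when it is, which is why cells B, C, and D must be counted before inferring any correlation.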

The idea that clinicians are prone to certain kinds of thinking errors may run counter to our idealized conception of them as experts, but the research clearly shows that they make many of the same thinking errors as people with less training. Reviews of research on clinical judgment have documented how clinicians make errors based on representativeness, availability, and hindsight bias, just as do people with less training (Dumont, 1991; Faust, 1986; Garb, 1998; Ruscio, 2007). The error of rapid diagnosis occurs when clinicians make a diagnosis within just a few minutes. Gauron and Dickinson (1969) found that when psychiatrists viewed a filmed interview between therapists and clients, the psychiatrists often formed diagnostic impressions in as little as 30 to 60 seconds. Given that good differential diagnoses require collection and consideration of many different kinds of client data, this tendency suggests that clinicians who make rapid diagnoses may be jumping to hasty generalizations about their clients, prematurely drawing a conclusion before carefully considering all the relevant, available information. In fact, some early studies showed that psychiatrists sometimes made diagnoses no better than their secretaries (Goldberg, 1959, 1968).

Another reason for errors in diagnosis may be that clinicians sometimes tend to engage in too much "backward reasoning" at the expense of "forward reasoning." As Dumont (1991) has noted, clinical reasoning is much like the theory building of scientists. Clinicians engage in forward reasoning when they collect information about their clients and try to find patterns in it, forming hypotheses about possible causes of the behavior. They engage in backward reasoning when they test hypotheses using theory and their knowledge to decide whether the evidence obtained is consistent with their hypotheses concerning the client's problem. Good reasoning depends on both forward and backward reasoning. Unfortunately, when a clinician considers only one hypothesis or is overly influenced by preconceptions, this narrow perspective can lead to excessive backward reasoning and poor diagnoses.

Many studies suggest that clinicians are often biased by their background knowledge and expectations. For example, Temerlin (1968) found that clinicians who were told which diagnosis an individual had previously received from high-prestige clinicians were more likely to give that same diagnosis than clinicians who were not given that information. Stereotypes can also induce clinicians to reason backward from unwarranted assumptions they make about clients. Garb (1997) reviewed the accuracy of clinical judgments in several studies in which bias effects were replicated. He found that Hispanics and African Americans were more often misdiagnosed with schizophrenia than were Whites in those cases in which clients had psychotic affective disorders. Consistent with gender stereotypes, males tended to be diagnosed more often with antisocial personality disorder compared to females, whereas females tended to be diagnosed more often with histrionic personality disorder (i.e., excessive displays of emotion). Finally, a bias for social class was demonstrated in the tendency of clinicians to refer more middle-class people for psychotherapy than lower-class individuals. Preconceptions about people who seek help for psychological problems may also create expectations about the severity of those problems.
The fact that someone is consulting a clinician implies that the individual has a problem or at least has symptoms that need to be explained (Ruscio, 2007), which sets in motion a search for the potential causes of the individual's symptoms. If the clinician is not wary, the human tendency to find patterns in unrelated events may lead to the erroneous conclusion that the client has a disorder, when none actually exists. Clinicians sometimes find illusory correlations in client data, finding disorders that are not there (Chapman & Chapman, 1967). In addition, hindsight bias and knowing the outcome of a client's behavior may make a client's diagnosis appear to have been inevitable. A clinician may think, "Of course, my client has a substance abuse disorder involving alcohol. He was referred to me because he was charged with driving while intoxicated after he wrapped his car around a tree." Succumbing to these thinking errors may be partly why clinicians tend to overdiagnose their clients.

rapid diagnosis A diagnostic error in which clinicians make a diagnosis within just a few minutes; compare with hasty generalization. forward reasoning The practice of collecting information about clients to try to find patterns in it, forming hypotheses about possible causes of the behavior. excessive backward reasoning Reasoning about clients' problems in which a clinician is overly influenced by preconceptions, perhaps considering only one hypothesis.

According to the American Psychological Association's ethical guidelines, clinicians have a responsibility to use tests and inventories that demonstrate reliability and validity. Reliability refers to the consistency in the measurement of scores on a test. A test that is reliable should reveal similar scores when people take the test multiple times. Likewise, if two different people are scoring the same person on a particular measure, they should assign similar scores. In addition, if two different clinicians are reliably diagnosing the same person, they should give the same diagnosis. In contrast, the validity of a test concerns whether a test or measure is actually measuring what it was intended to measure. For instance, a valid test of depression like the Beck Depression Inventory should predict things like whether admission to a mental health facility for depression is necessary and should be positively correlated with other established measures of depression.

Decisions about which assessment procedure to use may also depend on the clinician's training and theoretical perspective. On the one hand, a behavior therapist is likely to use observation of a client's problem behaviors. On the other hand, a psychoanalyst would be much more likely to use a projective technique, such as the Rorschach Inkblot Test (Figure 13.2). The Rorschach test is based on the assumption that the client's interpretation of these ambiguous figures will reveal unconscious conflicts that should be resolved in therapy.

But how good is this test? The Rorschach test and its more recent interpretive method, the comprehensive system developed by Exner (1974), have not been well supported as tools for assessment (Wood, Nezworski, Lilienfeld, & Garb, 2003). Even so, the Rorschach test remains popular among clinicians due to the seemingly amazing ability of some experts to use just a few responses to the inkblots to accurately describe clients' personalities and psychological problems. It is likely that the experts' apparent accuracy depends on their ability to provide plausible, general descriptions of the people who take this test (Wittenborn & Sarason, 1949). Yet these same experts are unable to make specific predictions about the individuals whose responses they have interpreted, suggesting that the Rorschach test lacks validity. Further challenging the validity of the Rorschach test are studies reviewed by Wood and colleagues (2003), which showed that scores on self-report inventories thought to identify certain disorders were not related to interpretations of Rorschach responses. Other research has found that different clinicians offer markedly different interpretations of the same clients' responses to the inkblots, suggesting that the Rorschach test is subjective and not reliable. Taken together, these findings suggest that the notion that the Rorschach test can accurately tell us a great deal about an individual's personality is a misconception.

Clinicians' use of general descriptions of people in interpreting Rorschach responses is akin to the "Barnum effect," in which people accept very general statements in their horoscopes as accurate descriptions of themselves. Kadushin (1963) called them "Aunt Fanny descriptions"—that is, statements written so broadly that they could be true of anyone's "Aunt Fanny." In a demonstration of how little diagnostic information such case descriptions provide, Forer (1949) asked students to complete a psychological test he dubbed the "Forer Diagnostic Interest Blank."
He told them that based on their test results, he would give each student an individualized diagnostic description. In reality, each student received the exact same diagnostic description, which included general statements such as "You have a tendency to be critical of yourself" and "At times you are extroverted, affable, and sociable, while at other times you are introverted, wary, and insecure inside" (Forer, 1949, pp. 120-123). Even though they were given the same description, the students rated Forer's general assessments as very revealing of their basic personality characteristics. Kadushin (1963) obtained similar results when he gave 60 supervisors of social work students the same diagnostic case summary. Three groups of 20 supervisors were issued the same summary for three distinct social work cases—yet the supervisors rated the quality of the diagnostic summary as equally descriptive of the three different cases. This result suggests that general case descriptions provide very little useful information for diagnosing people (Meehl, 1973). The practical implication is that clinicians who do not ask relevant, specific questions in a clinical interview will not learn the particular signs and symptoms they need to make a specific diagnosis.

reliability The consistency of scores obtained on a test or other measure. validity The degree to which a test or measure actually measures what it was intended to measure.
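To make these two concepts concrete, here is a minimal Python sketch of one common reliability check, the test-retest correlation. The inventory scores are invented for illustration; real inventories, samples, and analyses would be more elaborate.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between paired score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical depression-inventory scores for five people tested twice,
# two weeks apart. High reliability means people keep roughly the same
# rank order from one testing to the next.
time1 = [10, 25, 17, 30, 12]
time2 = [12, 24, 15, 31, 11]

print(f"Test-retest reliability: r = {pearson_r(time1, time2):.2f}")
# Note that reliability alone does not establish validity: a test could
# yield highly consistent scores while measuring the wrong construct.
```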

M-B (mind-brain) dualism A position which assumes that the mind and body are two different entities. monism The position that the mind and brain are really all one thing. materialism A form of monism that assumes the world is physical and thus the mind and brain are both fundamentally physical entities. reductionist A materialist who assumes that scientific research will ultimately be able to explain all mental states in physical terms when relevant biochemical and other physical changes within the nervous system are understood. functionalist A scientist who regards mental states and processes as something to be explained in their own right, and who assumes that mental processes serve certain functions.

To understand how various thinkers have approached the M-B problem, we focus on four common positions. Probably the most prevalent is M-B dualism, a position which assumes that the mind and body are two different entities—exactly as the term dual implies (Hergenhahn, 1992). Some dualists believe that the mind and the body are two different substances or have completely different properties (Brook & Stainton, 2000). The French philosopher René Descartes contributed much to this dualistic view. His famous saying, "I think, therefore I am," shows that Descartes did not doubt the reality of his mind doing the doubting, even though he could doubt the reality of the physical world. Although the soul or mind seems to lack physical substance, his scientific observations told him that the body was a physical entity. To resolve this conflict, Descartes proposed that the nonphysical mind and the physical body are two separate entities that interact by means of a small gland near the center of the brain called the pineal gland.

Many dualists believe that although our bodies are limited by time and space, our minds or souls can go beyond these limitations, even surviving physical death. For example, many Christians believe that the body is material, or physical, and that the soul is immaterial, or nonphysical (Robinson, 1981). The nonphysical soul resides in a person's body during life, but it leaves the physical form that contains it at the time of death. It is not surprising, then, that M-B dualists also tend to believe that when people have an OBE, the mind actually leaves the body (Blanchfield, Bensley, Hierstetter, Mahdavi, & Rowan, 2007).

In contrast to dualism, many other philosophers, psychologists, and scientists endorse monism, the position that the world is really all one thing, exactly as the term implies. According to monists, the mind and the body simply appear to be different—but they are actually one entity. The most common form of monism, called materialism (or physicalism), assumes the world is physical and governed by the influence of the environment on the physical body. Mental experience, therefore, is just an aspect of the physical operation of the brain. Still other monists believe that the mind and the brain are really identical and that every mental state has a corresponding brain state.

Some materialists believe that if we knew enough, we would see that all behavior and mental processes could be reduced to the brain processes operating in the physical world. These so-called reductionists assume that scientific research will ultimately be able to explain mental states when we understand the relevant biochemical and other physical changes that occur in the nervous system. Accordingly, reductionists focus on research investigating how the nervous system provides the basis for behavior and mental processes.

Still other scientists, known as functionalists, regard mental states and processes as something to be explained in their own right and assume that mental processes serve certain functions. Although many functionalists are sympathetic to the claims of materialism, they do not assume that all behavior and mental processes can be reduced to physical events. They might agree that the function of the brain is to produce mental processes and experience, but they are less concerned with how the brain accomplishes this amazing feat.

Conspiracy theories are enacted using three classic roles: conspirators, saviors, and dupes (Ruscio, 2006). In the Pizzagate example, the conspirators who supposedly engaged in the nefarious activities were Hillary Clinton, John Podesta, and Comet Ping Pong. The saviors were those who called attention to the conspiracy online and who recruited other saviors to find evidence of the conspiracy in the emails. The dupes were all the mainstream people being fooled by the conspiracy, who accepted the conventional explanation that Comet Ping Pong was just a pizzeria and that pizza was just pizza.

Many conspiracy theories, such as Pizzagate, are political in nature. Two other good examples are the theory that President George W. Bush and government officials knew about the 9/11 attacks on the World Trade Center and the Pentagon beforehand, and the theory that Lee Harvey Oswald was not the lone gunman who assassinated President John F. Kennedy, as the official Warren Commission had concluded, but was rather part of a conspiracy organized by the CIA, the Mafia, or both. Conspiracy theories like these often arise in response to national catastrophes that leave people feeling powerless and vulnerable in the face of situations that are beyond their control. Likewise, believers in conspiracy theories are often marginalized members of a group who lack power and seek a scapegoat or someone to blame for their disadvantaged situation. Conspiracy theorists believe that they are "in the know," so their special knowledge of the conspiracy may empower them, raise their self-esteem, and make them feel morally superior to those who do not know the hidden truth (Byford, 2011).

People who disseminate fake news and false conspiracy theories to individuals in their social media network and to other web-based outlets are likely sending this information to people who are receptive to it. This trend suggests that people who accept fake news and false conspiracy theories are motivated to draw a conclusion that they likely already favor, as discussed more fully in the next chapter. Indeed, the tendency for people to seek out partisan news outlets that are in agreement with their own views makes the web function like an echo chamber in which people hear what they want to hear. This propensity seems to support the idea that conspiracy theorists are irrational and not inclined to critically examine all sides of an argument. Ironically, Harambam and Aupers (2017) have recently found that conspiracy theorists tend to view themselves as critical freethinkers, distinguishing themselves from the "sheepish" people in the mainstream who fail to question conventional explanations.

Are they right? Sometimes conspiracy theorists are correct, as happened when an international conspiracy of organized crime groups was suspected of colluding to plan and coordinate criminal activities—a theory that was verified when a meeting of these crime groups was discovered in upstate New York. Yet, as we have seen, many conspiracy theories are implausible and do not seem to be well-reasoned. At this point, it is instructive to examine other conspiracy theories that make claims related to science. For instance, climate change deniers maintain that the notion that human activity causes global warming (anthropogenic global warming) is a hoax perpetrated by scientists and politicians.
A second science-related conspiracy theory asserts that the landing of astronauts on the moon was a hoax staged by NASA and shot in the desert to make it appear as though the United States beat the Soviet Union in the space race to the moon. Another popular conspiracy theory that is partly science fiction claims that the U.S. military and government have engaged in a massive cover-up of the 1947 crash of an alien spacecraft in Roswell, New Mexico. Despite convincing scientific evidence to the contrary, many people still maintain that this conspiracy theory is true, which suggests that the theory has met important criteria for determining that it is pseudoscientific (see Table 5.1). Thus, it is useful to compare false science-related conspiracy theories to pseudoscience. Like pseudoscience, science-related conspiracy theories take on the appearance of real science, but they do not develop the way scientific theories do. Good scientific theories are largely consistent with high-quality research evidence, but conspiracy theorists seldom test their hypotheses in any rigorous fashion. When a scientific test is conducted and does not support their predictions, conspiracy theorists often add assumptions to create excuses for their failed predictions, making the theories ever more complex and ultimately unfalsifiable. In contrast, true scientists value parsimony, or keeping their theories as simple as possible while still accounting for the data. Like proponents of pseudoscience, conspiracy theorists also commit errors in reasoning, such as arguing that because less than 100% of scientists accept the theory of anthropogenic global warming, the theory may not be true. But considering that 97% of climate scientists agree that, based on scientific research, human activities are indeed contributing to global warming (Cook et al., 2016), it seems that the conspiracy theorists are trying to shift the burden of proof away from their side to escape disconfirmation (see Chapter 2). This example also shows how conspiracy theorists commit another thinking error known as black-and-white thinking, also called either-or thinking, when they propose that there are only two extreme positions or options to be considered. Climate change deniers do this when they encourage people to incorrectly conclude that if there are any doubts about a scientific theory, then the theory must be false (Prothero, 2013). But acceptance of a scientific theory is not all or none and depends on the strength of the evidence overall. These examples suggest that to avoid drawing incorrect conclusions from misinformation on the Internet, we must use good scientific reasoning and be alert to potential thinking errors. Studies suggest that people who take a more analytic approach to conspiracy theories endorse them less (van Prooijen, 2017) and that exposing conspiracy theory believers to rational counterarguments can reduce their belief in that theory (Orosz et al., 2016). Nevertheless, convincing believers to reject false conspiracy theories is challenging, especially when the believers' worldview conflicts with scientifically accepted theories (Lewandowsky & Oberauer, 2016) and when believers are prone to a more intuitive thinking style (Swami, Voracek, Stieger, Tran, & Furnham, 2014).

black-and-white thinking A type of thinking error in which only two extreme positions or options are offered when others could be considered, also called either-or thinking. See also false dilemma and false dichotomy.

Different scientific research methods and designs provide evidence that varies in quality. Because the methods of science involve using observation to test hypotheses and to evaluate theories, the quality of the evidence offered by scientific research depends on the ability to collect high-quality data. The best evidence comes from studies in which observations were made with objectivity, without error, and under carefully controlled conditions. The quality of the evidence provided by scientific research methods also depends on the degree to which a particular method can establish a causal relation between variables. Recall that the goals of psychology as a science are to describe, predict, explain, and control or manipulate behavior. In order to reach the important goal of explaining behavior, we must be able to show the causes of behavior. When we speak of a cause, we are referring to something that has produced an effect. A cause precedes the event it produces (the effect). Knowing the cause can help explain why the effect happened (Zechmeister & Johnson, 1992). To better understand how something might be the cause of some behavior, let's look closely at the three criteria for establishing causation, shown in Table 4.2. Recall that a criterion is a standard that must be met or a condition that must be present in order to confirm that something is true. To illustrate the use of these criteria, let's apply them to the question of whether precognition caused the supposedly correct prediction of the plane crash in the Lee Fried example from Chapter 3. To show that the covariation criterion was met, we would have to demonstrate that Fried's precognitive ability and the predicted event changed together. We would have to prove that when Fried had his premonition, it was systematically related to the crash. The two events appear to have occurred close together in time, suggesting that covariation was present. Also, it appears that the precognition occurred before the letter was delivered, suggesting that the criterion of time order had been met, although no other verification of this criterion was demonstrated. Finally, Fried's handing over a sealed letter to a public figure suggested that the letter would not be tampered with and that the event could be documented. This appears to eliminate the alternative explanation of cheating as the cause of the correct prediction. A closer examination of the events shows that none of the criteria were actually met. Fried's confession that he was a magician, which implies that he used deception, suggests that the two events that actually covaried were (1) the swapping of the letters and (2) the reading of the new letter, which was presumed to be the original letter with the prediction. But the contents of the letter had been put in the envelope after the jumbo jet crashed, so no premonition occurred before the event. Thus, time order was not met. Nor were two plausible alternative explanations eliminated. First, someone should have tried to falsify Fried's claim by checking how many other events he had predicted accurately. Psychics frequently guess, so sometimes their predictions do turn out to be right, simply by chance. The second, and much more plausible, alternative explanation that should have been checked before concluding that precognition was the cause is that Fried engaged in trickery. For example, did Fried ever have access to the letter after the crash occurred?
Only Fried's word supported his use of precognition, and he later cast doubt on his claim that he had precognitive ability. It is clear from analysis of this example that it is virtually impossible to demonstrate causation in an anecdote. In fact, only the true experiment, one of the scientific research designs we discuss next, can put us in the position to infer causation.

cause An event that precedes another event and produces an effect in the second event; it meets all three criteria for showing causation. criteria for establishing causation Three conditions that must be met to show something is a cause: (1) covariation, (2) time order, and (3) elimination of plausible alternative explanations.

Three Criteria for Establishing Causation
1. Two events must covary or vary together consistently (covariation).
2. One event must occur before the other (time order).
3. Plausible alternative explanations for the covariation must be eliminated.
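To make the criteria concrete, here is a minimal Python sketch, with all variable names and numbers invented for illustration (the "heat," ice cream, and drowning variables are hypothetical stand-ins). It simulates a lurking third variable that makes two events covary strongly even though neither causes the other, showing why covariation alone cannot establish causation until plausible alternative explanations are eliminated.

```python
import random

random.seed(1)

# Invented scenario: a third variable ("summer heat") drives both ice cream
# sales and drowning incidents, so the two covary without either causing
# the other.
n = 1000
heat = [random.random() for _ in range(n)]            # confounding variable
ice_cream = [h + random.gauss(0, 0.1) for h in heat]  # driven by heat
drownings = [h + random.gauss(0, 0.1) for h in heat]  # also driven by heat

def correlation(x, y):
    """Pearson r: measures covariation (criterion 1) and nothing more."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Strong covariation (about .9 here), yet ice cream does not cause drownings:
# the plausible alternative explanation (heat) has not been eliminated.
print(round(correlation(ice_cream, drownings), 2))
```

The sketch also illustrates why the table below lists "cannot show cause and effect" as a limitation of correlational studies: the correlation is real, but only a design that rules out the confounding variable can support a causal conclusion.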

Case study: Detailed description of one or a few subjects. Strengths: provides much information about one person; may inform about a person with special or rare abilities, knowledge, or characteristics. Limitations: may be unique and hard to replicate; may not generalize to other people; cannot show cause and effect.

Naturalistic observation: Observations of behavior made in the field or natural environment. Strengths: allows observations to be readily generalized to the real world; can be a source of hypotheses. Limitations: allows little control of extraneous variables; cannot test treatments; cannot show cause and effect.

Survey research: A method, often in the form of a questionnaire, that allows many questions to be asked. Strengths: allows economical collection of much data; allows for study of many different questions at once. Limitations: may have problems of self-reports, such as dishonesty, forgetting, and misrepresentation of self; may involve biased sampling.

Correlational study: A method for finding a quantitative relationship between variables. Strengths: allows the researcher to calculate the strength and direction of the relation between variables; can be used to make predictions. Limitations: does not allow random assignment of participants or much control of subject variables; cannot test treatments; cannot show cause and effect.

Quasi-experiment: A method for comparing treatment conditions without random assignment. Strengths: allows comparison of treatments; allows some control of extraneous variables. Limitations: does not allow random assignment of participants or much control of subject variables; cannot show cause and effect.

True experiment: A method for comparing treatment conditions in which variables can be controlled through random assignment. Strengths: allows true manipulation of treatment conditions; allows random assignment and much control of extraneous variables; can show cause and effect. Limitations: cannot manipulate and test certain variables; may control variables and conditions so much that they become artificial and unlike the "real world."

confusing correlation with causation A type of thinking error in which one infers that one of two correlated variables is the cause of the other. post hoc reasoning A thinking error in which a person infers that some event that occurred before a second event was the cause of the second event; shortened from post hoc ergo propter hoc ("after this, therefore because of this").

The history of the treatment of mental disorders has been a mixture of prescientific, pseudoscientific, and even scary practices that have sometimes led to the more recent development of treatments that really work. For centuries, and in some places even today, people have used various techniques to cast out evil spirits believed to cause strange and unacceptable behaviors in "possessed" people. Psychologists would now regard many of these unfortunate people as having suffered from mental disorders. Sometimes an effective technique, such as electroconvulsive therapy (ECT), may have developed from earlier efforts to shock some evil entity out of a person. ECT, which to some might seem barbaric, has been refined so that nowadays the patient is unconscious during treatment to minimize the distress. Clinical researchers have found ECT to be an effective treatment for depression that is resistant to other therapies (Pagnin, de Queiroz, Pini, & Cassano, 2004). In contrast, other treatments—such as the psychosurgery practiced in the 1940s, in which an ice pick was inserted through the eye socket to destroy frontal areas of the brain in an effort to control aggression—have been discarded because they were found to be ineffective. These examples raise the important question of how we find out which treatments really help people with psychological problems and which treatments are ineffective or might even be harmful. This concern reflects an increasing commitment among many psychologists to evidence-based treatments (EBTs)—that is, treatments for psychological problems whose effectiveness is validated through high-quality scientific research. Paralleling this emphasis on EBTs is the concern that some psychotherapists use treatments that have no scientifically demonstrated effectiveness and that might be pseudoscientific. To better understand the development of this emphasis, it is useful to briefly review the history of the movement toward EBTs in psychology and psychiatry. In a review tracing the evolution of EBTs, Gordon Paul (2007) began with what he called the "prescientific era" in psychology, which lasted until the 1920s. During this time, different schools of psychology had their own approaches and made little effort to empirically examine the effectiveness of the treatments they advocated. The movement toward EBTs partly coincides with the development of behavior therapy and its many spinoffs, which borrowed heavily from learning theory and the behaviorist approach while testing treatments scientifically. In 1924, Mary Cover Jones, a student of J. B. Watson, applied the behaviorist learning theory approach to help Peter, a young boy with a phobia (irrational fear) of rabbits. She first allowed Peter to observe other children playing with a rabbit without any negative effects. Then she exposed Peter to the rabbit when he was not showing fear. Over time, Peter was able to move closer and closer to the rabbit without showing fear. Later, Joseph Wolpe (1958) developed a behavior therapy called systematic desensitization, combining elements of Jones's treatment with a relaxation technique that Edmund Jacobson (1935) had shown to be effective in reducing anxiety. In systematic desensitization, a phobic person learns to move closer to a feared object or engage in a feared activity while being helped to relax.
The behaviorists' early attempts to test the outcomes of treatments based on learning-theory principles were important to the scientific study of psychotherapy, but those early studies were often case studies and simple demonstrations. After World War II, many clinicians, who had previously been trained in research but got involved in clinical and applied work during the war, became increasingly concerned about the best way to train their new students for clinical practice (Paul, 2007). This concern eventually led to an important conference in Boulder, Colorado, in 1950 that addressed graduate education in clinical psychology (Raimy, 1950). Out of this conference came the "Boulder Model," also known as the scientist-practitioner model, which proposes that graduate programs in clinical psychology should train psychotherapists to become both research scientists and practitioners. Increasing attention to the scientific study of psychotherapy was followed by yet more conferences focused on the scientific basis of psychotherapy, but some clinicians continued to resist the idea that psychotherapy could be studied scientifically. Much debate centered on the "criterion problem," or how to determine what makes one outcome of psychotherapy better than another. Recall that critical thinkers seek to carefully define their terms and are often concerned with the criteria, standards, and conditions that must be met for a statement to be considered true. Increasingly, psychologists came to agree that demonstrating the effectiveness of psychotherapy would require careful use of scientific research methods like those applied in other parts of psychological science. In turn, clinical researchers needed to carefully design experiments that would allow them to unambiguously interpret the results of manipulating independent variables, such as the comparison of various treatments. An important push in this direction came in the 1960s and 1970s, from clinical research that focused on behavior therapies. After conducting many studies, researchers discovered that cognitive versions of some therapies were effective as well. For example, covert desensitization, in which a phobic person simply imagines moving closer to the feared object, also helped some patients overcome their phobias. Recall from Chapter 1 that Bandura's social learning theory assumes that observing someone engaging in a behavior makes it more likely the observer will also engage in the behavior (especially if doing so was reinforced). This is likely a reason why Mary Cover Jones was successful in treating Peter—because Peter saw the other children playing with a rabbit without suffering any dire consequences and was subsequently able to imitate their behavior. Still other types of cognitive behavior therapies were developed that could effectively treat depression. This led to refinements of the original behavioral learning theory, called cognitive theories of depression, and more generally to treatments called cognitive behavior therapy (Beck, 1963). Although the theory of how cognitive behavior therapy works still lags behind the therapeutic technology, scientific research has made some progress in this area, improving the theory's predictive and explanatory power (Beck, 2005). This lag of theory behind practice is common in the history of science. Researchers have discovered many effective treatments before formulating a theory as to why a particular treatment was effective. 
For instance, certain presurgical sedatives were observed to also help reduce psychosis in patients with schizophrenia—that is, to ease these patients' severe symptoms that reveal a disconnection from conventional reality. Subsequently, scientists began the careful, systematic study of the biochemical causes of schizophrenia that would explain why these drugs worked. This line of research ultimately led to the hypothesis that antipsychotic drugs block high levels of the neurotransmitter dopamine at certain receptor sites in the brain. For many clinical researchers, the efficacy of a specific treatment came to mean how well people functioned after receiving that treatment when compared with other treatments for the same problem under well-controlled testing conditions. Specifically, the gold standard of efficacy research has become the randomized-trial experiment (Gaudiano, Dalrymple, Weinstock, & Lohr, 2015). In such studies, experimenters randomly assign clients with a certain psychological disorder to one of two groups: One group undergoes a specific treatment, while the other serves as a control group whose members either wait for treatment or receive some mock or alternative treatment. In this way, the effectiveness of treatments can be compared under controlled conditions, thereby reducing the chance that some extraneous variable might confound the results (see Chapter 4). Clinical researchers use several strategies to control expectations and other potentially confounding variables to determine how effective a treatment actually is. For example, placebo groups are used to control for expectations created by the appearance of getting an effective treatment. In such a study, the experimental group receives the real treatment or therapy, and the placebo control group, which does not receive the treatment, is given a fake, or sham, treatment that looks like the active treatment but actually has no effect. By comparing the two groups, experimenters can be fairly sure that any observed effect was due to the treatment and not the expectations associated with receiving a treatment. Another potential confounding variable, called spontaneous remission, can mask the true effectiveness of a treatment. In spontaneous remission, a person recovers from a problem spontaneously over time without the aid of a treatment. You have probably observed yourself recover from a condition on your own (spontaneously) without taking any kind of remedy. If a person gets better during the same time that treatment is given, it is difficult to determine whether that improvement was due to the treatment or to spontaneous improvement; alternatively, both factors might contribute to the improvement. Once again, the best way to determine the true relationship is to randomly assign participants to a control group that does not get the treatment, in which some participants are expected to improve spontaneously. When the control group's outcomes are compared with those of the group whose members actually received the active treatment, any observed effects can be shown to be over and above the usual rate of spontaneous remission.
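As a rough illustration of why the placebo control matters, here is a minimal Python simulation; all of the improvement rates below are invented assumptions for the sketch, not data from any study. Both simulated groups improve because of spontaneous remission and placebo expectancy, so only the difference between the groups estimates the treatment's efficacy.

```python
import random

random.seed(42)

# All rates below are invented for illustration, not data from any study.
SPONTANEOUS_REMISSION = 0.20  # chance of improving with no treatment at all
PLACEBO_EXPECTANCY = 0.10     # extra improvement from expecting a treatment
TREATMENT_EFFECT = 0.25       # extra improvement from the real treatment

def improved(gets_real_treatment):
    """Simulate one participant's outcome after random assignment."""
    p = SPONTANEOUS_REMISSION + PLACEBO_EXPECTANCY
    if gets_real_treatment:
        p += TREATMENT_EFFECT
    return random.random() < p

# Simulated participants are interchangeable, so an even split stands in
# for random assignment to the treatment and placebo control groups.
treatment_group = [improved(True) for _ in range(500)]
placebo_group = [improved(False) for _ in range(500)]

def rate(group):
    return sum(group) / len(group)

# The placebo group's rate (about .30) reflects remission plus expectancy
# alone; the difference between groups estimates the treatment's efficacy.
print(f"treatment: {rate(treatment_group):.2f}, placebo: {rate(placebo_group):.2f}")
```

Note that if the placebo group were omitted, the treatment group's raw improvement rate would overstate efficacy by the combined remission and expectancy baseline, which is exactly the confound the randomized design controls for.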

evidence-based treatments (EBTs) Treatments for problems and disorders whose effectiveness is validated through high-quality scientific research. behavior therapy A type of psychotherapy that approaches the treatment of psychological problems by helping clients develop more adaptive behaviors. efficacy The effectiveness of treatments demonstrated by clinical research studies. randomized-trial experiment A kind of true experiment used in clinical science in which experimenters randomly assign subjects to one of multiple groups: at least one that receives a treatment expected to be effective in treating a condition, and one that serves as a control group, often receiving a placebo. spontaneous remission A situation in which a person recovers from a problem unexpectedly over time without the aid of a treatment.

The critical thinker uses the basic tool of reasoning to come to a conclusion. Psychologists and thinkers in all disciplines use reasoning to help them think more clearly about the questions they ask and to advance the state of their knowledge. Reasoning is a powerful tool in this inquiry process because it prescribes conventional ways for us to use language so that arguments can be communicated clearly and analyzed consistently and effectively. In particular, it provides us with rules for relating evidence to conclusions. Two types of reasoning commonly used in psychology and other sciences are inductive and deductive reasoning. In inductive reasoning, we often reason from specific cases to a general principle, such as a theory or hypothesis. For example, the great nineteenth-century French neurologist Pierre Paul Broca made repeated observations on people with damage to the left hemisphere of the brain; he generalized from these cases that if the damage was to a specific area in the frontal lobe, as shown in Figure 2.1, the people would have difficulty speaking. For instance, one patient with damage to this area was referred to as "Tan" because no matter what he tried to say, it came out "tan" or "tan-tan" or "tan-tan-tan." Generalizing from these cases and others studied earlier by Marc Dax, Broca used inductive reasoning to support a theory of speech production. The theory stated that because this area in the frontal lobe (later named Broca's area, in his honor) regulates speech production, damage to it would produce a disturbance in speaking (Broca, 1966). The left side of Figure 2.2 illustrates how specific cases of Broca's area damage in which speech is disturbed lead to this conclusion (generalization). A major use of inductive reasoning in science is the justification of a general rule or theory. In deductive reasoning, we often proceed in the opposite direction, from the general theory to the specific case. For example, a psychologist may reason deductively from the general principle of Broca's theory to a specific case, such as Bill's, shown on the right side of Figure 2.2. We can use deductive reasoning to reason from a general theory to make a prediction about a case. Notice that on the right side of Figure 2.2, deductive reasoning starts with a theory, proceeds to a specific case, and concludes with a prediction about that specific case. In contrast, the left side of Figure 2.2 shows that inductive reasoning starts with specific cases of brain damage and then leads to a generalization, Broca's theory. Each kind of reasoning has somewhat different applications and rules, but both use evidence to draw conclusions about claims. In this chapter, we focus on the process of deductive reasoning; in Chapter 3, we explore inductive reasoning more closely.
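The direction of the two kinds of inference can be shown in a toy Python sketch; the cases and rule below are invented, simplified stand-ins for Broca's reasoning, not clinical data. Induction generalizes a rule from observed cases; deduction then applies that rule to predict a specific new case.

```python
# Invented, simplified cases: (damage to Broca's area, speech disturbance)
observed_cases = [(True, True), (True, True), (True, True)]

# Inductive step: generalize from the specific cases to a rule (a theory).
# The generalization is only as strong as the evidence it rests on.
theory_holds = all(speech for damage, speech in observed_cases if damage)

def predict_speech_disturbance(has_brocas_damage):
    """Deductive step: apply the general rule to a specific new case."""
    return theory_holds and has_brocas_damage

# Prediction for a new, hypothetical patient ("Bill") with Broca's area damage:
print(predict_speech_disturbance(True))  # True: the theory predicts disturbance
```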

inductive reasoning A type of reasoning in which one often argues from a specific case to a general principle, such as a theory or hypothesis; a generalization from bits of evidence. deductive reasoning A type of formal reasoning in which one often argues from a general theory or principle to a specific case.

