Chapter 12

The Recognition Heuristic

We have emphasized that decision-making heuristics are generally helpful and accurate. However, most of the examples so far have shown how judgment accuracy can be hindered by factors such as recency and familiarity. Let's now discuss a special case of the availability heuristic that usually leads to an accurate decision (Goldstein & Gigerenzer, 2002; Kahneman, 2011; Volz et al., 2006). Suppose that someone asks you which of two Italian cities has the larger population, Milan or Modena. Most U.S. students have heard of Milan, but they may not recognize the name of the nearby city of Modena. The recognition heuristic typically operates when you must compare the relative frequency of two categories: if you recognize one category but not the other, you conclude that the recognized category has the higher frequency. In this case, you would correctly respond that Milan has the greater population (Volz et al., 2006). Keep this example of correct decision making in mind as you read the remainder of this chapter.

Reasons for Overconfidence

We have seen many examples demonstrating that people tend to be overconfident about the correctness of their decisions. This overconfidence arises from errors at many different stages of the decision-making process:

1. People are often unaware that their knowledge is based on very tenuous, uncertain assumptions and on information from unreliable or inappropriate sources (Bishop & Trout, 2002; Johnson, 2004).
2. Examples that confirm our hypotheses are readily available, but we resist searching for counterexamples (Hardman, 2009; Lilienfeld et al., 2009; Mercier & Sperber, 2011). You'll recall from the discussion of deductive reasoning that people also persist in confirming their current hypothesis, rather than looking for negative evidence.
3. People have difficulty recalling the other possible hypotheses, and decision making depends on memory (Theme 4). If you cannot recall the competing hypotheses, you will be overly confident about the hypothesis you have endorsed (Trout, 2002).
4. Even if people manage to recall the other possible hypotheses, they do not treat them seriously. The choice once seemed ambiguous, but the alternatives now seem trivial (Kida, 2006; Simon et al., 2001).
5. Researchers do not educate the public about the overconfidence problem (Lilienfeld et al., 2009). As a result, we typically do not pause—on the brink of making a decision—and ask ourselves, "Am I relying only on Type 1 thinking? I need to switch over to Type 2 thinking!"

When people are overconfident in a risky situation, the outcome can produce disasters, deaths, and widespread destruction. The term my-side bias describes the overconfidence that your own view is correct in a confrontational situation (Stanovich, 2009; Toplak & Stanovich, 2002). Conflict often arises when individuals (or groups or national leaders) each fall victim to my-side bias. People are so confident that their position is correct that they cannot even consider the possibility that their opponent's position may be at least partially correct. If you find yourself in conflict with someone, try to overcome my-side bias. Could some part of the other person's position be worth considering? More generally, try to reduce the overconfidence bias when you face an important decision. Emphasize Type 2 processing, and review the five points listed above. Are you perhaps overconfident that this decision will have a good outcome?

Illusory Correlation and Availability

We have seen that availability is typically a useful heuristic, although it can become "contaminated" by factors such as recency and familiarity, thus leading to inappropriate decisions about the true frequency of an event. Here, we examine how the availability heuristic can contribute to another cognitive error called an illusory correlation. The word illusory means deceptive or unreal, and a correlation is a statistical relationship between two variables. Therefore, an illusory correlation occurs when people believe that two variables are statistically related, even though there is no actual evidence for this relationship. According to the research, we often believe that a certain group of people tends to have certain kinds of characteristics, even though an accurate tabulation would show that the relationship is not statistically significant (Fiedler & Walther, 2004; Hamilton et al., 1993; Risen et al., 2007).

Think of some stereotypes that arise from illusory correlations. These illusory correlations may either have no basis in fact or much less basis than is commonly believed. For example, consider the following illusory correlations: (1) females have poor math skills, and (2) people on welfare are cheaters. According to the social cognition approach, stereotypes can be traced to our normal cognitive processes. In the case of illusory correlations, an important cognitive factor is the availability heuristic (Reber, 2004; Risen et al., 2007). Chapman and Chapman (1969) performed a classic investigation of the illusory correlation. Their data showed that students formed an illusory correlation between people's reported sexual orientation and their responses on an inkblot test.

Let's see how the availability heuristic might help to explain illusory correlations. When we try to figure out whether two variables are related to each other, we should consider the data about four categories in a 2 × 2 matrix. For example, suppose that we want to determine whether people who are lesbians or gay males are more likely than heterosexuals to have psychological problems.* Imagine, for example, that researchers gathered the data in Table 12.2. These data show that six out of 60 gay people (or 10%) have psychological problems, and eight out of 80 straight people (also 10%) have psychological problems. We should therefore conclude that sexual orientation is not related to psychological problems. Unfortunately, however, people typically pay the most attention to only one cell in the matrix, especially if the two descriptive characteristics are statistically less frequent (Risen et al., 2007). In this example, some people notice only the six gay people who have psychological problems, ignoring the important information in the other three cells. People with an established bias against gay people might be especially likely to pay attention to this cell. Furthermore, they may continue to look for information that confirms their hypothesis that gay people have problems. You'll recall from the earlier discussion of conditional reasoning that people would rather try to confirm a hypothesis than try to disprove it, consistent with Theme 3.

Try applying the information about illusory correlations to some stereotype that you hold. Notice whether you tend to focus on only one cell in the matrix, ignoring the other three. Have you specifically tried to disconfirm the stereotypes? Also, notice how politicians and the media often base their arguments on illusory correlations (Myers, 2002). For example, they may focus on the number of welfare recipients with fraudulent claims. This number is meaningless unless we know additional information, such as the number of welfare recipients without fraudulent claims.
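The point about the 2 × 2 matrix can be made concrete with a few lines of arithmetic. The sketch below is only an illustration, not part of any study; it uses the hypothetical counts described for Table 12.2 (six of 60 gay people and eight of 80 straight people with psychological problems, the remaining cells filled in by subtraction) and shows that the rate is identical in both groups once all four cells are considered.

```python
# Hypothetical counts from the Table 12.2 example: rows are sexual orientation,
# columns are "has psychological problems" vs. "does not".
counts = {
    "gay":      {"problems": 6, "no_problems": 54},   # 60 people total
    "straight": {"problems": 8, "no_problems": 72},   # 80 people total
}

for group, cells in counts.items():
    total = cells["problems"] + cells["no_problems"]
    rate = cells["problems"] / total
    print(f"{group}: {cells['problems']}/{total} = {rate:.0%} with problems")

# Both rates are 10%, so the two variables are unrelated. An illusory
# correlation arises when we attend only to the single most salient cell
# (the 6 gay people with problems) and ignore the other three cells.
```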

Tversky and Kahneman (1981) chose the name prospect theory to refer to people's tendencies to think that possible gains are different from possible losses. Specifically:

1. When dealing with possible gains (e.g., lives saved), people tend to avoid risks.
2. When dealing with possible losses (e.g., lives lost), people tend to seek risks.

Numerous studies have replicated the general framing effect, and the effect is typically strong (Kahneman, 2011; LeBoeuf & Shafir, 2012; Stanovich, 1999). Furthermore, the framing effect is common among statistically sophisticated people as well as statistically naive people, and the magnitude of the effect is relatively large. In addition, Mayhorn and his colleagues (2002) found framing effects with both students in their 20s and with older adults. The research on framing suggests some practical advice: When you are making an important decision, try rewording the description of this decision. For example, suppose that you need to decide whether to accept a particular job offer. Ask yourself how you would feel about having this job, and then ask yourself how you would feel about not having this job. This kind of Type 2 processing can help you make wiser decisions (Kahneman, 2011).

Availability Heuristic

A second important heuristic that people use in making decisions is availability. You use the availability heuristic when you estimate frequency or probability in terms of how easy it is to think of relevant examples of something (Hertwig et al., 2005; Kahneman, 2011; Tversky & Kahneman, 1973). In other words, people judge frequency by assessing whether they can easily retrieve relevant examples from memory or whether this memory retrieval is difficult.

The availability heuristic is generally helpful in everyday life. For example, suppose that someone asked you whether your college had more students from Illinois or more from Idaho. You haven't memorized these geography statistics, so you would be likely to answer the question in terms of the relative availability of examples of Illinois students and Idaho students. Let's say that your memory has stored the names of dozens of Illinois students, and so you can easily retrieve their names ("Jessica, Akiko, Bob. . ."). Let's also say that your memory has stored only one name of an Idaho student, so you cannot think of many examples of this category. Because examples of Illinois students were relatively easy to retrieve, you conclude that your college has more Illinois students. In general, then, the availability heuristic can be a relatively accurate method for making decisions about frequency (Kahneman, 2011). A heuristic is a general strategy that is typically accurate. The availability heuristic is accurate as long as availability is correlated with true, objective frequency—and it usually is.

However, the availability heuristic can lead to errors (Levy, 2010; Thaler & Sunstein, 2008). As we will see in a moment, several factors can influence memory retrieval, even though they are not correlated with true, objective frequency. These factors can bias availability, and so they may decrease the accuracy of our decisions. We will see that recency and familiarity—both factors that influence memory—can potentially distort availability. Figure 12.2 illustrates how these two factors can contaminate the relationship between true frequency and availability.

Before exploring the research about availability, let's briefly review how representativeness—the first decision-making heuristic—differs from availability. When we use the representativeness heuristic, we are given a specific example (such as T H H T H T or Linda the bank teller). We then make judgments about whether the specific example is similar to the general category that it is supposed to represent (such as coin tosses or philosophy majors concerned about social justice). In contrast, when we use the availability heuristic, we are given a general category, and we must recall the specific examples (such as examples of Illinois students). We then make decisions based on whether the specific examples come easily to mind. So here is a way to remember the two heuristics:

1. If the problem is based on a judgment about similarity, you are dealing with the representativeness heuristic.
2. If the problem requires you to remember examples, you are dealing with the availability heuristic.

General Studies on Overconfidence

A variety of studies show that humans are overconfident in many decision-making situations. For example, people are overconfident about how long a person with a fatal disease will live, which firms will go bankrupt, and whether the defendant is guilty in a court trial (Kahneman & Tversky, 1995). People typically have more confidence in their own decisions than in predictions that are based on statistically objective measurements. In addition, people tend to overestimate their own social skills, creativity, leadership abilities, and a wide range of academic skills (Kahneman & Renshon, 2007; Matlin, 2004; Matlin & Stang, 1978; Moore & Healy, 2008). In addition, physicists, economists, and other researchers are overconfident that their theories are correct (Trout, 2002).

We need to emphasize, however, that individuals differ widely with respect to overconfidence (Oreg & Bayazit, 2009; Steel, 2007). For example, a large-scale study showed that 77% of the student participants were overconfident about their accuracy in answering general-knowledge questions such as those in Demonstration 12.6. Still, these results tell us that 23% were either on target or underconfident (Stanovich, 1999). Furthermore, people from different countries may differ with respect to their confidence (Weber & Morris, 2010). For example, a cross-cultural study in three countries reported that Chinese residents showed the greatest overconfidence, and U.S. residents were intermediate. However, the least confident group was Japanese residents, who also took the longest to make their decisions (Yates, 2010).

Let's consider two research areas in which overconfidence has been extensively documented. As you'll see, students are usually overconfident that they will complete their academic projects on time, and politicians are often overconfident about the decisions they make.

Overconfidence about Completing Projects on Time

Are you surprised to learn that students are frequently overly optimistic about how quickly they can complete a project? In reality, this overconfidence applies to most people. Even Daniel Kahneman (2011) describes examples of his own failure to complete projects on time. According to the planning fallacy, people typically underestimate the amount of time (or money) required to complete a project; they also estimate that the task will be relatively easy to complete (Buehler et al., 2002; Buehler et al., 2012; Kahneman, 2011; Peetz et al., 2010; Sanna et al., 2009). Notice why this fallacy is related to overconfidence. Suppose that you are overconfident when you make decisions. You will then estimate that your paper for cognitive psychology will take only 10 hours to complete, and that you can easily finish it on time if you start next Tuesday.

Researchers certainly have not discovered a method for eliminating the planning fallacy. However, research suggests several strategies that can help you make more realistic estimates about the amount of time a large project will require:

1. Divide your project into several parts, and estimate how long each part will take. This process will provide a more realistic estimate of the time you will need to complete the project (Forsyth & Burt, 2008).
2. Envision each step in the process of completing your project, such as gathering the materials, organizing the project's basic structure, and so forth. Each day, rehearse these components (Taylor et al., 1998).
3. Try thinking about some person other than yourself, and visualize how long this person took to complete the project; be sure to visualize the potential obstacles in your imagery (Buehler et al., 2012).

The planning fallacy has been replicated in several studies in the United States, Canada, and Japan. How can we explain people's overconfidence that they will complete a task on time? One factor is that people create an optimistic scenario that represents the ideal way in which they will make progress on a project. This scenario fails to consider the large number of problems that can arise (Buehler et al., 2002). People also recall that they completed similar tasks relatively quickly in the past (Roy & Christenfeld, 2007; Roy et al., 2005). In addition, they estimate that they will have more free time in the future, compared to the free time they have right now (Zauberman & Lynch, 2005). In other words, people use the anchoring and adjustment heuristic, and they do not make large enough adjustments to their original scenario, based on other useful information.

Framing Effect

As I was writing this chapter, I took a break to read the mail that had just arrived. I opened an envelope from an organization I support, called "The Feminist Majority." The letter pointed out that in a previous year, right-wing organizations had introduced legislation in 17 state governments that would eliminate affirmative action programs for women and people of color. This figure surprised and saddened me; apparently the anti-affirmative action supporters had more influence than I had imagined! And then I realized that the framing effect might be operating. Perhaps, at that very moment, other people throughout the United States were opening their mail from organizations that endorsed the other perspective. Perhaps their letter pointed out that their organization—and others with a similar viewpoint—had failed to introduce legislation in 33 state governments. Yes, a fairly subtle change in the wording of a sentence can produce a very different emotional reaction! Are political organizations perhaps hiring cognitive psychologists?

The framing effect demonstrates that the outcome of your decision can be influenced by two factors: (1) the background context of the choice and (2) the way in which a question is worded—or framed (LeBoeuf & Shafir, 2012; McGraw et al., 2010). However, before we discuss these two factors, be sure you have tried Demonstration 12.7, which appears below.

Take a moment to read Demonstration 12.7 once more. Notice that the amount of money is $20 in both cases. If decision makers were perfectly "rational," they would respond identically to both problems (Kahneman, 2011; LeBoeuf & Shafir, 2012; Moran & Ritov, 2011). However, the decision frame differs for these two situations, so they seem psychologically different from each other. We frequently organize our mental expense accounts according to topics. Specifically, we view going to a concert as a transaction in which the cost of the ticket is exchanged for the experience of seeing a concert. If you buy another ticket, the cost of seeing that concert has increased to a level that many people find unacceptable. When Kahneman and Tversky (1984) asked people what they would do in the case of Problem 1, only 46% said that they would pay for another ticket. In contrast, in Problem 2, people did not tally the lost $20 bill in the same account as the cost of a ticket. In this second case, people viewed the lost $20 as being generally irrelevant to the ticket. In Kahneman and Tversky's (1984) study, 88% of the participants said that they would purchase the ticket in Problem 2. In other words, the background information provides different frames for the two problems, and the specific frame strongly influences the decision.

Recency and Availability

As you know from Chapters 4-6, your memory is better for items that you've recently seen, compared to items you saw long ago. In other words, those more recent items are more available. As a result, we judge recent items to be more likely than they really are. For example, take yourself back to the fall of 2011. Several university coaches and administrators had been fired following the discovery that young boys had been sexually abused (e.g., Bartlett, 2011; Bazerman & Tenbrunsel, 2011). If you had been asked to estimate the frequency of these crimes—and the cover-ups—you probably would have provided a high estimate. Research on the availability heuristic has important implications for clinical psychology. Consider a study by MacLeod and Campbell (1992), who encouraged one group of people to recall pleasant events from their past. These individuals later judged pleasant events to be more likely in their future. The researchers also encouraged another group to recall unpleasant events. These individuals later judged unpleasant events to be more likely in their future. Psychotherapists might encourage depressed clients to envision a more hopeful future by having them recall and focus on previous pleasant events.

Confirmation Bias

Be sure to try Demonstration 12.2 (above) before you read any further. Peter Wason's (1968) selection task has inspired more psychological research than any other deductive reasoning problem. It has also raised many questions about whether humans are basically rational (Lilienfeld et al., 2009; Mercier & Sperber, 2011; Oswald & Grosjean, 2004). Let's first examine the original version of the selection task and then see how people typically perform better on a more concrete variation of this task.

The Conjunction Fallacy and Representativeness

Be sure to try Demonstration 12.4 before you read further. Now inspect your answers, and compare which of these two choices you ranked more likely: (1) Linda is a bank teller or (2) Linda is a bank teller and is active in the feminist movement. Tversky and Kahneman (1983) presented the "Linda" problem and another similar problem to three groups of people. One was a "statistically naïve" group of undergraduates. The "intermediate-knowledge" group consisted of first-year graduate students who had taken one or more courses in statistics. The "statistically sophisticated" group consisted of doctoral students in a decision science program who had taken several advanced courses in statistics. In each case, the participants were asked to rank all eight statements according to their probability, with the rank of 1 assigned to the most likely statement. Figure 12.1 shows the average rank for each of the three groups for the two critical statements: (1) "Linda is a bank teller" and (2) "Linda is a bank teller and is active in the feminist movement." Notice that the people in all three groups believed—incorrectly—that the second statement would be more likely than the first.

Think for a moment about why this conclusion is mathematically impossible. According to the conjunction rule, the probability of the conjunction of two events cannot be larger than the probability of either of its constituent events (Newell et al., 2007). In the Linda problem, the conjunction of the two events—bank teller and feminist—cannot occur more often than either event by itself. Consider another situation where the conjunction rule operates: The number of murders last year in Detroit cannot be greater than the number of murders last year in Michigan (Kahneman & Frederick, 2005).

As we saw earlier in this section, representativeness is such a powerful heuristic that people may ignore useful statistical information, such as sample size and base rate. Apparently, they also ignore the mathematical implications of the conjunction rule (Kahneman, 2011; Kahneman & Frederick, 2005). Specifically, when most people try the "Linda problem," they commit the conjunction fallacy. When people commit the conjunction fallacy, they judge the probability of the conjunction of two events to be greater than the probability of either constituent event. Tversky and Kahneman (1983) traced the conjunction fallacy to the representativeness heuristic. They argued that people judge the conjunction of "bank teller" and "feminist" to be more likely than the simple event "bank teller." After all, "feminist" is a characteristic that is very representative of (i.e., similar to) someone who is single, outspoken, bright, a philosophy major, concerned about social justice, and an antinuclear activist. A person with these characteristics doesn't seem likely to become a bank teller, but seems instead highly likely to be a feminist. By adding the extra detail of "feminist" to "bank teller," the description seems more representative and also more plausible—even though this description is statistically less likely (Swoyer, 2002).

Psychologists are intrigued with the conjunction fallacy, especially because it demonstrates that people can ignore one of the most basic principles of probability theory. Furthermore, research by Keith Stanovich (2011) shows that college students with high SAT scores are actually more likely than other students to demonstrate this conjunction fallacy.
The results for the conjunction fallacy have been replicated many times, with generally consistent findings (Fisk, 2004; Kahneman & Frederick, 2005; Stanovich, 2009). For example, the probability of "spilling hot coffee" seems greater than the probability of "spilling coffee" (Moldoveanu & Langer, 2002) . . . until you identify the conjunction fallacy.
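Stated formally, the conjunction rule that the Linda problem violates is a basic identity of probability theory (written here as a general formula, not as a result specific to any one study):

```latex
\[
P(A \cap B) \le \min\bigl(P(A),\, P(B)\bigr),
\qquad\text{so}\qquad
P(\text{bank teller} \cap \text{feminist}) \le P(\text{bank teller}).
\]
```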

Overview of Conditional Reasoning

Conditional reasoning situations occur frequently in our daily life. However, these reasoning tasks are surprisingly difficult to solve correctly (Evans, 2004; Johnson-Laird, 2011). Let's examine the formal principles that have been devised for solving these tasks correctly. Table 12.1 illustrates propositional calculus, which is a system for categorizing the four kinds of reasoning used in analyzing propositions or statements.

Let's first introduce some basic terminology. The word antecedent refers to the first proposition or statement; the antecedent is contained in the "if . . ." part of the sentence. The word consequent refers to the proposition that comes second; it is the consequence. The consequent is contained in the "then . . ." part of the sentence. When we work on a conditional reasoning task, we can perform two possible actions: (1) We can affirm part of the sentence, saying that it is true; or (2) we can deny part of the sentence, saying that it is false. By combining the two parts of the sentence with these two actions, we have four conditional reasoning situations. As you can see, two of them are valid, and two of them are invalid:

1. Affirming the antecedent means that you say that the "if . . ." part of the sentence is true. As shown in the upper-left corner of Table 12.1, this kind of reasoning leads to a valid, or correct, conclusion.
2. The fallacy (or error) of affirming the consequent means that you say that the "then . . ." part of the sentence is true. This kind of reasoning leads to an invalid conclusion. Notice the upper-right corner of Table 12.1; the conclusion "This is an apple" is incorrect. After all, the item could be a pear, or a mango, or numerous other kinds of nonapple fruit.
3. The fallacy of denying the antecedent means that you say that the "if . . ." part of the sentence is false. Denying the antecedent also leads to an invalid conclusion, as you can see from the lower-left corner of Table 12.1. Again, the item could be some fruit other than an apple.
4. Denying the consequent means that you say that the "then . . ." part of the sentence is false. In the lower-right corner of Table 12.1, notice that this kind of reasoning leads to a correct conclusion.*
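To see why two of these argument forms are valid and two are fallacies, it can help to check them mechanically against the truth table for "if p then q." The short sketch below is an illustrative aid, not material from the textbook; it enumerates every combination of antecedent and consequent that is consistent with the rule and reports what each form lets you conclude.

```python
from itertools import product

# "If p (antecedent) then q (consequent)" is false only when p is true and q is false.
def conditional(p, q):
    return (not p) or q

# Keep only the worlds in which the rule holds.
worlds = [(p, q) for p, q in product([True, False], repeat=2) if conditional(p, q)]

def conclude(premise, conclusion):
    """An argument form is valid only if the conclusion holds in every
    rule-consistent world that also satisfies the extra premise."""
    relevant = [w for w in worlds if premise(w)]
    return all(conclusion(w) for w in relevant)

print("Affirming the antecedent (p true, infer q):",
      conclude(lambda w: w[0], lambda w: w[1]))          # True  -> valid
print("Affirming the consequent (q true, infer p):",
      conclude(lambda w: w[1], lambda w: w[0]))          # False -> invalid
print("Denying the antecedent (p false, infer not q):",
      conclude(lambda w: not w[0], lambda w: not w[1]))  # False -> invalid
print("Denying the consequent (q false, infer not p):",
      conclude(lambda w: not w[1], lambda w: not w[0]))  # True  -> valid
```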

Decision Making II: Applications of Decision Making Research

Decision making is an interdisciplinary field that includes research in all the social sciences, including psychology, economics, political science, and sociology (LeBoeuf & Shafir, 2012; Mosier & Fischer, 2011). It also includes other areas such as statistics, philosophy, medicine, education, and law (Mosier & Fischer, 2011; Reif, 2008; Schoenfeld, 2011). Within the discipline of psychology, decision making inspires numerous books and articles each year. For example, many books provide a general overview of decision making (e.g., Bazerman & Tenbrunsel, 2011; Bennett & Gibson, 2006; Hallinan, 2009; Herbert, 2010; Holyoak & Morrison, 2012; Kahneman, 2011; Kida, 2006; Lehrer, 2009; Schoenfeld, 2011; Stanovich, 2009, 2011). Other recent books consider decision-making approaches, such as critical thinking (Levy, 2010). And many other books consider decision making in specific areas, such as business (Bazerman & Tenbrunsel, 2011; Belsky & Gilovich, 1999, 2010; Henderson & Hooper, 2006; Mosier & Fischer, 2011; Useem, 2006); politics (Thaler & Sunstein, 2008; Weinberg, 2012); the neurological correlates of decision making (Delgado et al., 2011; Vartanian & Mandel, 2011); healthcare (Groopman, 2007; Mosier & Fischer, 2011); and education (Reif, 2008; Schoenfeld, 2011).

In general, the research on decision making examines concrete, realistic scenarios, rather than the kind of abstract situations used in research on deductive reasoning. Research on decision making can be particularly useful with respect to helping us develop strategies to make better decisions in real-life situations. In this section, we focus more squarely on the applied nature of decision-making research.

The Standard Wason Selection Task

Demonstration 12.2 shows the original version of the selection task. Peter Wason (1968) found that people show a confirmation bias; they would rather try to confirm or support a hypothesis than try to disprove it (Kida, 2006; Krizan & Windschitl, 2007; Levy, 2010). When people try this classic selection task, they typically choose to turn over the E card (Mercier & Sperber, 2011; Oaksford & Chater, 1994). This strategy allows the participants to confirm the hypothesis by the valid method of affirming the antecedent, because this card has a vowel on it. If this E card has an even number on the other side, then the rule is correct. If the number is odd, then the rule is incorrect.

As discussed above, the other valid method in deductive reasoning is to deny the consequent. To accomplish this goal, you must choose to turn over the 7 card. The information about the other side of the 7 card is very valuable. In fact, it is just as valuable as the information about the other side of the E card. Remember that the rule is: "If a card has a vowel on its letter side, then it has an even number on its number side." To deny the consequent in this Wason task, we need to check a card that does not have an even number on its number side. In this case, then, we must check the 7 card. We noted that many people are eager to affirm the antecedent. In contrast, they are reluctant to deny the consequent by searching for counterexamples. This approach would be a smart strategy for rejecting a hypothesis, but people seldom choose it (Lilienfeld et al., 2009; Oaksford & Chater, 1994). Keep in mind that most participants in these selection-task studies are college students, so they should be able to master an abstract task (Evans, 2005).

You may wonder why we did not need to check on the J and the 6. Take a moment to read the rule again. Actually, the rule did not say anything about consonants, such as J. The other side of the J could show an odd number, an even number, or even a Vermeer painting, and we wouldn't care. A review of the literature showed that most people appropriately avoid the J card (Oaksford & Chater, 1994). The rule also does not specify what must appear on the other side of the even numbers, such as 6. However, most people select the 6 card to turn over (Oaksford & Chater, 1994). People often assume that the two parts of the rule can be switched, so that it reads, "If a card has an even number on its number side, then it has a vowel on its letter side." Thus, they make an error by choosing the 6. This preference for confirming a hypothesis—rather than disproving it—corresponds to Theme 3 of this book. On the Wason selection task, we see that people who are given a choice would rather know what something is than what it is not.
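A brute-force check makes it clear why only the E and the 7 are informative. The sketch below is a minimal illustration, not part of Wason's procedure; it assumes each visible face could hide any letter or digit on its reverse and asks which cards could possibly falsify the rule "if a card has a vowel on one side, then it has an even number on the other."

```python
# Visible faces of the four cards in the standard Wason selection task.
visible = ["E", "J", "6", "7"]

VOWELS = set("AEIOU")

def is_vowel(face):
    return face in VOWELS

def is_even_digit(face):
    return face.isdigit() and int(face) % 2 == 0

def could_falsify(face):
    """A card can falsify "if vowel then even" only if its hidden side
    might complete a vowel/odd-number pairing."""
    if is_vowel(face):
        return True          # hidden side might be an odd number
    if face.isdigit() and not is_even_digit(face):
        return True          # hidden side might be a vowel
    return False             # consonants and even numbers can never break the rule

for face in visible:
    print(face, "worth turning over:", could_falsify(face))
# Only E and 7 can falsify the rule, so only those two cards are worth turning over.
```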

Research on the Anchoring and Adjustment Heuristic

Demonstration 12.5 illustrates the anchoring and adjustment heuristic. In a classic study, high school students were asked to estimate the answers to these two multiplication problems (Tversky & Kahneman, 1982). The students were allowed only five seconds to respond. The results showed that the two problems generated widely different answers. If the first number in the sequence was 8, a relatively large number, the median of their estimates was 2,250 (i.e., half the students estimated higher than 2,250, and half estimated lower). In contrast, if the first number was 1, a small number, their median estimate was only 512. Furthermore, both groups anchored too heavily on the initial impression that every number in the problem was only a single digit, because both estimates were far too low. The correct answer for both problems is 40,320. Did the anchoring and adjustment heuristic influence the people you tested?

The anchoring and adjustment heuristic is so powerful that it operates even when the anchor is obviously arbitrary or impossibly extreme, such as a person living to the age of 140. It also operates for both novices and experts (Herbert, 2010; Kahneman, 2011; Mussweiler et al., 2004; Tversky & Kahneman, 1974). Researchers have not developed precise explanations for the anchoring and adjustment heuristic. However, one likely mechanism is that the anchor restricts the search for relevant information in memory. Specifically, people concentrate their search on information relatively close to the anchor, even if this anchor is not a realistic number (Kahneman, 2011; Pohl et al., 2003).

The anchoring and adjustment heuristic has many applications in everyday life (Janiszewski, 2011; Mussweiler et al., 2004; Newell et al., 2007). For example, Englich and Mussweiler (2001) studied anchoring effects in courtroom sentencing. Trial judges with an average of 15 years of experience listened to a typical legal case. The role of the prosecutor was played by a person who was introduced as a computer science student. This student was obviously a novice in terms of legal experience, so the judges had little reason to take his recommendation seriously. However, when the "prosecutor" demanded a sentence of 12 months, these experienced judges recommended 28 months. In contrast, when the "prosecutor" demanded a sentence of 34 months, the judges recommended a sentence of 36 months.
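The arithmetic behind the classic multiplication problem is easy to verify. The snippet below is just a check of the numbers quoted above, assuming the two problems are the descending and ascending products 8 × 7 × . . . × 1 and 1 × 2 × . . . × 8, as in the classic version of this demonstration.

```python
import math

descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = list(reversed(descending))

# Both orderings multiply out to the same value: 8! = 40,320.
print(math.prod(descending), math.prod(ascending), math.factorial(8))  # 40320 40320 40320

# The order of the factors changes the anchor people latch onto,
# but not the answer: both median estimates (2,250 and 512) fall far short.
```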

Explanations for the Hindsight Bias

Despite all the research, the explanations for the hindsight bias are not clear (Hardt et al., 2010; Pohl, 2004). However, one likely cognitive explanation is that people might use anchoring and adjustment. After all, they have been told that a particular outcome actually did happen—that it was 100% certain. Therefore, they use this 100% value as the anchor in estimating the likelihood that they would have predicted the answer, and then they do not adjust their certainty downward as much as they should. We also noted in discussing Carli's (1999) study that people may misremember past events, so that those events are consistent with current information. These events help to justify the outcome. Did the results of Carli's study about the tragic ending versus the upbeat story ending surprise anyone? Of course not. . .we knew it all along.

Overconfidence in Political Decision Making

Even powerful politicians can make unwise personal decisions, as we have recently seen with elected officials in the United States. Let's consider the decisions that politicians make about international policy—decisions that can affect thousands of people. Unfortunately, political leaders seldom think systematically about the risks involved in important decisions. For instance, they often fail to consider the risks involved when they (1) invade another country, (2) continue a war that they cannot win, and (3) leave the other country in a worse situation following the war. In an international conflict, each side tends to overestimate its own chances of success (Johnson, 2004; Kahneman & Renshon, 2007; Kahneman & Tversky, 1995).

When politicians need to make a decision, they are also overconfident that their data are accurate (Moore & Healy, 2008). For example, the United States went to war with Iraq because our political leaders were overconfident that Iraq possessed weapons of mass destruction. For instance, Vice President Dick Cheney had stated on August 26, 2002, "There is no doubt that Saddam Hussein now has weapons of mass destruction." President George W. Bush had declared on March 17, 2003, "Intelligence gathered by this and other governments leaves no doubt that the Iraq regime continues to possess and conceal some of the most lethal weapons ever devised." It then became progressively more clear, however, that crucial information had been a forgery, and that these weapons did not exist (Tavris & Aronson, 2007).

Researchers have created methods for reducing overconfidence about decisions. For example, the crystal-ball technique asks decision makers to imagine that a completely accurate crystal ball has determined that their favored hypothesis is actually incorrect; the decision makers must therefore search for alternative explanations for the outcome (Cannon-Bowers & Salas, 1998; Paris et al., 2000). They must also find reasonable evidence to support these alternative explanations. If the Bush administration had used the crystal-ball technique, for example, they would have been instructed to describe several reasons why Saddam Hussein could not have weapons of mass destruction. Unfortunately, political leaders apparently do not use de-biasing techniques to make important political decisions. As Griffin and Tversky (2002) point out: "It can be argued that people's willingness to engage in military, legal, and other costly battles would be reduced if they had a more realistic assessment of their chances of success. We doubt that the benefits of overconfidence outweigh its costs" (p. 249).

Representativeness Heuristic

Here's a remarkable coincidence: Three early U.S. presidents—Adams, Jefferson, and Monroe—all died on the Fourth of July, although in different years (Myers, 2002). This information doesn't seem correct, because the dates should be randomly scattered throughout the 365 days of the year. You've probably discovered some personal coincidences in your own life. For example, one afternoon, I was searching for some resources on political decision making, and I found two relevant books. While recording the citations, I noticed an amazing coincidence: One was published by Stanford University Press, and the other by the University of Michigan Press. As it happened, I had earned my bachelor's degree from Stanford and my PhD from the University of Michigan.

Now consider this example. Suppose that you have a regular penny with one head (H) and one tail (T), and you toss it six times. Which outcome seems most likely, T H H T H T or H H H T T T? Most people choose T H H T H T (Teigen, 2004). After all, you know that coin tossing should produce heads and tails in random order, and the order T H H T H T looks much more random. A sample looks representative if it is similar in important characteristics to the population from which it was selected. For instance, if a sample was selected by a random process, then that sample must look random in order for people to say that it looks representative. Thus, T H H T H T is a sample that looks representative because it has an equal number of heads and tails (which would be the case in random coin tosses). Furthermore, T H H T H T looks more representative because the order of the Ts and Hs looks random rather than orderly. The research shows that we often use the representativeness heuristic; we judge that a sample is likely if it is similar to the population from which this sample was selected (Kahneman, 2011; Kahneman & Tversky, 1972; Levy, 2010).

According to the representativeness heuristic, we believe that random-looking outcomes are more likely than orderly outcomes. Suppose, for example, that a cashier adds up your grocery bill, and the total is $21.97. This very random-looking outcome is a representative kind of answer, and so it looks "normal." However, suppose that the total bill is $22.22. This total does not look random, and you might even decide to check the arithmetic. After all, addition is a process that should yield a random-looking outcome. In reality, though, a random process occasionally produces an outcome that looks nonrandom. In fact, chance alone can produce an orderly sum like $22.22, just as chance alone can produce an orderly pattern like the three presidents dying on the Fourth of July.

The representativeness heuristic raises a major problem: This heuristic is so persuasive that people often ignore important statistical information that they should consider (Kahneman, 2011; Newell et al., 2007; Thaler & Sunstein, 2008). As we will see, two especially useful statistics are the sample size and the base rate. In addition, people have trouble thinking about the probability of two combined characteristics.
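A quick calculation clarifies why the "random-looking" sequence is not actually more probable. For a fair coin, every specific sequence of six tosses has exactly the same probability:

```latex
\[
P(\text{T H H T H T}) \;=\; P(\text{H H H T T T}) \;=\; \left(\tfrac{1}{2}\right)^{6} \;=\; \tfrac{1}{64}.
\]
```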

Now test yourself on the four kinds of conditional reasoning tasks by trying Demonstration 12.1.

Let's now reconsider the "affirming the consequent" task in more detail, because this task causes the largest number of errors (Byrne & Johnson-Laird, 2009). It's easy to see why people are tempted to affirm the consequent. In real life, we are likely to be correct when we make this kind of conclusion (Evans, 2000). For example, consider the two propositions, "If a person is a talented singer, then he or she has musical abilities" and "Paula has musical abilities." In reality, it's often a good bet that Paula is a talented singer. However, in logical reasoning, we cannot rely on statements such as, "It's a good bet that . . ." For example, I remember a student whose musical skills as a violinist were exceptional, yet she sang off-key. As Theme 2 emphasizes, many cognitive errors can be traced to a heuristic, a general strategy that usually works well. In this example of logical reasoning, however, "it's a good bet" is not the same as "always" (Leighton & Sternberg, 2003). In the second part of this chapter, you'll see that decision-making tasks actually do allow us to use the concept "it's a good bet." However, propositional reasoning tasks require us to use the word "always" before we conclude that the conclusion is valid.

Still, many people do manage to solve these reasoning tasks correctly. How do they succeed? When contemporary psychologists study reasoning and decision making, they may adopt a dual-process theory, which distinguishes between two types of cognitive processing (De Neys & Goel, 2011; Evans, 2006, 2012; Kahneman, 2011; Stanovich, 2009, 2011). In general, Type 1 processing is fast and automatic; it requires little conscious attention. For example, we use Type 1 processing during depth perception, recognition of facial expressions, and automatic stereotyping. In contrast, Type 2 processing is relatively slow and controlled. It requires focused attention, and it is typically more accurate. For example, we use Type 2 processing when we think of exceptions to a general rule, when we realize that we made a stereotyped response, and when we acknowledge that our Type 1 response may have been incorrect. With respect to conditional reasoning, people may initially use Type 1 processing, which is quick and generally correct. However, they sometimes pause and then shift to Type 2 processing, which requires a more effortful analytic approach. This approach requires focused attention and working memory so that people can realize that their initial conclusion would not necessarily be correct (De Neys & Goel, 2011; Evans, 2004, 2006; Kahneman, 2011; Stanovich, 2009, 2011).

Our performance on reasoning tasks is a good example of Theme 4, which emphasizes that our cognitive processes are interrelated. For example, conditional reasoning relies upon working memory, especially the central-executive component of working memory that we discussed in Chapter 4 (Evans, 2006; Gilhooly, 2005; Reverberi et al., 2009). Reasoning also requires general knowledge and language skills (Rips, 2002; Schaeken et al., 2000; Wilhelm, 2005). In addition, it often uses mental imagery (Evans, 2002; Goodwin & Johnson-Laird, 2005). Let's examine these two topics, and then consider two cognitive tendencies that people demonstrate on these conditional reasoning tasks.

Further Perspectives

How can we translate the confirmation bias into real-life experiences? Try noticing your own behavior when you are searching for evidence. Do you consistently look for information that will confirm that you are right, or do you valiantly pursue ways in which your conclusion can be wrong? The confirmation bias might sound relatively harmless. However, thousands of people die each year because our political leaders fall victim to this confirmation bias (Kida, 2006). For example, suppose that Country A wants to start a war in Country B. The leaders in Country A will keep seeking support for their position. These leaders will also avoid seeking information that their position may not be correct. Here's a remedy for the confirmation bias: Try to explain why another person might hold the opposite view (Lilienfeld et al., 2009; Myers, 2002). In an ideal world, for example, the leaders of Country A should sincerely try to construct arguments against attacking Country B.

This overview of conditional reasoning does not provide much evidence for Theme 2 of this book. At least in the psychology laboratory, people are not especially accurate when they try to solve "if . . . then . . ." kinds of problems. However, the circumstances are usually more favorable in our daily lives, where problems are more concrete and situations are more consistent with our belief biases (Mercier & Sperber, 2011). Deductive reasoning is such a challenging task that we are not as efficient and accurate as we are in perception and memory—two areas in which humans are generally very competent.

In everyday life, it's a good bet that this conclusion is incorrect; how could a feather possibly break a window?

However, in the world of logic, this feather-window task actually affirms the antecedent, so it must be correct. Similarly, your common sense may have encouraged you to decide that the conclusion was valid for the syllogism about the psychology majors who are concerned about poverty. The belief-bias effect occurs in reasoning when people make judgments based on prior beliefs and general knowledge, rather than on the rules of logic. In general, people make errors when the logic of a reasoning problem conflicts with their background knowledge (Dube et al., 2010, 2011; Levy, 2010; Markovits et al., 2009; Stanovich, 2011).

The belief-bias effect is one more example of top-down processing (Theme 5). Our prior expectations help us to organize our experiences and understand the world. For example, when we see a conclusion in a reasoning task that looks correct in the "real world," we may not pay attention to the reasoning process that generated this conclusion (Stanovich, 2003). As a result, we may accept a conclusion that the rules of logic do not actually support.

People vary widely in their susceptibility to the belief-bias effect. For example, people with low scores on an intelligence test are especially likely to demonstrate the belief-bias effect (Macpherson & Stanovich, 2007). People are also likely to demonstrate the belief-bias effect if they have low scores on a test of flexible thinking (Stanovich, 1999; Stanovich & West, 1997, 1998). An inflexible person is likely to agree with statements such as, "No one can talk me out of something I know is right." In contrast, people who are flexible thinkers agree with statements such as, "People should always take into consideration any evidence that goes against their beliefs." These people are more likely to solve the reasoning problems correctly, without being distracted by the belief-bias effect. In fact, these people actively block their everyday knowledge, such as their knowledge that a feather could not break a window (Markovits et al., 2009). In general, they also tend to carefully inspect a reasoning problem, trying to determine whether the logic is faulty (Macpherson & Stanovich, 2007; Markovits et al., 2009). Fortunately, when students have been taught about the belief-bias effect, they make fewer errors (Kruglanski & Gigerenzer, 2011).

The Wording of a Question and the Framing Effect

In Chapter 11, we saw that people often fail to realize that two problems may share the same deep structure, for instance in algebra problems. In other words, people are distracted by the differences in the surface structure of the problems. When people make decisions, they are also distracted by differences in surface structure. For example, people who conduct surveys have found that the exact wording of a question can have a major effect on the answers that respondents provide (Bruine de Bruin, 2011).

Tversky and Kahneman (1981) tested college students in both Canada and the United States, using Problem 1 in Demonstration 12.8. Notice that both choices emphasize the number of lives that would be saved. They found that 72% of their participants chose Program A, and only 28% chose Program B. Notice that the participants in this group were "risk averse." That is, they preferred the certainty of saving 200 lives, rather than the risky prospect of a one-in-three possibility of saving 600 lives. Notice, however, that the benefits of Programs A and B in Problem 1 are statistically identical.

Now inspect your answer to Problem 2, in which both choices emphasize the number of lives that would be lost (i.e., the number of deaths). Tversky and Kahneman (1981) presented this problem to a different group of students from the same colleges that they had tested with Problem 1. Only 22% favored Program C, but 78% favored Program D. Here the participants were "risk taking"; they preferred the two-in-three chance that 600 would die, rather than the guaranteed death of 400 people. Again, however, the outcomes of the two programs are statistically equal. Furthermore, notice that Problem 1 and Problem 2 have identical deep structures. The only difference is that the outcomes are described in Problem 1 in terms of the lives saved, but in Problem 2 in terms of the lives lost.

The way that a question is framed—lives saved or lives lost—has an important effect on people's decisions (Hardman, 2009; Moran & Ritov, 2011; Stanovich, 2009). This framing changes people from focusing on the possible gains (lives saved) to focusing on the possible losses (lives lost). In the case of Problem 1, we tend to prefer the certainty of having 200 lives saved, so we avoid the option where it's possible that no lives will be saved. In the case of Problem 2, however, we tend to prefer the risk that nobody will die (even though there is a good chance that 600 will die); we avoid the option where 400 face certain death.
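To see why the two versions of the problem are statistically identical, it helps to write out the expected outcomes using the figures quoted above (200 of 600 lives saved for certain versus a one-in-three chance of saving all 600, and the mirror-image description in terms of deaths):

```latex
\[
\begin{aligned}
&\text{Program A: } 200 \text{ lives saved for certain}
&&\text{Program B: } \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200 \text{ lives saved on average}\\
&\text{Program C: } 400 \text{ deaths for certain}
&&\text{Program D: } \tfrac{1}{3}(0) + \tfrac{2}{3}(600) = 400 \text{ deaths on average}
\end{aligned}
\]
```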

Decision Making I: Overview of Heuristics

In decision making, you must assess the information and choose among two or more alternatives. Compared to deductive reasoning, the area of decision making is much more ambiguous. Some information may be missing or contradictory. In addition, we do not have clear-cut rules that tell us how to proceed from the information to the conclusions. Also, you may never know whether your decision was correct, the consequences of that decision may not be immediately apparent, and you may need to take additional factors into account (Johnson-Laird et al., 2004; Simon et al., 2001). In real life, the uncertainty of decision making is more common than the certainty of deductive reasoning. However, people have difficulty with both kinds of tasks, and they do not always reach the appropriate conclusions (Goodwin & Johnson-Laird, 2005; Stanovich, 2009, 2011).

When you engage in reasoning, you use the established rules of propositional calculus to draw clear-cut conclusions. In contrast, when you make a decision, there is no comparable list of rules. Furthermore, you may never even know whether your decision is correct. Some critical information may be missing, and you may suspect that other information is not accurate. Should you apply to graduate school or get a job after college? Should you take social psychology in the morning or in the afternoon? In addition, emotional factors frequently influence our everyday decision making (Kahneman, 2011; Lehrer, 2009; Stanovich, 2009, 2011).

As you'll see, this section emphasizes several kinds of decision-making heuristics. Heuristics are general strategies that typically produce a correct solution. When we need to make a decision, we often use a heuristic that is simple, fast, and easy to access (Bazerman & Tenbrunsel, 2011; Kahneman, 2011; Kahneman & Frederick, 2005; Stanovich, 2009, 2011). These heuristics reduce the difficulty of making a decision (Shah & Oppenheimer, 2008). In many cases, however, humans fail to appreciate the limitations of these heuristics. When we use this fast, Type 1 processing, we can make inappropriate decisions. However, if we pause and shift to slow, Type 2 processing, we can correct that original error and end up with a good decision.

Throughout this section, you will often see the names of two researchers, Daniel Kahneman and Amos Tversky. Kahneman won the Nobel Prize in Economics in 2002 for his research on decision making. Kahneman and Tversky proposed that a small number of heuristics guide human decision making. As they emphasized, the same strategies that normally guide us toward the correct decision may sometimes lead us astray (Kahneman, 2011; Kahneman & Frederick, 2002, 2005; Kahneman & Tversky, 1996). Notice that this heuristics approach is consistent with Theme 2 of this book: Our cognitive processes are usually efficient and accurate, and our mistakes can often be traced to a rational strategy.

In this part of the chapter, we discuss many studies that illustrate errors in decision making. These errors should not lead us to conclude that humans are foolish creatures. Instead, people's decision-making heuristics are well adapted to handle a wide range of problems (Kahneman, 2011; Kahneman & Frederick, 2005; Kahneman & Tversky, 1996). However, these same heuristics become a liability when they are applied too broadly—for example, when we emphasize heuristics rather than other important information. We now explore three classic decision-making heuristics: representativeness, availability, and anchoring and adjustment. We conclude this section by considering the current status of heuristics in decision-making research.

Deductive Reasoning

In deductive reasoning, you begin with some specific premises that are generally true, and you need to judge whether those premises allow you to draw a particular conclusion, based on the principles of logic (Halpern, 2003; Johnson-Laird, 2005a; Levy, 2010). A deductive-reasoning task provides you with all the information you need to draw a conclusion. Furthermore, the premises are either true or false, and you must use the rules of formal logic in order to draw conclusions (Levy, 2010; Roberts & Newton, 2005; Wilhelm, 2005).

One of the most common kinds of deductive-reasoning tasks is called conditional reasoning. A conditional reasoning task (also called a propositional reasoning task) describes the relationship between conditions. Here's a typical conditional reasoning task:

If a child is allergic to peanuts, then eating peanuts produces a breathing problem.
A child has a breathing problem.
Therefore, this child has eaten peanuts.

Notice that this task tells us about the relationship between two conditions, such as the relationship between eating peanuts and a breathing problem. The kind of conditional reasoning we consider in this chapter explores reasoning tasks that have an "if . . . then . . ." structure. When researchers study conditional reasoning, people judge whether the conclusion is valid or invalid. In the example above, the conclusion "Therefore, this child has eaten peanuts" is not valid, because some other substance or medical condition could have caused the problem.

Another common kind of deductive-reasoning task is called a syllogism. A syllogism consists of two statements that we must assume to be true, plus a conclusion. Syllogisms refer to quantities, so they use the words all, none, some, and other similar terms. Here's a typical syllogism:

Some psychology majors are friendly people.
Some friendly people are concerned about poverty.
Therefore, some psychology majors are concerned about poverty.

In a syllogism, you must judge whether the conclusion is valid, invalid, or indeterminate. In this example, the answer is indeterminate. In fact, those psychology majors who are friendly people and those friendly people who are concerned about poverty could really be two separate populations, with no overlap whatsoever. Notice that your everyday experience tempts you to conclude, "Yes, the conclusion is valid." After all, you know many psychology majors who are concerned about poverty. Many people would automatically respond, "valid conclusion." In contrast, with a little more explicit thinking, you'll reexamine that syllogism and realize that the strict rules of deductive reasoning require you to respond, "The conclusion is indeterminate" (Stanovich, 2009, 2011; Tsujii & Watanabe, 2009).

In a college course in logic, you could spend an entire semester learning about the structure and solution of deductive-reasoning tasks such as these. However, we emphasize the cognitive factors that influence deductive reasoning. Furthermore, we limit ourselves to conditional reasoning, a kind of deductive reasoning that students typically find more approachable than syllogisms (Schmidt & Thompson, 2008). As it happens, researchers have found that conditional reasoning tasks and syllogisms are influenced by similar cognitive factors (Mercier & Sperber, 2011; Schmidt & Thompson, 2008; Stanovich, 2011). In addition, people's performance on conditional reasoning tasks is correlated with their performance on syllogism tasks (Stanovich & West, 2000). In the remainder of this section, we first explore four basic kinds of conditional reasoning tasks before turning to a discussion of factors that cause difficulty in reasoning. We then conclude this section with a discussion of two cognitive errors that people often make when they solve these reasoning tasks.
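To see why the syllogism above is indeterminate, it helps to search for a counterexample: a situation in which both premises are true but the conclusion is false. The short Python sketch below builds one such toy world (the names and group memberships are invented purely for illustration) and checks each statement directly.

```python
# A toy "world" in which we can test the syllogism:
#   Some psychology majors are friendly people.
#   Some friendly people are concerned about poverty.
#   Therefore, some psychology majors are concerned about poverty.
# The individuals and their group memberships are hypothetical.

psych_majors = {"Ana", "Ben"}
friendly = {"Ben", "Carla"}          # Ben is a friendly psychology major
poverty_concerned = {"Carla"}        # Carla is friendly and concerned about poverty

def some(a, b):
    """'Some A are B' is true when the two sets share at least one member."""
    return len(a & b) > 0

premise_1 = some(psych_majors, friendly)            # True
premise_2 = some(friendly, poverty_concerned)       # True
conclusion = some(psych_majors, poverty_concerned)  # False in this world

print(premise_1, premise_2, conclusion)  # True True False -> conclusion does not follow
```

Because both premises can be true while the conclusion is false, logic labels the conclusion indeterminate, even though everyday experience makes it feel valid.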

Concrete Versions of the Wason Selection Task

In most of the recent research on the Wason selection task, psychologists focus on versions in which the numbers and letters on the cards are replaced by concrete situations that we encounter in our everyday lives. As you might guess, people perform much better when the task is concrete, familiar, and realistic (Evans, 2011; Mercier & Sperber, 2011). For example, Griggs and Cox (1982) tested college students in Florida using a variation of the selection task. This task focused on the drinking age, which was then 19 in the state of Florida. Specifically, the students were asked to test this rule: "If a person is drinking beer, then the person must be over 19 years of age" (p. 415). Each participant was instructed to choose two cards to turn over—out of four—in order to test whether people were lying about their age. Griggs and Cox (1982) found that 73% of the students who tried the drinking-age problem made the correct selections, in contrast to 0% of the students who tried the standard, abstract form of the selection task. According to later research, people are especially likely to choose the correct answer when the wording of the selection task implies some kind of social contract designed to prevent people from cheating (Barrett & Kurzban, 2006; Cosmides & Tooby, 2006).
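One way to see why only certain cards are informative is to check, for each card, whether turning it over could possibly falsify the rule "If a person is drinking beer, then the person must be over 19." The Python sketch below encodes four cards of the kind used in such tasks (the exact visible labels are assumptions for illustration) and flags the ones whose hidden side could reveal a violation.

```python
# Rule: "If a person is drinking beer, then the person must be over 19."
# Each card shows the drink on one side and the age on the other.
# The visible sides below are assumed for illustration.
cards = ["beer", "cola", "16 years old", "22 years old"]

def could_falsify(visible):
    """A card is worth turning over only if its hidden side could violate the rule."""
    if visible == "beer":
        return True          # hidden age might be under 19 -> possible violation
    if visible == "cola":
        return False         # rule says nothing about non-beer drinkers
    if visible.endswith("years old"):
        age = int(visible.split()[0])
        return age < 19      # an underage person might have beer on the other side
    return False

print([card for card in cards if could_falsify(card)])
# ['beer', '16 years old']  -> the two cards worth turning over
```

Only the "beer" card and the underage card can expose a cheater, which matches the correct answer in the concrete drinking-age version of the task.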

Belief-Bias Effect

In our lives outside the psychology laboratory, our background (or top-down) knowledge helps us function well. Inside the psychology laboratory—or in a course on logic—this background information sometimes encourages us to make mistakes. For example, try the following reasoning task (Markovits et al., 2009, p. 112):

If a feather is thrown at a window, the window will break.
A feather is thrown at a window.
Therefore, the window will break.

If we accept the premises as given, this conclusion is logically valid. Nevertheless, our everyday knowledge about feathers and windows tempts us to reject it. The belief-bias effect occurs when people judge a conclusion on the basis of how believable it is, rather than on whether it follows logically from the premises.

Overconfidence about Decisions

In the previous section, we saw that decisions can be influenced by three decision-making heuristics: the representativeness heuristic, the availability heuristic, and the anchoring and adjustment heuristic. Furthermore, the framing effect—discussed in this section—demonstrates that both the background information and the wording of a statement can encourage us to make unwise decisions. Given these sources of error, people should realize that their decision-making skills are nothing to boast about. Unfortunately, however, the research shows that people are frequently overconfident (Kahneman, 2011; Krizan & Windschitl, 2007; Moore & Healy, 2008).

Overconfidence means that your confidence judgments are higher than they should be, based on your actual performance on the task. We have already discussed two examples of overconfidence in decision making in this chapter. In an illusory correlation, people are confident that two variables are related, when in fact the relationship is either weak or nonexistent. In anchoring and adjustment, people are so confident in their estimation abilities that they supply very narrow confidence intervals for their estimates. Let's now examine research on several aspects of overconfidence, and then turn to the factors that help to create it.
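One common way to quantify overconfidence is to compare people's average confidence across a set of questions with the proportion they actually answer correctly; a positive gap indicates overconfidence. The sketch below computes that gap for a made-up set of answers and confidence ratings (the numbers are invented for illustration, not data from the studies cited above).

```python
# Hypothetical quiz results: whether each answer was correct (True/False)
# and the confidence the person reported for that answer (0.0 to 1.0).
answers_correct = [True, False, True, True, False, False, True, False, True, False]
confidence =      [0.90, 0.80,  0.95, 0.85, 0.75,  0.90,  0.80, 0.70,  0.95, 0.85]

accuracy = sum(answers_correct) / len(answers_correct)   # proportion correct
mean_confidence = sum(confidence) / len(confidence)      # average reported confidence

overconfidence = mean_confidence - accuracy
print(f"Accuracy: {accuracy:.2f}  Mean confidence: {mean_confidence:.2f}  "
      f"Overconfidence gap: {overconfidence:+.2f}")
```

In this toy example the person is right only half the time but reports confidence well above that, so the gap is positive; with well-calibrated judgments, the gap would be close to zero.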

Hindsight Bias

People are overconfident about predicting events that will happen in the future. In contrast, hindsight refers to our judgments about events that already happened in the past. The hindsight bias occurs when an event has happened, and we say that the event had been inevitable; we had actually "known it all along" (Hastie & Dawes, 2010). In other words, the hindsight bias reflects our overconfidence that we could have accurately predicted a particular outcome at some point in the past (Hardt et al., 2010; Pezzo & Beckstead, 2008; Pohl, 2004; Sanna & Schwarz, 2006). The hindsight bias demonstrates that we often reconstruct the past so that it matches our present knowledge (Schacter, 2001).

The hindsight bias can also operate for the judgments we make about people. In a thought-provoking study, Linda Carli (1999) asked students to read a two-page story about a young woman named Barbara and her relationship with Jack, a man she had met in graduate school. The story, told from Barbara's viewpoint, provided background information about Barbara and her growing relationship with Jack. Half of the students read a version that had a tragic ending, in which Jack rapes Barbara. The other half read a version that was identical except that it had a happy ending, in which Jack proposes marriage to Barbara. After reading the story, each student completed a true/false memory test. This test examined recall for the facts of the story, but it also included questions about information that had not been mentioned in the story. Some of these questions were consistent with a stereotyped version of a rape scenario, such as, "Barbara met many men at parties." Other questions were consistent with a marriage-proposal scenario, such as, "Barbara wanted a family very much."

The results of Carli's (1999) study demonstrated the hindsight bias. People who read the version about the rape responded that they could have predicted Barbara would be raped. Furthermore, people who read the marriage-proposal version responded that they could have predicted Jack would propose to Barbara. (Remember that the two versions were actually identical, except for the final ending.) Each group also committed systematic errors on the memory test, recalling items that were consistent with the ending they had read, even though this information had not appeared in the story.

Carli's (1999) study is especially important because it helps us understand why many people "blame the victim" following a tragic event such as a rape. In reality, this person's earlier actions may have been perfectly appropriate. However, people often search the past for reasons why a victim deserved that outcome. As we've seen in Carli's research, people may even "reconstruct" some reasons that did not actually occur.

The hindsight bias has been demonstrated in a number of different studies, though the effect is not always strong (e.g., Hardt et al., 2010; Harley et al., 2004; Kahneman, 2011; Koriat et al., 2006; Pohl, 2004), and it has been documented in North America, Europe, Asia, and Australia (Pohl et al., 2002). Doctors show the hindsight bias when guessing a medical diagnosis (Kahneman, 2011), people demonstrate the hindsight bias for political events and for business decisions (Hardt et al., 2010; Kahneman, 2011), and the hindsight bias is stronger for individuals who are experts in a particular domain (Knoll & Arkes, 2017). Furthermore, the hindsight bias varies as a function of psychological well-being. Groß, Blank, and Bayen (2017) found, for example, that depressed individuals viewed descriptions of events with negative outcomes as more foreseeable than events with positive outcomes.

Base Rate and Representativeness

Representativeness is such a compelling heuristic that people often ignore the base rate, or how often the item occurs in the population. Be sure you have tried Demonstration 12.3 before reading further.

Using problems such as the ones in Demonstration 12.3, Kahneman and Tversky (1973) demonstrated that people rely on representativeness when they are asked to judge category membership. In other words, we focus on whether a description is representative of members of each category. When we emphasize representativeness, we commit the base-rate fallacy, paying too little attention to important information about base rate (Kahneman, 2011; Levy, 2010; Swinkels, 2003).

If people pay appropriate attention to the base rate in this demonstration, they should select graduate programs that have a relatively high enrollment (base rate). These would include the two options "humanities and education" and "social science and social work." However, most students in this study used the representativeness heuristic, and they most frequently guessed that Tom W was a graduate student in either computer science or engineering (Kahneman, 2011; Kahneman & Tversky, 1973). The description of Tom W was highly similar to (i.e., representative of) the stereotype of a computer scientist or an engineer.

You might argue, however, that the Tom W study was unfair. After all, the base rates of the various graduate programs were not even mentioned in the problem. Maybe the students failed to consider that there are more graduate students in the "social sciences and social work" category than in the "computer science" category. However, when Kahneman and Tversky's (1973) study included this base-rate information, most people ignored it. Instead, they judged mostly on the basis of representativeness. In fact, this description for Tom W is highly representative of our stereotype for students in computer science. As a result, people tend to select this particular answer.

We should emphasize, however, that the representativeness heuristic—like all heuristics—frequently helps us make a correct decision (Levy, 2010; Newell et al., 2007; Shepperd & Koch, 2005). Heuristics are also relatively simple to use (Hogarth & Karelaia, 2007). In addition, some problems—and some alternative wording of problems—produce more accurate decisions (Gigerenzer, 1998; Shafir & LeBoeuf, 2002).

Incidentally, research on this kind of "base-rate" task provides support for the dual-process approach. Specifically, different parts of the brain are activated when people use automatic, Type 1 processing, rather than slow, Type 2 processing (De Neys & Goel, 2011). Furthermore, training sessions can encourage students to use base-rate information appropriately (Krynski & Tenenbaum, 2007; Shepperd & Koch, 2005). Training would make people more aware that they should pause and use Type 2 processing to examine the question more carefully.

You should also look out for other everyday examples of the base-rate fallacy. For instance, one study of pedestrians killed at intersections showed that 10% were killed when crossing at a signal that said "walk." In contrast, only 6% were killed when crossing at a signal that said "stop" (Poulton, 1994). So—for your own safety—should you cross the street only when the signal says "stop"? Now, compare the two base rates: Many more people cross the street when the signal says "walk."
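A small worked example can show why ignoring the base rate is costly. Suppose, hypothetically, that only 3% of graduate students are in computer science while 20% are in social science and social work, and that a "Tom W"-style description fits a computer science student with probability .60 but a student in the larger group with probability only .10. The Python sketch below applies Bayes' theorem to these invented numbers; they are not data from Kahneman and Tversky (1973).

```python
# Hypothetical numbers, chosen only to illustrate the base-rate logic.
base_rate_cs = 0.03        # proportion of graduate students in computer science
base_rate_social = 0.20    # proportion in social science and social work

fit_cs = 0.60              # P(description | computer science)
fit_social = 0.10          # P(description | social science / social work)

# Joint probabilities: P(field and description)
joint_cs = base_rate_cs * fit_cs              # 0.018
joint_social = base_rate_social * fit_social  # 0.020

print(round(joint_cs, 3), round(joint_social, 3))
print("Computer science is", round(joint_cs / joint_social, 2),
      "times as likely as social science, given the description")
```

Even with a description that fits the computer-science stereotype six times better, the low base rate makes the "social science and social work" category slightly more probable, which is exactly the information the representativeness heuristic tempts us to ignore.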

Applications in Medicine

Several studies show that the confirmation bias also operates in medical situations. For example, researchers have studied people who seek medical advice for insomnia (Harvey & Tang, 2012). As it happens, when people believe that they have insomnia, they overestimate how long it takes them to fall asleep, and they underestimate the amount of time they spend sleeping at night. One explanation for these data is that people seek confirming evidence that they are indeed "bad sleepers," and they provide estimates that are consistent with this diagnosis.

Another study focused on the diagnosis of psychological disorders (Mendel et al., 2011). Medical students and psychiatrists first read a case vignette about a 65-year-old man, and then they were instructed to provide a preliminary diagnosis of either Alzheimer's disease or severe depression. Each person then decided what kind of additional information they would like; six items were consistent with each of the two diagnoses. The results showed that 25% of the medical students and 13% of the psychiatrists selected only the information that was consistent with their original diagnosis. In other words, they did not investigate information that might be consistent with the other diagnosis.

Current Status of Heuristics and Decision Making

Some researchers have argued that the heuristic approach—developed by Kahneman and Tversky—may underestimate people's decision-making skills. For example, research by Adam Harris and his colleagues found that people make fairly realistic judgments about future events (Harris et al., 2009; Harris & Hahn, 2011).

Gerd Gigerenzer and his colleagues agree that people are not perfectly rational decision makers, especially under time pressure. They emphasize, however, that people can do relatively well when they are given a fair chance on decision-making tasks. For instance, we saw that the recognition heuristic is reasonably accurate. Other research shows that people answer questions more accurately in naturalistic settings, especially if the questions focus on frequencies rather than probabilities (e.g., Gigerenzer, 2006a, 2006b, 2008; Todd & Gigerenzer, 2007).

Peter Todd and Gerd Gigerenzer (2007) proposed the term ecological rationality to describe how people create a wide variety of heuristics to help themselves make useful, adaptive decisions in the real world. For example, only 28% of U.S. residents become potential organ donors, in contrast to 99.9% of French residents. Gigerenzer (2008) suggests that both groups are using a simple default heuristic; specifically, if there is a standard option—which happens if people do nothing—then people will choose it. In the United States, you typically need to sign up to become an organ donor. Therefore, the majority of U.S. residents—using the default heuristic—remain in the nondonor category. In France, you are an organ donor unless you specifically opt out of the donor program. Therefore, the majority of French residents—using the default heuristic—remain in the donor category. Furthermore, people bring their world knowledge into the research laboratory, where researchers often design the tasks to specifically contradict their schemas. For example, do you really believe that Linda wouldn't be a feminist, given her long-time commitment to social justice?

The two approaches—one proposed by Kahneman and one by Gigerenzer—may seem fairly different. However, both approaches suggest that decision-making heuristics generally serve us well in the real world. Furthermore, we can become more effective decision makers by realizing the limitations of these important strategies (Kahneman & Tversky, 2000).

Factors That Cause Difficulty in Reasoning

The cognitive burden of deductive reasoning is especially heavy when some of the propositions contain negative terms (rather than just positive terms), and when people try to solve abstract reasoning tasks (rather than concrete ones). In the text that follows, we discuss research that highlights the effects of these two factors on reasoning.

Theme 3 of this book states that people can handle positive information better than negative information. As you may recall from Chapter 9, people have trouble processing sentences that contain words such as no or not. The same issue holds for conditional reasoning tasks. For example, try the following reasoning task:

If today is not Friday, then we will not have a quiz today.
We will not have a quiz today.
Therefore, today is not Friday.

This item has four instances of the word not, and it is more challenging than a similar but linguistically positive item that begins, "If today is Friday . . . ." Research shows that people take longer to evaluate problems that contain linguistically negative information, and they are also more likely to make errors on these problems (Garnham & Oakhill, 1994; Halpern, 2003). A reasoning problem is especially likely to strain our working memory if the problem involves denying the antecedent or denying the consequent. Most of us squirm when we see a reasoning task that includes a statement like, "It is not true that today is not Friday." Furthermore, we often make errors when we translate either the initial statement or the conclusion into more accessible, linguistically positive forms.

People also tend to be more accurate when they solve reasoning problems that use concrete examples about everyday categories, rather than abstract, theoretical examples. For instance, you probably worked through the items in Demonstration 12.1 somewhat easily. In contrast, even short reasoning problems are difficult if they refer to abstract items with abstract characteristics (Evans, 2004, 2005; Manktelow, 1999). For example, try this problem about geometric objects, and decide whether the conclusion is valid or invalid:

If an object is red, then it is rectangular.
This object is not rectangular.
Therefore, it is not red.

Now check the answer to this item, located at the bottom of Demonstration 12.2. Incidentally, the research shows that people's accuracy typically increases when they use diagrams to make the problem more concrete (Halpern, 2003). However, we often make errors on concrete reasoning tasks if our everyday knowledge overrides the principles of logic (Evans, 2011; Mercier & Sperber, 2011).
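For readers who want to verify these argument forms mechanically, the short Python sketch below enumerates every truth assignment for a conditional "if P then Q" and reports whether each of the four classic forms guarantees its conclusion. The labels are standard terms from logic, not part of the studies cited above.

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# Each argument form: (extra premise, conclusion), given the rule "if P then Q".
forms = {
    "affirming the antecedent (valid)":   (lambda p, q: p,      lambda p, q: q),
    "denying the consequent (valid)":     (lambda p, q: not q,  lambda p, q: not p),
    "affirming the consequent (invalid)": (lambda p, q: q,      lambda p, q: p),
    "denying the antecedent (invalid)":   (lambda p, q: not p,  lambda p, q: not q),
}

for name, (premise, conclusion) in forms.items():
    # The form is valid if the conclusion holds in every case where both premises hold.
    valid = all(
        conclusion(p, q)
        for p, q in product([True, False], repeat=2)
        if implies(p, q) and premise(p, q)
    )
    print(f"{name}: guaranteed = {valid}")
```

Applied to the quiz example above, the second premise "We will not have a quiz today" affirms the consequent, so the conclusion "Today is not Friday" is not guaranteed. The geometric-objects problem, by contrast, denies the consequent, so its conclusion is valid.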

Familiarity and Availability

The familiarity of the examples—as well as their recency—can also produce a distortion in frequency estimation (Kahneman, 2011). Norman Brown and his colleagues conducted research on this topic in Canada, the United States, and China (Brown, Cui, & Gordon, 2002; Brown & Siegler, 1992). They discovered that the media can distort people's estimates of a country's population. Brown and Siegler (1992), for example, conducted a study during an era when El Salvador was frequently mentioned in the news because of U.S. intervention in Latin America. In contrast, Indonesia was seldom mentioned. Brown and Siegler found that the students' estimates for the population of these two countries were similar, even though the population of Indonesia was about 35 times as large as the population of El Salvador.

The media can also influence viewers' ideas about the prevalence of different points of view. For instance, the media often give equal coverage to several thousand protesters and to several dozen counterprotesters. Notice whether you can spot the same tendency in current news broadcasts. Does the media coverage create our cognitive realities?

How can we counteract Type 1 processing, which happens when we first encounter some information? Kahneman (2011) suggests that we can overcome that initial reaction by using critical thinking and shifting to Type 2 processing. For example, someone might analyze a friend's use of the availability heuristic and argue, "He underestimates the risks of indoor pollution because there are few media stories on them. That's an availability effect. He should look at the statistics" (p. 136).

Chapter Introduction

The topics of problem solving, deductive reasoning, and decision making are all interrelated. All three topics are included in the general category called "thinking." Thinking requires you to go beyond the information you were given. Thinking typically involves a goal, such as a solution, a belief, or a decision. In other words, you begin with several pieces of information, and you must mentally transform that information so that you can solve a problem or make a decision. We covered problem solving in Chapter 11. In this chapter, we discuss deductive reasoning and decision making.

Deductive reasoning is a type of reasoning that begins with some specific premises, which are generally assumed to be true. Based on those premises, you judge whether they allow a particular conclusion to be drawn, as determined by the principles of logic. For example, suppose that a student named Jenna wants to enroll next semester in a course called "Biopsychology." The course description says, "To enroll in this course, students must have completed a course in research methods." However, Jenna has not completed this course; she plans to enroll in it next semester. Therefore, we draw the logical conclusion, "Jenna cannot enroll in Biopsychology next semester."

During decision making, you assess information and choose among two or more alternatives. Many decisions are trivial: Do you want mustard on your sandwich? Other decisions are momentous: Should you apply to graduate programs for next year, or should you try to find a job?

In this chapter, we first explore deductive reasoning, focusing heavily on a series of classic effects that researchers have studied in order to uncover the general cognitive principles that govern our ability to reason deductively. In the following two sections, we cover the topic of decision making. We first consider a number of heuristics that guide the decision-making process. In our second section on decision making, we consider phenomena that have direct applications to decision making in our daily lives.

Decision-Making Style and Psychological Well-Being

Think back to the last time you needed to buy something in a fairly large store. Let's say that you needed to buy a shirt. Did you carefully inspect every shirt that seemed to be the right size, and then reconsider the top contenders before buying the shirt? Maximizers are people who have a maximizing decision-making style; they tend to examine as many options as possible. The task becomes even more challenging as the number of options increases, leading to "choice overload" (Schwartz, 2009). In contrast, did you look through an assortment of shirts until you found one that was good enough to meet your standards, even if it wasn't the best possible shirt? Satisficers are people who have a satisficing decision-making style; they tend to settle for something that is satisfactory (Simon, 1955). Satisficers are not concerned about a potential shirt in another location that might be even better (Campitelli & Gobet, 2010; Schwartz, 2004, 2009).

Now, before reading further, try Demonstration 12.9. Then look at your answers to Demonstration 12.9, and add up the total number of points. If your total is 65 or higher, you would tend toward the "maximizer" region of the scale. If your total is 40 or lower, you would tend toward the "satisficer" region of the scale. (Scores between 41 and 64 would be in the intermediate region.)

Barry Schwartz and his coauthors (2002) administered the questionnaire in Demonstration 12.9 to a total of 1,747 individuals, including college students in the United States and Canada, as well as groups such as health-care professionals and people waiting at a train station. The researchers also administered several other measures. One of these assessed regret about past choices. It included such items as "When I think about how I'm doing in life, I often assess opportunities I have passed up" (p. 1182).

Schwartz and his colleagues found a significant correlation (r = .52) between people's scores on the maximizing-satisficing scale and their scores on the regret scale. Those who were maximizers tended to experience more regret; they blame themselves for picking a less-than-ideal item (Schwartz, 2009). The researchers also found a significant correlation (r = .34) between people's scores on the maximizing-satisficing scale and their scores on a standard scale of depressive symptoms, the Beck Depression Inventory. The maximizers tended to experience more depression (Schwartz, 2004, 2009).

Keep in mind that these data are correlational, so they do not necessarily demonstrate that a maximizing decision-making style actually causes depression. However, people seem to pay a price for their extremely careful decision-making style. They keep thinking about how their choice might not have been ideal, and so they experience regret. The research by Schwartz and his coauthors (2002) suggests that this regret contributes to a person's more generalized depression.

An important conclusion from Schwartz's (2004) research is that having an abundance of choices certainly doesn't make the maximizers any happier. In fact, if they are relatively wealthy, they will need to make even more choices about their purchases, leading to even greater regret about the items that they did not buy. Schwartz (2009) chose a thought-provoking title for a recent chapter: "Be careful what you wish for: The dark side of freedom."

Estimating Confidence Intervals

We use anchoring and adjustment when we estimate a single number. We also use this heuristic when we estimate a confidence interval. A confidence interval is the range within which we expect a number to fall a certain percentage of the time. For example, you might guess that the 98% confidence interval for the number of students at a particular college is 3,000 to 5,000. This guess would mean that you think there is a 98% chance that the number is between 3,000 and 5,000, and only a 2% chance that the number is outside of this range.

Demonstration 12.6 tested the accuracy of your estimates for various kinds of numerical information. Turn to the end of this chapter to see how many of your confidence-interval estimates included the correct answer. Suppose that a large number of people were instructed to provide a confidence interval for each of these 10 questions. If their estimation techniques were correct, we would expect their confidence intervals to include the correct answer about 98% of the time. Studies have shown, however, that people provide 98% confidence intervals that actually include the correct answer only about 60% of the time (Block & Harper, 1991; Hoffrage, 2004). In other words, our estimates for these confidence intervals are definitely too narrow.

The research by Tversky and Kahneman (1974) pointed out how the anchoring and adjustment heuristic is relevant when we make confidence-interval estimates. We first provide a best estimate, and we use this figure as an anchor. Next, we make adjustments upward and downward from this anchor to construct the confidence-interval estimate. However, our adjustments are typically too small. Consider, for example, Question 1 in Demonstration 12.6. Perhaps you initially guessed that the United States currently has eight million full-time students in college. You might then say that your 98% confidence interval was between six million and 10 million. This interval would be too narrow, because you had made a large error in your original estimate. Check the correct answers at the end of this chapter. Again, we establish our anchor, and we do not wander far from it in the adjustment process (Kahneman, 2011; Kruglanski, 2004). When we shut our minds to new possibilities, we rely too heavily on top-down processing.

An additional problem is that most people don't really understand confidence intervals. For instance, when you estimated the confidence intervals in Demonstration 12.6, did you emphasize to yourself that each confidence interval should be so wide that there was only a 2% chance of the actual number being either larger or smaller than this interval? Teigen and Jørgensen (2005) found that college students tend to misinterpret these confidence intervals. In their study, the students' 90% confidence intervals were associated with an actual certainty of only about 50%.

You can overcome potential biases from the anchoring and adjustment heuristic. First, think carefully about your initial estimate. Then, ask yourself whether you are paying enough attention to the features of this specific situation that might require you to change your anchor, or else to make large adjustments away from your initial anchor.
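The cost of overly narrow intervals is easy to see in a simulation. In the hedged sketch below, each simulated person's anchor misses the true quantity by a typical judgment error, and they then build a "98%" interval by adjusting only a small fixed percentage up and down from that anchor; the true value, error size, and adjustment width are all invented for illustration.

```python
import random

random.seed(1)

TRUE_VALUE = 12_000_000     # the quantity being estimated (hypothetical)
N_PEOPLE = 10_000
ANCHOR_ERROR_SD = 0.35      # typical proportional error in the anchor (assumed)
ADJUSTMENT = 0.25           # each interval extends only +/-25% around the anchor (assumed)

hits = 0
for _ in range(N_PEOPLE):
    # Each person's anchor is the true value distorted by a multiplicative error.
    anchor = TRUE_VALUE * random.lognormvariate(0, ANCHOR_ERROR_SD)
    low, high = anchor * (1 - ADJUSTMENT), anchor * (1 + ADJUSTMENT)
    hits += low <= TRUE_VALUE <= high

print(f"Nominal confidence: 98%   Actual coverage: {hits / N_PEOPLE:.0%}")
```

With these assumed numbers, the "98%" intervals capture the true value far less often than 98% of the time, echoing the pattern reported by Block and Harper (1991) and Hoffrage (2004): anchors are often off, and the adjustments around them are too small.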

Sample Size and Representativeness

When we make a decision, representativeness is such a compelling heuristic that we often fail to pay attention to sample size. For example, Kahneman and Tversky (1972) asked college students to consider a hypothetical small hospital, where about 15 babies are born each day, and a hypothetical large hospital, where about 45 babies are born each day. Which hospital would be more likely to report that more than 60% of the babies on a given day would be boys, or would they both be equally likely to report more than 60% boys? The results showed that 56% of the students responded, "About the same." In other words, the majority of students thought that a large hospital and a small hospital were equally likely to report having at least 60% baby boys born on a given day. Thus, they ignored sample size.

In reality, however, sample size is an important characteristic that you should consider whenever you make decisions. A large sample is statistically more likely to reflect the true proportions in a population. In contrast, a small sample will often reveal an extreme proportion (e.g., at least 60% baby boys). However, people are often unaware that deviations from a population proportion are more likely in these small samples (Newell et al., 2007; Teigen, 2004). In one of their first publications, Tversky and Kahneman (1971) pointed out that people often commit the small-sample fallacy because they assume that a small sample will be representative of the population from which it is selected (Poulton, 1994). Unfortunately, the small-sample fallacy leads us to incorrect decisions.

We often commit the small-sample fallacy in social situations, as well as in relatively abstract statistics problems. For example, we may draw unwarranted stereotypes about a group of people on the basis of a small number of group members (Hamilton & Sherman, 1994). One effective way of combating inappropriate stereotypes is to become acquainted with a large number of people from the target group—for example, through exchange programs with groups of people from other countries.
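A quick simulation makes the hospital result concrete. The sketch below assumes each birth is independently a boy with probability 0.5 and counts how often each hospital records a day on which more than 60% of its babies are boys; the number of simulated days is an arbitrary choice.

```python
import random

random.seed(0)

def share_of_extreme_days(births_per_day, n_days=100_000, threshold=0.60):
    """Proportion of simulated days on which more than `threshold` of babies are boys."""
    extreme = 0
    for _ in range(n_days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > threshold:
            extreme += 1
    return extreme / n_days

print("Small hospital (15 births/day):", share_of_extreme_days(15))
print("Large hospital (45 births/day):", share_of_extreme_days(45))
```

Under these assumptions, the small hospital reports a "more than 60% boys" day roughly twice as often as the large hospital, which is why "about the same" is the wrong answer: extreme proportions are simply more common in small samples.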

Anchoring and Adjustment Heuristic

You've probably had a number of incidents like this one. A friend asks you, "Can you meet me at the library in 15 minutes?" You know that it takes longer than 15 minutes to get there, so you make a modest adjustment and agree to meet in 20 minutes. However, you didn't count on needing to find your coat, or your cell phone ringing, or stopping to tie a shoelace, or several other trivial events. Basically, you could have arrived in 20 minutes (well, maybe 25), if everything had gone smoothly. In retrospect, you failed to make large enough adjustments to account for the inevitable delays. (Try Demonstration 12.5 when it's convenient, but complete Demonstration 12.6 before you read further.)

According to the anchoring and adjustment heuristic—also known as the anchoring effect—we begin with a first approximation, which serves as an anchor; then we make adjustments to that number based on additional information (Mussweiler et al., 2004; Thaler & Sunstein, 2008; Tversky & Kahneman, 1982). This heuristic often leads to a reasonable answer, just as the representativeness and availability heuristics often lead to reasonable answers. However, people typically rely too heavily on the anchor, so their adjustments are too small (Kahneman, 2011).

The anchoring and adjustment heuristic illustrates once more that people tend to endorse their current hypotheses or beliefs, rather than trying to question them (Baron, 2000; Kida, 2006). That is, they emphasize top-down processing, consistent with Theme 5. We've seen several other examples of this tendency in the present chapter:

1. The belief-bias effect: We rely too heavily on our established beliefs.
2. The confirmation bias: We prefer to confirm a current hypothesis, rather than to reject it.
3. The illusory correlation: We rely too strongly on one well-known cell in a 2 × 2 data matrix, and we fail to seek information about the other three cells.

Let's begin by considering some research on the anchoring and adjustment heuristic. Then, we will see how this heuristic can be applied to estimating confidence intervals.

