Chapter 4


Upward Counterfactuals

A handy term for a counterfactual that is better than what actually happened is upward counterfactual, with "upward" denoting a better alternative than what happened. All the examples we have considered so far have involved the effects of upward counterfactuals. When bad things happen, people often generate such upward counterfactuals, and the more easily they do so, the worse the negative outcomes that actually occurred seem. So far we have also focused on how we react emotionally to the fortunes of others, but we also generate upward counterfactuals for our own less-than-desired outcomes: "If only I had studied harder"; "If only I hadn't had that last tequila shot"; "If only I had told her how much I care about her"; and so on. Upward counterfactuals generally make us feel worse about what actually happened. In particularly traumatic cases, for example, if a person causes a car accident by driving drunk, that individual may get caught in a recurring pattern of "if only I had" upward counterfactuals that fuel continued regret and guilt over the incident (Davis et al., 1995; Markman & Miller, 2006). Interestingly, though, studies (e.g., Gilovich & Medvec, 1994) have found that when older people look back over their lives, they tend to regret not the actions they took but the actions they didn't take: "If only I had gone back to school and gotten that master's degree"; "If only I had spent more quality time with my kids"; "If only I had asked Jessica out when I had the chance." A broad survey of Americans found that regrets about inaction most commonly concern decisions about one's love life rather than other aspects of life (Morrison & Roese, 2011). This may be something to keep in mind while you are young. But research suggests that one reason we regret inactions is that we no longer recall the more concrete pressures and difficulties that kept us from taking those alternative courses of action.
For example, when Tom Gilovich and colleagues (1993) asked current Cornell students how much they would be affected by adding a challenging course to their workload, the students focused on the negative impact, such as lower grades, less sleep, and less time for socializing. However, when they asked Cornell alumni how adding a challenging course would have affected them in a typical semester back in the day, the alumni thought the negative impact would have been minor. If upward counterfactuals—whether contrasted with things we did or with things we didn't do—tend to lead to such negative feelings about the past, why do people so commonly engage in them? Neal Roese and colleagues (e.g., Epstude & Roese, 2008; Roese, 1994) proposed that by making us consider what we could have done differently, upward counterfactuals serve an important function: They can provide insight into how to avoid a similar bad outcome in the future. Supporting this point, Roese found that students encouraged to think about how they could have done better on a past exam reported greater commitment to attending class and studying harder for future exams. Thus, although upward counterfactuals can make us feel worse about what transpired, they better prepare us to avoid similar ills in the future.

SOCIAL PSYCH AT THE MOVIES

According to the American Film Institute, the 1942 Hollywood classic Casablanca, directed by Michael Curtiz, is the third greatest of all American films. Interestingly, the film is very much about the ways specific causal attributions affect people. It's the middle of World War II, and the Nazis are beginning to move in on Casablanca, the largest city in Morocco, a place where refugees from Nazi rule often come to make their way to neutral Portugal and then perhaps the United States. We meet the expatriate American Rick Blaine (Humphrey Bogart), the seemingly self-centered, cynical owner of Rick's Café Américain, where exit visas are often sold to refugees. A well-known anti-Nazi Czech freedom fighter, Victor Laszlo (Paul Henreid), and his wife, the Norwegian Ilsa Lund (Ingrid Bergman), arrive in Casablanca with the Nazis on their trail. Victor and Ilsa hope that Rick can obtain letters of transit to help get them out of Morocco so they won't be captured by the Nazis. Rick is highly resistant and, it turns out, quite bitter toward Ilsa. We then see a flashback that explains why. A few years earlier, Rick was not self-centered or cynical; he was happy and optimistic. He and Ilsa had fallen in love in Paris and agreed to meet at a train scheduled to leave the city just days after the Nazis marched into the French capital. However, Ilsa instead sent him a note indicating that she could never see him again. So he left alone on the train in despair, thinking that she had betrayed him. This event and his (as it turns out) incorrect attribution for it made him defensively self-centered and cynical. As the movie progresses, Rick learns that Ilsa had believed her husband, Victor, died trying to escape a concentration camp but found out that day in Paris that he was ill but alive, and she went to meet him. As Rick shifts his causal attribution for her not meeting him, his cynicism gradually lifts, and he ends up trying to help Victor escape.
He arranges a meeting with his friend, the corrupt chief of police, Captain Louis Renault (Claude Rains), ostensibly to have Victor captured so he can have Ilsa for himself, a causal attribution quite plausible to Renault because of his own manipulative, womanizing ways. But Rick has a different reason for calling Renault to his café: to force him to arrange transport out of Casablanca for Victor. In case you haven't seen the film, we won't tell you how things turn out. The film illustrates the importance of causal attributions not only in altering Rick's outlook on life but in other ways as well. Victor suspects that Rick wasn't always dispositionally the politically indifferent, self-centered character he seems to be, because he knows Rick had helped fight imperialism in Ethiopia and fascism in the Spanish Civil War. Victor therefore judges Rick's current behavior to be high in distinctiveness and thus doesn't attribute it to Rick's disposition. Rather, he comes to realize that Rick's resistance to helping him has to do with his feelings for Ilsa and what happened in Paris, not self-centered indifference. In one more example of attribution from the film, when Renault, an unrepentant gambler, is ordered by the Nazis to shut down Rick's café, he justifies doing so with a convenient external attribution: "I'm shocked, shocked to find that gambling is going on in here!" he exclaims as a croupier hands him his latest winnings.

Dispositional Attribution: A Three-Stage Model

Although the FAE is common and sometimes automatic, even people from individualistic cultures often take situational factors into account when explaining behavior. When might people consider these situational factors before jumping to an internal attribution? To answer this question, Dan Gilbert and colleagues (1988) proposed a model in which the attribution process occurs in a temporal sequence of three stages: (1) A behavior is observed and labeled ("That was helpful behavior"). (2) Observers automatically make a correspondent dispositional inference. (3) If observers have sufficient accuracy motivation and cognitive resources available, they modify their attributions to take into account salient situational factors. This model predicts that people will be especially likely to ignore situational factors and to make the FAE when they have limited attention and energy to devote to attributional processing. Putting this model to the test, Gilbert and colleagues (1988) had participants watch a videotape of a very fidgety woman discussing various topics. The participants were asked to rate how anxious this person generally was. The videotape was silent, ostensibly to protect the woman's privacy, but participants were shown one- or two-word subtitles indicating the topics she was discussing. The videotape was always the same, but the subtitles indicated either very relaxing topics such as vacation, travel, and fashion or very anxiety-provoking topics such as sexual fantasies, personal failures, and secrets. Observers should initially jump to an internal attribution for the fidgety behavior and view the woman as anxious, but if they have sufficient resources, then in the condition in which the topics are anxiety provoking, they should make a correction and view the woman as a less anxious person. Indeed, as FIGURE 4.5 shows, this is what happened under normal conditions. But half the observers were given a second, cognitively taxing, task to do while viewing the videotape.
Specifically, they were asked to memorize the words in the subtitles displayed at the bottom of the screen. Under such a high cognitive load, these participants lacked the resources to correct for the situational factor (the embarrassing topics) and therefore judged the anxious-looking woman to be just as prone to anxiety when she was discussing sex and secrets as when she was discussing travel and fashion. The three-stage model of attribution also helps us understand how individual differences in the motivation to focus on possible situational causes can influence the kinds of attributions people make. For example, Skitka and colleagues (2002) found that liberals generally are less likely than conservatives to view an AIDS victim as an irresponsible person. However, when cognitively busy, liberals viewed the AIDS patient as just as irresponsible as conservatives did. So when people have the motivation and resources, they often correct their initial leap to internal dispositional attributions, but when cognitively busy, they are less able to make this correction.
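For readers who find procedural summaries helpful, the three stages can be caricatured in code. The sketch below is our own hypothetical illustration (the function name and string labels are not part of the original model); its key feature is that the situational correction in stage 3 is skipped under cognitive load:

```python
def judge_disposition(behavior_label, situational_pressure, cognitive_load):
    """Hypothetical sketch of Gilbert et al.'s (1988) three-stage model."""
    # Stage 1: observe and label the behavior ("she looks anxious")
    label = behavior_label
    # Stage 2: automatic correspondent inference ("she IS an anxious person")
    inference = f"dispositionally {label}"
    # Stage 3: effortful correction for the situation, applied only
    # when attentional resources are free
    if situational_pressure and not cognitive_load:
        inference = f"{label} because of the situation ({situational_pressure})"
    return inference

# Under cognitive load, the fidgety woman is judged dispositionally anxious
# even when the topics she discusses could explain her fidgeting:
print(judge_disposition("anxious", "embarrassing topics", cognitive_load=True))
# With free resources, the situational correction is applied:
print(judge_disposition("anxious", "embarrassing topics", cognitive_load=False))
```

The sketch mirrors the experimental result: the same stimulus yields a dispositional judgment under load and a situationally corrected one without it.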

How Fundamental Is the FAE?

Although the FAE is common when people make attributions for the behavior of others, this is not the case when making attributions for oneself. And if you think about visual salience, you should be able to (literally) see why. When we are observers, the other person is a salient part of our visual field, but when we ourselves are acting in the world, we are usually focused on our surroundings rather than on ourselves. This leads to what has been labeled the actor-observer effect: As observers, we are likely to make internal attributions for the behavior of others, but as actors, we are likely to make external attributions for our own behavior (Jones & Nisbett, 1971). When observing others, we attend primarily to them and not to their situation. In contrast, when acting ourselves, we are usually reacting to someone or something in our environment. When that is the case, such external factors are likely to be more salient. It is interesting that the actor-observer effect can be reversed by shifting the individual's visual perspective. For instance, Storms (1973) replicated the actor-observer effect by showing that when pairs of participants sat across from one another and had a conversation, they generally thought their partners were determining the things they talked about. They attributed the direction the conversation took to the person they could see—their partner. Then Storms demonstrated that if shown a video playback of the conversation from the discussion partner's perspective (now the actors were watching themselves talk), participants were more likely to think that they were the ones steering the direction of the conversation. There are, however, important qualifications to the actor-observer effect. 
For one, as actors, we are much more likely to make internal attributions for our successes ("I aced the test because I studied really hard") but external attributions for our failures ("I bombed the test because the instructor is so awful he could not teach someone to tie a shoe") (Campbell & Sedikides, 1999; Dorfman et al., 2019). We will discuss this tendency at greater length when we talk about self-esteem biases in chapter 6. Because of this tendency, the actor-observer difference is stronger for negative behaviors (Malle, 2006). In addition, research indicates that classic actor-observer asymmetries are largest when actors are not focused on a strong motive or intention for their behavior; if actors have a salient intention in mind, they tend to attribute their behavior internally to that intention (Malle et al., 2007).

Implicit Personality Theories

As intuitive psychologists trying to make sense of people's behavior, we develop our own theories about how different traits are related to each other. Consequently, one way we use our preexisting schemas to form impressions of a person is to rely on our implicit personality theories. These are the theories we hold about which traits go together and why they do. To clarify what this means, we will consider some examples from research. Asch (1946) found that some traits are more central than others, and the more central traits affect our interpretation of other traits that we attribute to a person. Asch's participants were asked to consider two people with traits like those described in TABLE 4.1. Asch's participants viewed someone like Bob as generous, wise, happy, sociable, popular, and altruistic. They viewed someone like Jason as ungenerous, shrewd, unhappy, unsociable, irritable, and hardheaded. The only difference in the descriptions of the two guys was that warm was included in the traits for Bob and was replaced with cold for Jason. Yet changing that one trait—which is a metaphor, not a literal description of a person (Bob doesn't radiate more heat than Jason)—greatly altered the overall impressions of them. Combined with "warm," "intelligent" was viewed as "wise." Combined with "cold," "intelligent" was viewed as "shrewd." Warmth and coldness are therefore considered central traits that help organize overall impressions and transform the interpretation of other traits ascribed to a person. When Asch replaced "warm" and "cold" with "polite" and "blunt," he did not find similar effects, suggesting that these are not central traits. While "polite" might not be perceived as especially diagnostic, recent research indicates that morality is a particularly central trait influencing our impressions of others (Goodwin, 2015).
Independent of how "warm" people seem to be, how honest and trustworthy we think they are goes a long way in predicting whether we like or dislike them. This may be why morality is such a focus in political campaigns. For example, during the 2016 presidential campaign, then-Republican nominee Donald Trump made this a centerpiece of his campaign against Democratic nominee Hillary Clinton, and she responded with her own allegations of immorality. Perceptions of trust are often gleaned quickly and have critical consequences. For example, in both Arkansas and Florida, convicted murderers who have more "untrustworthy" faces are more likely to get a death sentence (versus life in prison), even when trust is not judicially relevant (Wilson & Rule, 2015, 2016). Another implicit theory many people hold is that how someone behaves in one context is how the person behaves in others. For example, if you have a roommate who is neat and organized in your living quarters, you may infer that she is also a conscientious student or worker. However, it turns out that conscientiousness at home often does not extend to the classroom or job (Mischel & Peake, 1982). In short, we tend to assume that people are more consistent in how they behave across situations than they actually are. People also tend to view positive traits as going together and negative traits as going together. For example, people who are viewed as more physically attractive are also perceived to be more personable, happier, more competent, and more successful (e.g., Oh et al., 2019), a finding that has come to be known as the "what is beautiful is good" stereotype (Dion et al., 1972). But this effect is most likely reflective of a broader halo effect (Nisbett & Wilson, 1977a; Thorndike, 1920), whereby social perceivers' assessments of an individual on a given trait are biased by their more general impression of the individual.
If the general impression is good, then any individual assessment of the person's friendliness, attractiveness, intelligence, and so on is likely to be more positive. The same halo effects can negatively bias our perceptions of the people we dislike, but they tend to be stronger for positive information than for negative information (Gräf & Unkelbach, 2016).

School Performance and Causal Attribution

Attributional processes influence how we perceive ourselves. The psychologist Carol Dweck (1975) investigated the causal attributions that elementary school boys and girls made for their own poor performances in math courses. Whereas boys tended to attribute their difficulties to the unstable internal factor of their lack of effort or to external factors such as a bad teacher, girls tended to attribute their difficulties to a stable internal cause: lack of math ability. Dweck reasoned that girls were therefore likely to give up trying to get better at math. What's the point of trying if you simply don't have the ability? But of course, if you don't try, you won't succeed. Accordingly, Dweck and colleagues (1978) developed an attributional retraining program that encourages grade school children to attribute their failures to an internal but unstable factor: lack of sufficient effort. This attribution implies that one can improve by working harder, and, indeed, the researchers found that students who underwent this retraining showed substantial improvement in subsequent math performance.

Building an Impression from the Top Down: Perceiving Others Through Schemas

Bottom-up processes describe how we build our impressions of others by observing their individual actions and expressions and drawing inferences about who they are or what they are thinking. In contrast, people also often build an impression from the top down, based on their own preconceived ideas.

The Misinformation Effect

Clearly our memories are biased by our schemas both when we first encode new information and when we later recall it. These biases can even lead us to remember things that didn't actually happen. This phenomenon is best captured by Elizabeth Loftus's work on the misinformation effect, a process in which cues given after an event can plant false information into memory. In a classic study by Loftus and colleagues (1978), all participants watched the same video depicting a car accident (FIGURE 4.1). After watching, some participants were asked, "How fast was the car going when it hit the other car?" Other participants were asked, "How fast was the car going when it smashed into the other car?" Participants asked the question with the word smashed estimated that the car was going faster than did participants who were asked the question with the word hit. Even a simple word such as smashed can prime a schema for a severe car accident that rewrites our memory of what we actually saw in the video. Perhaps more interesting, however, is what happened when participants were later asked if there was broken glass at the scene of the accident. Participants who had earlier been exposed to the word smashed were more than twice as likely to say "yes" as were participants previously exposed to the word hit (even though there was no broken glass at the scene). The way the question was asked created an expectation that led people to remember something that was not actually there!

The Availability Heuristic and Ease of Retrieval

Clearly, the content of the memories people recall, whether true or false, greatly influences their judgments. But people's judgments can also be affected by how readily memories can be brought into consciousness. Try to make the following judgment as quickly as possible. Which of the two word fragments listed below could be completed by more words? (1) _ _ _ _ I N G or (2) _ _ _ _ _ N _. If your first inclination was to choose option 1, you just exhibited what is referred to as the availability heuristic (Tversky & Kahneman, 1973). This is our tendency to assume that information that comes easily to mind (or is readily available) is more frequent or common. It's relatively easy to think of four-letter words that you can add "ing" to, but it is more of a struggle to come up with seven-letter words with N in the sixth position. If we compare the relative difficulty in recalling such words, it's easy to conclude that option 1 could be completed with more words. Of course, on closer examination, you can readily see that option 2 has to be the correct answer. Any word that you could think of that would fit option 1 would also fit option 2, and some words, such as weekend, fit only option 2. The availability heuristic has the power to distort many of our judgments. Consider, for example, the fear of terrorist attacks such as suicide bombings. When such attacks occur—for example, in Israel—they make the news. But actually, Israel has a very high fatality rate from car accidents, and when in Israel, you are far more likely to die in a car accident than in a suicide bombing. Similarly, people generally are more afraid of flying in an airplane than of driving their car, yet according to the National Safety Council (2020), the lifetime risk of dying in a motor vehicle accident is 1 in 103, whereas the lifetime risk of dying as an airplane passenger is 1 in 188,364. But every airplane accident attracts national media attention, whereas most car fatalities are barely covered at all.
Because airplane crashes are so easily recalled, the availability heuristic makes it seem that they are more prevalent than they really are. If the media covered each fatal car accident with as much intensity, the airline industry would probably enjoy a large increase in ticket sales! Inspired by research on the availability heuristic, Norbert Schwarz and colleagues (Schwarz, Bless, Strack et al., 1991) discovered a related phenomenon known as the ease of retrieval effect. With the availability heuristic, people rely on what they can most readily retrieve from memory to judge the frequency of events. With the ease of retrieval effect, people judge how frequently an event occurs on the basis of how easily they can retrieve a certain number of instances of that event. To demonstrate this, Schwarz and colleagues asked college students to recall either 6 instances when they acted assertively or 12 instances when they acted assertively. You might expect that the more assertive behaviors you remember, the more assertive you feel. But the researchers found exactly the opposite pattern: Participants asked to recall 12 instances of assertiveness rated themselves as less assertive than those asked to recall only 6 instances. For most people, coming up with 12 distinct episodes is actually pretty difficult; it's much more difficult than recalling only 6 instances of assertiveness. People thus seem to make the following inference: If I'm finding it difficult to complete the task that is asked of me (recalling 12 acts of assertiveness), then I must not act assertively much, and so I must not be a very assertive person. People asked to recall 6 instances ended up thinking they were more assertive than participants asked to recall 12 instances. Some studies suggest that this ease of retrieval effect occurs only if the person puts considerable cognitive effort into trying to retrieve the requested number of instances of the behavior (e.g., Tormala et al., 2002). 
Only then do they attend to the ease or difficulty of retrieval and use it to assess how common the recalled behavior is (Weingarten & Hutchinson, 2018).
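The logic behind the word-fragment puzzle earlier in this section can even be checked mechanically: any seven-letter word ending in "ing" automatically has an "n" in the sixth position. The sketch below is our own illustration (the small word list is arbitrary, and a real check would use a full dictionary); it encodes the two fragments as regular expressions:

```python
import re

# The two fragments from the puzzle, as regular expressions:
# option 1: a seven-letter word ending in "ing"
# option 2: a seven-letter word with "n" in the sixth position
option1 = re.compile(r"^[a-z]{4}ing$")
option2 = re.compile(r"^[a-z]{5}n[a-z]$")

# A small illustrative word list (hypothetical sample, not exhaustive)
words = ["helping", "jumping", "morning", "evening", "weekend", "husband", "descend"]

fits1 = [w for w in words if option1.match(w)]
fits2 = [w for w in words if option2.match(w)]

# Every word that fits option 1 necessarily fits option 2 as well,
# because the "ing" ending puts an "n" in the sixth position...
assert set(fits1) <= set(fits2)
# ...but not vice versa: "weekend" and "husband" fit only option 2,
# so option 2 must admit at least as many words as option 1.
assert len(fits2) >= len(fits1)
print(fits1)  # words fitting both fragments
print(fits2)  # words fitting the second fragment
```

The subset relation between the two patterns is why option 2 has to be the correct answer, no matter how much easier option 1 words are to call to mind.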

The More Easily We Can Mentally Undo an Event, the Stronger Our Reaction to It

Consider the following story, based on Kahneman and Tversky (1982), which we'll call Version A: Carmen always wanted to see the Acropolis, so after graduating from Temple University, with the help of a travel agent, she arranged to fly from Philadelphia to Athens. She originally booked a flight requiring her to switch planes in Paris, with a three-hour layover before her flight from Paris to Athens. But a few days before her departure date, her travel agent e-mailed her that a direct flight from Philly to Athens had become available. Carmen figured "Why not?" and so she switched to the direct flight. Unfortunately, her plane suffered engine failure and came down in the Mediterranean, leaving no survivors. How tragic would you judge this outcome for Carmen? Well, if you are like the students who participated in Kahneman and Tversky's classic study, you would say very tragic. But what if you read Version B: Carmen always wanted to see the Acropolis, so after graduating from Temple University, with the help of a travel agent, she arranged to fly from Philadelphia to Athens. She booked a flight requiring her to switch planes in Paris, with a three-hour layover before her flight from Paris to Athens. Unfortunately, her plane from Paris to Athens suffered engine failure and came down in the Mediterranean, leaving no survivors. How tragic does that seem? People who read stories like Version B don't think they are nearly as tragic as do those who read stories like Version A. Given that the outcome is really the same in both versions—Carmen died young, without ever seeing the Acropolis—why does Version A seem more tragic? Kahneman and Tversky explained that it is because it is easier to generate a counterfactual with Version A; it's very easy to imagine a counterfactual in which Carmen made it safely to Greece: All she had to do was stick with her original flight plan! However, with Version B, no such obvious counterfactual is available.
Rather, we would have to think for a while about ways her tragic death might have been avoided. The general principle is that if something bad happens, the easier it is to imagine how the bad outcome could have been avoided, the more tragic and sad the event seems. And, as this example illustrates, it is generally easier to mentally undo bad outcomes if they are caused by an unusual action, such as switching flight plans. Here's another example based on Kahneman and Tversky (1982) that illustrates the pervasive influence of counterfactual thinking on emotional reactions. You go to a basketball game to see your favorite team play. In one version, your team gets trounced by 15 points. This would undoubtedly be upsetting. In the other version, your team loses on a last-second 50-foot buzzer beater. Now how upset would you be? More upset, right? Yet, in both cases, the pragmatic outcome is the same: Your team lost the game. So why is the close loss more agonizing? The close loss is more upsetting because it is much easier to imagine a counterfactual in which your team would have won the game: If only that desperation heave had clanked off the back of the rim! If the team lost by 15, it is much harder to mentally undo the loss, so it is less frustrating. In other words, it is easier to mentally undo the close loss than the not-so-close loss, just as it was easier to mentally undo Carmen's tragic fate when she had changed flight plans than when she had simply taken the flight she had planned to take all along.

Awarding Damages

Counterfactual thinking has serious consequences in a variety of important areas of life. Consider the legal domain, where we'd like to think that jury decisions are based on rational consideration of the facts at hand. Miller and McFarland (1986) showed how this tendency to view the negative event that seems easier to undo as more unfortunate could influence trial outcomes. In one study, they described a case in which a man was injured during a robbery. Half the participants read that the injury occurred in a store the victim went to regularly, whereas the other half were told it occurred in a store the victim did not usually go to. All other details were identical, yet participants recommended over $100,000 more in compensation if the victim was injured at the store he rarely went to, because the unfortunate injury was easier to mentally undo in this version: "If only he had gone to the store where he usually shops!" To summarize, negative outcomes resulting from unusual or almost avoided actions are easier to imagine having gone better and therefore arouse stronger negative emotional reactions.

Upward and Downward Counterfactuals and Personal Accomplishments: The Thrill of Victory, the Agony of Defeat

Counterfactuals affect how we feel about our own achievements. Subjective, emotional reactions of satisfaction or regret are determined not so much by what you did or did not accomplish as by the counterfactuals you generate about those outcomes. In one clever demonstration of this phenomenon, researchers asked participants to watch silent videotapes of athletes who had won either the silver or the bronze medal at the 1992 Barcelona Summer Olympics and to judge how happy each athlete appeared at the awards ceremony (Medvec et al., 1995). The silver medal, which means the person was the second best in the world at the event, is obviously a greater achievement than the bronze medal, which means the person was the third best in the world. However, on the basis of an analysis of which counterfactuals are most likely for silver- and bronze-medal winners, the researchers predicted that the bronze-medal winners would actually be happier than the silver-medal winners. They reasoned that for silver-medal winners, the most salient counterfactual is likely to be the upward counterfactual that "If only I had gone X seconds faster, or trained a little harder, I could have won the ultimate prize, the gold medal!" In contrast, bronze-medal winners are likely to focus on the downward counterfactual that "If I hadn't edged out the fourth-place finisher, I would have gone home with no medal at all!" In support of this reasoning, Medvec and colleagues found that bronze-medal winners were rated as appearing happier than silver-medal winners on the awards stand. Furthermore, in televised interviews, bronze medalists were more likely to note that at least they received a medal, whereas the silver medalists were more likely to comment on how they could have done better.

False Consensus

Even when a person we meet doesn't remind us of someone specific, we often project onto him or her attitudes and opinions of the person we know the best—ourselves! False consensus is a general tendency to assume that other people share the attitudes, opinions, and preferences that we have (Mullen et al., 1985; Ross, Greene, & House, 1977). As we mentioned earlier, in our discussion of theory of mind, most of us can understand that other people do not view the world exactly as we do, and yet often we assume they do anyhow. People who are in favor of gun control think most people agree with them. People who are against gun control think they are in the majority. We are more likely to assume consensus among members of our ingroups than with members of outgroups (Krueger & Zeiger, 1993). After all, our ingroups are more likely to remind us of ourselves. In part because of this tendency, we often assume that, as a group, our friends are more similar to each other than people we don't like are, even though having more information about our friends suggests that we should recognize the differences between them (Alves et al., 2016). False consensus stems from a number of processes (Marks & Miller, 1987). Among them: Our own opinions and behaviors are most salient to us and, therefore, most cognitively accessible. So, they are most likely to come to mind when we consider what other people think and do. It is validating for our worldview and self-worth to believe that others think and act the way we do. So, when we feel under attack, we're motivated to think that others share our viewpoint and validate our actions (Sherman et al., 1984). Think of the teenager caught smoking behind the school who explains his actions by saying, "But everybody does it!" 
Research suggests that this isn't just a cliché; teenagers who engage in behaviors that might be bad for their health actually do overestimate the degree to which their friends are engaging in the same behaviors (Prinstein & Wang, 2005). We tend to like and associate with people who are in fact similar to us. If our group of friends really do like to smoke and drink, then in our own narrow slice of the world, it does seem as if everyone does it because we forget to adjust for the fact that the people we affiliate with are not very representative of the population at large. We see this happening on the Internet, where our group of Facebook friends or Twitter followers generally validate and share our opinions (Barberá et al., 2015), and the news outlets we seek out package news stories in ways that seem to confirm what we already believe. One of the benefits of having a diverse group of friends and acquaintances and a breadth of media exposure might be to disabuse us of our false consensus tendencies.

Three Kinds of Information: Consistency, Distinctiveness, and Consensus

Harold Kelley (1967) described three sources of information for arriving at a causal attribution when accuracy is important: consistency (across time), distinctiveness (across situations), and consensus (across people) (FIGURE 4.6). Imagine that a few years from now, Reese, an acquaintance of yours, goes to see the tenth reboot of the Spiderman movie franchise. Reese tells you, "You have to see this flick; it's an unprecedented cinematic achievement." By this time, it costs $30 to see a film, and you want to think hard before shelling out that kind of money. Ultimately, you want to know if the latest "new" Spiderman is indeed a must-see. Or was it something about Reese, or the particular circumstances when she saw the movie? In other words, why did Reese love the movie? Maybe there was a special circumstance when Reese saw the movie that led her to love it. If she then saw it a second time and didn't like it much, Kelley would label the outcome low in consistency, and you would probably conclude that there was something unique about the first time she saw it that prompted her reaction. Maybe she really needed a distraction the first time she saw it. This would be an attribution to an unstable factor that might be very different each time Reese saw the movie. However, if Reese saw the movie three times and loved it each time, Kelley would label the outcome high in consistency across time, and you would then be likely to entertain either a stable internal attribution to something about Reese or a stable external attribution to something about the movie. Two additional kinds of information would help you decide. First, you could consider Reese's typical reaction to other movies. If Reese always loves movies in general, or always loves superhero movies, then her reaction to this specific film is low in distinctiveness. 
This information would lead you to an internal attribution to something about Reese's taste in these kinds of movies rather than the quality of the movie. On the other hand, if Reese rarely likes movies, or rarely likes superhero movies, then her reaction is high in distinctiveness, which would lead you more toward an external attribution to the movie. Finally, how did other people react to the movie? If most others also loved the movie, there is high consensus, suggesting that something about the movie was responsible for Reese's reaction. However, if most other people didn't like the movie, you would be most likely to attribute Reese's reaction to something about her. In short, when a behavior is high in consistency, distinctiveness, and consensus, the attribution tends to be external to the stimulus in the situation, whereas when a behavior is high in consistency but low in distinctiveness and consensus, an internal attribution to the person is more likely. Research has generally supported Kelley's model. Recent findings confirm that when distinctiveness is low and consistency is high, observers tend to make trait attributions to the person (Olcaysoy Okten & Moskowitz, 2018). The presence of other potential causes also influences the weight that we assign to a particular causal factor. For example, some years ago, most of the baseball-watching world was in awe of players such as Barry Bonds and Alex Rodriguez, attributing their remarkable accomplishments to their incredible athletic ability and effort. However, when allegations of steroid use surfaced, many fans discounted the players' skills, attributing some of their accomplishments to an alternative causal factor: performance-enhancing drugs. This tendency is called the discounting principle, whereby the importance of any potential cause of another's behavior is reduced to the extent that other potential causes are salient (Kelley, 1971; Kruglanski et al., 1978).
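Because Kelley's covariation model is essentially a decision rule, its logic can be sketched in a few lines of code. The sketch below is purely illustrative and is not from the text; the function name and the "hi"/"lo" encoding of each information source are assumptions made for the example.

```python
def covariation_attribution(consistency, distinctiveness, consensus):
    """Map high/low ('hi'/'lo') covariation information to a likely attribution.

    Follows the summary of Kelley's (1967) model in the text:
    - low consistency -> attribute to unstable circumstances
    - high consistency + high distinctiveness + high consensus -> external (the stimulus)
    - high consistency + low distinctiveness + low consensus -> internal (the person)
    """
    if consistency == "lo":
        # Reese liked the movie once but not again: something unique
        # about that occasion, not the movie or her tastes.
        return "unstable circumstances"
    if distinctiveness == "hi" and consensus == "hi":
        # Reese rarely likes movies, yet everyone loved this one:
        # something about the movie itself.
        return "external (the stimulus)"
    if distinctiveness == "lo" and consensus == "lo":
        # Reese loves all movies and others disliked this one:
        # something about Reese.
        return "internal (the person)"
    return "ambiguous (more information needed)"

# Reese loved it every time, loves all superhero movies, and others disliked it:
print(covariation_attribution("hi", "lo", "lo"))  # internal (the person)
```

Mixed patterns (e.g., high distinctiveness but low consensus) fall through to the "ambiguous" branch, which mirrors the point that attribution is clearest when the three sources of information line up.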

Basic Dimensions of Causal Attribution

Heider (1958) observed that causal attributions vary on two basic dimensions. The first dimension is locus of causality, which can be either internal to some aspect of the person engaging in the action (known as the actor) or external to some factor in the person's environment (the situation). For example, if Justin failed his physics exam, you could attribute his poor performance to a lack of intelligence or effort, factors internal to Justin. Or you could attribute Justin's failure to external factors, such as a lousy physics professor or an unfair exam. The second basic dimension is stability: attributing behavior to either stable or unstable factors. If you attribute Justin's failure to a lack of physics ability, that's a stable internal attribution because people generally view ability as being relatively unchangeable. On the other hand, if you attribute Justin's failure to a lack of effort, that is an unstable internal attribution. You would still perceive Justin as being responsible for the failure, but you would recognize that his exertion of effort can vary from situation to situation. Stable attributions suggest that future outcomes in similar situations, such as the next physics test, are likely to be similar. In contrast, unstable attributions suggest that future outcomes could be quite different; if Justin failed because of a lack of effort, he might do a lot better on the next test if he exerted himself a bit more. External attributions can also be stable or unstable. If you attribute Justin's failure to a professor who always gives brutal tests or who is an incompetent teacher, you are likely to think that Justin will not do much better on the next test. But if you attribute Justin's failure to external unstable factors such as bad luck or the "love of his life" having broken up with him right before the test, then you're more likely to think Justin may improve on the next exam. 
As poor Justin's example suggests, how we attribute a behavior—to internal or external factors and stable or unstable factors—affects both the impressions we form of the actor and the predictions we make about the actor's future behavior. An internal attribution for a poor performance or a negative action reflects poorly on the actor, whereas an external attribution tends to let the actor off the hook. On the other hand, an internal attribution for a positive behavior generally leads to a positive impression of the actor, whereas an external attribution for a positive action undermines the benefit to the actor's image. Attributions to stable factors lead to strong expectations of similar behavior in similar situations, whereas attributions to unstable factors do not.

Automatic Processes in Causal Attribution

How do people arrive at a particular attribution for a behavior they observe? Like most other products of human cognition, causal attributions sometimes result from quick, intuitive, automatic processes and sometimes from more rational, elaborate, thoughtful processes. When you ask people why some social event occurred, they can usually give an opinion. However, research has shown that people often don't put much effort into thinking about causal attributions. People make a concentrated effort primarily when they encounter an event that is unexpected or important to them (Jaynes, 1976; Pyszczynski & Greenberg, 1981; Wong & Weiner, 1981). Such events are more likely to require some action on our part and are more likely to have a significant impact on our own lives, so it is more important to arrive at an accurate causal attribution. Imagine that when you were a child, your mom always had coffee in the morning, and you came into the kitchen one morning and saw your mom put on a pot of coffee. If a friend dropped by and asked you why your mom was doing that, you would readily respond, "She always does that" or "She loves coffee in the morning"—the same knowledge that led you to expect her to do exactly what she did. However, if one morning she was brewing a pot of herbal tea, this would be unexpected, and you'd likely wonder why she was doing that instead of brewing her usual coffee. You would be even more likely to think hard about a causal attribution for her behavior if one morning she was making herself a martini. Most events in our daily lives are expected; consequently, we don't engage in an elaborate process to determine causal attributions for them. According to Harold Kelley (1973), when an event readily fits an existing causal schema (a theory we hold about the likely cause of that specific kind of event), we rely on it rather than engage in much thought about why the event occurred. These causal schemas come from two primary sources. 
Some are based on our own personal experience, as in the mom-making-coffee example. Others are based on general cultural knowledge. If an American watches another person in a restroom passing a thin, waxed piece of thread between her teeth, little or no thought about why is generated because it is a culturally normative form of hygiene known as flossing. But consider our discussion in chapter 2 of cultural differences and think, from this cultural perspective, about how people garner prestige and self-worth. In the same way we might think little of someone flossing in a restroom, a traditional Trobriand Islander wouldn't blink an eye at a man building up a large pile of yams in front of his sister's house and leaving them to rot. In that culture, this behavior is a way that people enhance their status and would not require any explanation. But a Trobriander seeing someone flossing or an American seeing someone "yamming" would require a more elaborate process of determining attribution. When an event we observe isn't particularly unexpected or important to us but doesn't readily fit an obvious causal schema, we are likely to base our causal attribution on whatever plausible factor is either highly visually salient or highly accessible from memory. This "top of the head phenomenon" was illustrated in a set of studies by Shelley Taylor and Susan Fiske (1975), in which participants heard a group discussion at a table. One particular member of each group was made visually salient. One way the researchers accomplished this was by having only one member of the group be of a particular race or gender. Participants tended to think the visually salient member of the group had the largest effect on the discussion. For instance, when the group included only one woman, she was viewed as most causally responsible for the discussion.

Memory for Schema-Consistent and Schema-Inconsistent Information

In one study illustrating how schemas shape memory (Cohen, 1981), participants watched a videotape of a woman the researchers described as either a librarian or a waitress. The woman in the videotape noted that she liked beer and classical music. When later asked what they remembered about the woman, participants who believed she was a librarian were more likely to recall that she liked classical music. Those who believed she was a waitress were more likely to remember that she liked beer. Why the difference? Their schema of the woman led the participants to look for, and therefore tend to find and encode into long-term memory, characteristics she displayed that fit their schema of her. Participants exhibited such schema-consistent memory even when interviewed a week later. Although most of the time it's easier to remember information that is consistent with our schemas, sometimes information that is highly inconsistent with our schemas also can be very memorable (Hamilton et al., 1989). Such information often grabs our attention and forces us to think about how to make sense of it. If you go to a funeral and someone starts tap-dancing on the coffin, it will violate your script for such an event, and you'll probably never forget it. Thus, encountering information that conflicts with a preexisting schema can be quite memorable (Pyszczynski et al., 1987). This is especially true when we are motivated to make sense of schema-inconsistent information, and we have the cognitive resources to notice and think about it (Moskowitz, 2005). When we are very busy or unmotivated, we primarily attend to and encode in memory information that fits our currently activated schemas. Better memory for schema-consistent information is especially likely when we recall prior experience through the lens of a current schema. Consider two people, Frank and Mike, who each just started dating a new partner in September. Both report being moderately in love. 
Fast-forward two months, to November, and Mike's relationship has been doing quite well. Frank's relationship, in contrast, has turned sour, and he is having lukewarm feelings. How will their current feelings affect their memories of their initial attraction? Mike will tend to recall being more in love initially than will Frank, even though back in September they were equally in love with their partners (McFarland & Ross, 1987). Our present perceptions can create a schema that biases how we recall (or, actually, reconstruct) events from the past. Although a schema can color memory in either a positive or negative light, people have a general tendency to show a rosy recollection bias, remembering events more positively than they actually experienced them (Mitchell et al., 1997). This is especially true if people feel positively about their current experience. People also tend to exhibit mood-congruent memory. That is, we are more likely to remember positive information when we are in a positive mood and to remember negative information when in a negative mood. Shoppers, for instance, tend to recall more positive attributes of their cars and TVs when they have previously been given a free gift that sparks a good mood (Isen et al., 1978). Such mood-congruent memory effects help explain why depressed persons seem to have such difficulty extracting positive feelings from events they experienced in the past. Because they are typically in negative moods, they tend to recall more negative information from the past (Barnett & Gotlib, 1988). The cultural perspective, however, offers one caveat to this tendency to attend to and remember schema-consistent information. In individualistic cultures, people prefer to have well-defined concepts that are distinct from each other and stable over time, so consistency is highly valued. 
In collectivistic cultures, people tend to think of concepts, including other people, as varying over time and situation, and tolerance for inconsistency is greater. Theorists refer to this collectivistic preference as dialecticism—a way of thinking that acknowledges and accepts inconsistency (Spencer-Rodgers et al., 2010). These culturally based differences in ways of thinking, in turn, influence how people's memory biases construct stable and consistent schemas of the world and the people in it—including themselves. In one study, for example, when asked to recall aspects of themselves, Chinese participants were likely to remember aspects that implied more inconsistent self-descriptions than were European American participants (Spencer-Rodgers et al., 2009).

What Is Your Risk of Disease?

Ideally, people should objectively determine their estimated risk of disease by considering their actual risk factors. However, people's judgments about health risk are strongly influenced by many of the cognitive and motivational factors covered throughout this textbook. One factor is the ease with which people can recall information that makes them feel more or less vulnerable (Schwarz et al., 2016). The ease of retrieval effect occurs in judgments of health risks such as the risk of contracting HIV (e.g., Raghubir & Menon, 1998). The more easily students could recall behaviors that increase risk of sexually transmitted diseases, regardless of the number of risk behaviors they remembered having actually engaged in, the more at risk they felt. Similarly, participants asked to recall three behaviors that increase the risk of heart disease perceived themselves as being at greater risk than did participants asked to recall eight such behaviors (Rothman & Schwarz, 1998). But there is an important caveat here, as well as reason for optimism. When Rothman and Schwarz (1998) asked participants who had a family history of heart disease, and thus for whom the condition was quite relevant, to think of personal behaviors that increase risk, they estimated themselves to be at higher risk when asked to come up with more, rather than fewer, risky behaviors. This suggests that with greater personal relevance, people can be more discriminating in how they evaluate information, and judgments are less subject to the ease of retrieval effect. This is especially important to keep in mind with our increasing exposure to misinformation about our risk for disease, much of it spread by automated bot Internet accounts that are often indistinguishable from real people (Broniatowski et al., 2018). The spread of erroneous information about what causes cancer, or why vaccines are harmful, and the ease with which we remember such information can have dire consequences.
Indeed, the more people are exposed to vaccine misinformation through social media, the less likely they are to get vaccinated (Jolley & Douglas, 2014). This has become such a concern that the World Health Organization has identified "vaccine hesitancy" as a top 10 threat to global health (Yang et al., 2019).

Impression Formation

If you have ever had a roommate, think back to when you first met the person. You were probably thinking, "Who is this person? What will he or she be like as a roommate?" Once we have determined a person's basic characteristics, such as gender and physical attributes, the next step is to form an impression of that person's personality. What are the person's traits, preferences, and beliefs? Knowing what people are like is useful because it guides how you expect them to act. The more accurate you are, the more you can adapt your behavior appropriately. If you think particular people are likely to be friendly, you might be more likely to smile at them and act friendly yourself, whereas if you think some individuals are likely to be hostile, you might avoid them altogether. This interpersonal accuracy and behavioral adaptability, in turn, often lead to more satisfying relationships (Schmid Mast & Hall, 2018). There are two general ways we form impressions of others: from the bottom up and from the top down.

Thinking About People and Events

Imagine that classes have ended. You are driving, and as you make a left turn, a red sports car in the oncoming lane comes straight toward you at high speed and hits the side of your car. With your mood ruined, you pop out of your car, relieved no one was hurt, but upset and confused. The other driver jumps out of his car and is adamant that you cut him off. You claim he was speeding and that it was his fault. The police are called to investigate what happened. This situation and the subsequent crime scene investigation would involve four essential ways people typically make sense of the world: We rely on our ability to recall events from the past (memory). We make inferences about what causes other people's behavior (causal attributions). We imagine alternatives to the events we experience (counterfactual thinking). We form impressions of other people, often on the basis of limited information (person perception). An investigation of the fender bender would involve retrieving memories for what happened, making determinations of what or who caused the accident, forming an impression of those involved in the accident, and considering how things might have happened differently. We use these same cognitive processes every day to make sense of the world around us. And, unlike a police detective, our cognitive system often engages these processes automatically and without any taxpayer expense!

SOCIAL PSYCH OUT IN THE WORLD

Imagine that it is time for U.S. citizens to choose another president. Zach is at home on election night, watching as news channels tally up the votes from each state. He hopes and wishes and prays for his preferred candidate to emerge the victor. A couple hours later, Zach's preferred candidate is declared the president-elect. Zach believes that his hoping and wishing and praying played a role in determining the election outcome. This is an example of magical thinking—believing that simply having thoughts about an event before it occurs can influence that event. Magical thinking is a type of attribution. It's a way of explaining what caused an event to happen. But it's a special type of attribution because it goes beyond our modern, scientific understanding of causation. Returning to our example, we know that the election outcome is determined by myriad factors that are external to Zach and not by his wishes transmitting signals to election headquarters. And yet, however unrealistic or irrational magical thinking is, many of us have an undeniable intuition that we can influence outcomes with just our minds. To see this for yourself, try saying out loud that you wish someone close to you contracts a life-threatening disease. Many people find this exercise uncomfortable because some intuitive part of them feels that simply saying it can make it happen. Why is magical thinking so common? Freud (1913/1950) proposed that small children develop this belief because their thoughts often do in fact seem to produce what they want. If a young child is hungry, she will think of food and perhaps cry. Behold! The parent will often provide that food. Although as we mature we become more rational about causation, vestiges remain of this early sense that our thoughts affect outcomes. As a result, even adults often falsely believe they control aspects of the environment and the world external to the self. 
Ellen Langer's classic research on the illusion of control demonstrated that people have an inflated sense of their ability to control random or chance outcomes, such as lotteries and guessing games (Langer, 1975). For example, participants given the opportunity to choose a lottery ticket believed they had a greater chance of winning than participants assigned a lottery ticket. Research by Emily Pronin and her colleagues (2006) has pushed this phenomenon even further. In one study, people induced to think encouraging thoughts about a peer's performance in a basketball shooting task ("You can do it, Justin!") felt a degree of responsibility for that peer's success. In another study, people asked to harbor evil thoughts about and place a hex on someone believed they actually caused harm when that person later complained of a splitting headache. Magical thinking is not only something that individuals do. We see the same type of causal reasoning in popular cultural practices. In religious rituals such as prayer or the observance of a taboo, or when people convene to pray for someone's benefit or downfall, there is a belief that thoughts by themselves can bring about physical changes in the world. Keep in mind, though, that a firm distinction between these forms of magical thinking and rational scientific ideas about causation may be peculiar to Western cultural contexts. Some cultures, such as the Aguaruna of Peru, see magic as merely a type of technology, no more supernatural than the use of physical tools (Brown, 1986; Horton, 1967). But people in Western cultures are not really all that different. Many highly popular self-help books and videos such as The Secret (see www.thesecret.tv) extol the power of positive thinking, and many Americans believe at least in some superstitions. Although magical thinking stems partly from our inflated sense of control, it can also be a motivated phenomenon—something that people desire to be true. 
People may feel threatened by the awareness that they are limited in their ability to anticipate and control the hazards lurking in their environment. They realize that their well-being—and even their being at all—is subject to random, uncaring forces beyond their control. To avoid this distressing awareness, people apply magical thinking to restore a sense of control. Supporting this account, research has shown that superstitious behavior is displayed more often in high-stress situations, especially by people with a high desire for control (Keinan, 1994, 2002). This is a good example of a broader point we discuss in the main text: People's motivation to cling to specific beliefs about the world and themselves can bias how they perceive and explain the world around them.

Fixed and Incremental Mind-Sets

Inspired by her early work on achievement, Dweck and colleagues (Dweck, 2012; Hong et al., 1995) proposed that intelligence and other attributes need not be viewed with this fixed, or entity, mind-set—that is, as stable traits that a person can't control or change. Rather, they could be viewed as attributes that change incrementally over time. When we take an incremental mind-set, we believe an attribute is a malleable ability that can increase or decrease. Often these mind-sets concern a person's ability to grow or improve. For example, we may see an attribute such as shyness as a quality that, with the right motivation and effort, people can change so they become less shy (Beer, 2002). These mind-sets have implications for how an individual interacts with, and responds to, the social world. Children and adults with fixed mind-sets make more negative stable attributions about themselves in response to challenging tasks and then tend to perform worse and experience more negative affect in response to such tasks. Moreover, they tend to eschew opportunities to change such abilities, even when the abilities are crucial to their success. For instance, exchange students who had stronger fixed mind-sets about intelligence expressed less interest in remedial English courses when their English was poor, even though improving would facilitate their academic goals (Hong et al., 1999). In contrast, those with incremental mind-sets viewed situations that challenged their abilities as opportunities to improve, to develop their skills and knowledge. Our mind-set affects not just what we do for ourselves but how we treat others. Although holding an incremental mind-set can lead one to blame others for their chronic shortcomings (Ryazanov & Christenfeld, 2018), it also makes one more likely to help others improve, as when business managers with stronger incremental mind-sets are more willing to mentor their employees (Heslin & Vandewalle, 2008).
Although people generally have dispositional tendencies to hold either fixed or incremental mind-sets about human attributes, these views can be changed (Dweck, 2012; Heslin & Vandewalle, 2008; Kray & Haselhuhn, 2007). For example, convincing students that intelligence is an incremental attribute rather than a stable entity encourages them to be more persistent in response to failure, adopt more learning-oriented goals, and make fewer ability attributions for failure (Burnette et al., 2012; Paunesku et al., 2015). Although there is some debate about the strength of these interventions, with some meta-analyses indicating weak effects (Moreau et al., 2019; Sisk et al., 2018), a recent study of more than 12,000 adolescents found that a 50-minute training in growth mind-sets improved grades among lower-achieving students (Yeager et al., 2019). Given the importance of these mind-sets for academic achievement and other domains, you might wonder where these different mind-sets come from. Most likely people learn them through the socialization process (see chapter 2), including interactions with their parents. But we don't just adopt whatever mind-set—about intelligence, for example—our parents hold. Haimovitz and Dweck (2016) have argued that mind-sets about traits such as intelligence are often not clearly conveyed to children. Rather, Haimovitz and Dweck have found that it is the parents' mind-set about failure that exerts a potent influence on the intelligence mind-set their children develop. As shown in FIGURE 4.3, when parents view failure as a statement about their children's ability, the children perceive their parents as focusing on performance as opposed to learning, and this leads the children to develop a mind-set of intelligence as stable. 
But when parents view failure as an opportunity to learn and grow, the children perceive the parents as focusing on learning as opposed to performance, and this leads the children to develop a mind-set of intelligence as malleable.

Forming Impressions of People

Let's return again to the scene of the accident with which we began this chapter. As the investigator, you show up and see the two drivers engaged in animated debate about who was at fault. Very quickly you'll begin to form impressions of each of them. What are the factors that influence these impressions? In this section, we examine some of the processes that guide our impressions.

Changing First Impressions

No doubt you have heard the old adage that you have only one chance to make a first impression. The research reviewed so far suggests that we do form impressions of others quite quickly. As we've already learned, once we form a schema, it becomes very resistant to change and tends to lead us to assimilate new information into what we already believe. What we learn early on seems to color how we judge subsequent information. This primacy effect was first studied by Asch (1946) in another of his simple but elegant experiments on how people form impressions (see also Sullivan, 2019). In this study, Asch gave participants information about a person named John. In one condition, John was described as "intelligent, industrious, impulsive, critical, stubborn, and envious." In a second condition, he was described as "envious, stubborn, critical, impulsive, industrious, and intelligent." Even though participants in the two conditions were given exactly the same traits to read, the order of those traits had an effect on their global evaluations of John (TABLE 4.2). They rated him more positively if they were given the first order, presumably because the opening trait, "intelligent," led people to put a more positive spin on all of the traits that followed it. When it comes to making a good impression, you really do want to put your best foot forward. But if we form such quick judgments of people, what happens if we later encounter information that disconfirms those initial impressions? If the disconfirmation is strong enough, our initial impressions can be changed. In fact, our initial evaluations of someone can even change quite rapidly when we are presented with new information about a person (Ferguson et al., 2019; Olcaysoy Okten et al., 2019) as well as if we are prompted to reconsider initially available information (Brannon & Gawronski, 2017). Our schemas exert a strong pressure on the impressions we develop, but they are not so rigid as to be unchangeable. 
When people do things that are unexpected, our brain signals that something unusual and potentially important has just happened (Bartholow et al., 2001). A broad network of brain areas appears to be involved in this signaling process, spurred by the release of the neurotransmitter norepinephrine in the locus coeruleus (Nieuwenhuis et al., 2005). Because of this increased processing of information, we become more likely to modify our initial impression. However, because of the negativity bias, this is especially true when someone we expect good things from does something bad. In sum, our processes of remembering events and people, drawing causal attributions for events, generating counterfactuals, and forming impressions have important implications for the way we feel toward the past and act in the future. These processes can help or hinder our efforts to regulate our actions to achieve our desired goals, a theme we will pick up in the next set of chapters, where we focus on the self.

Theory of Mind

Not only can we learn about a person's personality on the basis of minimal information, but we are also pretty good at reading people's minds. No, we are not telepathic. But we do have an evolved propensity to develop a theory of mind: a set of ideas about other people's thoughts, desires, feelings, and intentions, based on what we know about them and the situation they are in (Malle & Hodges, 2005). This capacity is highly valuable for understanding and predicting how people will behave, which helps us cooperate effectively with some people, compete against rivals, and avoid those who might want to do us harm. For example, we use facial expressions, tone of voice, and how the head is tilted to make inferences about someone and how the person is feeling (Witkower & Tracy, 2019). If your new roommate comes home with knit eyebrows, pursed lips, clenched fists, and a low and tight voice, you might reasonably assume that your roommate is angry. Most children develop a theory of mind around the age of four (Cadinu & Kiesner, 2000). We know this because children around this age figure out that their own beliefs and desires are separate from others' beliefs and desires. However, people with certain disorders, such as autism, may never fully develop the ability to judge accurately what others might be thinking (Baron-Cohen et al., 2000). Even those with high-functioning forms of autism, who live quite independently and have successful careers, show an impaired ability to determine a person's emotions on the basis of the expression in their eyes or the tone of their voice (Baron-Cohen et al., 2001; Rutherford et al., 2002). As a result, these individuals often have difficulty with social tasks most of us take for granted, such as monitoring social cues indicating that our own behavior might offend or step outside behavioral norms.
To uncover the brain regions involved in this kind of mind reading, neuroscientists have focused on a brain region called the medial prefrontal cortex (Frith & Frith, 1999). This region contains mirror neurons, which respond very similarly both when one does an action oneself and when one observes another person perform that same action (Uddin et al., 2007). When seeing your roommate's expressions of anger as described above, you might find yourself imitating those facial displays. But even if your own facial expression doesn't change, mirror neurons in your brain allow you to simulate your roommate's emotional state, helping you to understand what your roommate is thinking and feeling (Iacoboni, 2009).

Inferring Cause and Effect in the Social World

Now that we have a sense of the role of memory in how we understand people and events, we can move on to a second core process we use to gain understanding. This is the process by which we look for relationships of cause and effect. In our fender-bender example, the investigator involved is charged not only with obtaining a (let's hope accurate) record of what happened but also with identifying what factors caused the outcome. In our own interpretation of events, we similarly seek to pair effects with their causes in order to better predict and improve outcomes in the future. When you see that your friend is really upset, you want to know what caused those feelings so that you can effectively console your friend, maybe help fix the problem, and know how to avoid the situation in the future. Fritz Heider pioneered this line of inquiry beginning in the 1930s. At that time, the two dominant views of what causes humans to behave the way we do—psychoanalysis and behaviorism—placed little emphasis on our conscious thoughts. But Heider argued that to understand why people behave the way they do, we have to examine how they come to comprehend the people around them. To this end, Heider (1958) developed a common sense, or naive, psychology: an analysis of how ordinary people like you and me think about the people and events in our lives.

Stereotypes and Individuation

Of course, we don't always rely on stereotypes to judge others. As we get to know a person better, we come to view him or her more as an individual than as a member of a stereotyped group (Kunda et al., 2002). When do people rely on the top-down process (applying a stereotype, or schema for someone's group, to see that person solely as a member of that group), and when do they use the bottom-up process (perceiving the person as a unique individual)? We are more likely to use a bottom-up approach, perceiving a person as an individual distinct from his or her social groups, when we are motivated to get to know and understand who that person is (Fiske & Neuberg, 1990). Such motivation is often activated when we need to work together with the person on a project (Neuberg & Fiske, 1987) or when we are made to feel similar to them in some way (Galinsky & Moskowitz, 2000). When this happens, rather than lazily relying on stereotypes, we attend closely to the person's specific words and actions and form an individualized impression of the person with whom we are interacting (Kunda et al., 2002).

How Are Memories Formed?

On rare occasions—such as when studying for a test—we actively try to store information in memory, but most of the time, laying down memories happens automatically, with little effort on our part. How does this process of memory formation happen? First, we can make a distinction between short-term memory (information that is currently activated) and long-term memory (information from past experience that may or may not be currently activated). At every moment, you are attending to some amount of sensory stimulation in your environment, and some of that information will be encoded, or represented, in short-term memory. Information that is actively rehearsed or is otherwise distinctive, goal relevant, or emotionally salient gets consolidated, or stored, in long-term memory for later retrieval. When you are at a party, embarrassed that you cannot remember the name of your roommate's date, you can take some comfort in knowing that the process of remembering can break down for many reasons. Perhaps you were distracted when the name was mentioned and lacked the attention needed to encode the information. Even if you were paying attention, you might not have been motivated to consolidate the name into long-term memory; maybe you thought you wouldn't meet the person again. Or maybe you intended to repeat the name to yourself a few times to remember it but forgot to do so (lack of rehearsal). People, it turns out, are often overly optimistic about how much they will reflect on events they experience (Tully & Meyvis, 2017). Finally, you could have the name stored in memory but may be temporarily experiencing an inability to retrieve it because of distractions at the party.

Causal Hypothesis Testing

Putting conscious effort into making an attribution is like testing a hypothesis (Kruglanski, 1980; Pyszczynski & Greenberg, 1987b; Trope & Liberman, 1996). First, we generate a possible causal hypothesis (a possible explanation for the cause of the event). This could be the interpretation we'd prefer to make, the one we fear the most, or one based on a causal schema. A large paw print seems more likely to be caused by a large animal than a small one (Duncker, 1945). The causal hypothesis could also be based on a salient aspect of the event or a factor that is easily accessible from memory. Finally, causal accounts can be based on close temporal and spatial proximity of a factor to the event, particularly if that co-occurrence happens repeatedly (Einhorn & Hogarth, 1986; Michotte, 1963). If Frank arrives late to a party and, soon after, a fight breaks out, it is likely that you'll entertain the hypothesis that Frank caused the melee, especially if the co-occurrence of Frank's arrival and fights happens repeatedly. The tendency of the co-occurrence of a potential causal factor (Frank's arrival) and an outcome (a fight) to lead to a causal hypothesis (Frank causes fights) is called the covariation principle (Kelley, 1973). Once we have an initial causal hypothesis, we gather information to assess its plausibility. How much effort we give to assessing the validity of that information depends on our need for closure versus our need for accuracy. If the need for closure is high and the need for accuracy is low, and if the initial information seems sound and fits the hypothesis, we may discount other possibilities. But if the need for accuracy is higher than the need for closure, we may carefully consider competing causal attributions and then decide on the one that seems to best fit the information at our disposal. What kinds of information would we be most likely to use to inform our causal attributions?
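The covariation principle described above can be made concrete with a toy calculation. The party records below are purely hypothetical, invented for illustration (they are not from Kelley's work): the hypothesis "Frank causes fights" gains plausibility when fights occur more often in Frank's presence than in his absence.

```python
# Hypothetical party records: was Frank present, and did a fight break out?
# (Illustrative data only; not drawn from any study.)
parties = [
    {"frank": True,  "fight": True},
    {"frank": True,  "fight": True},
    {"frank": True,  "fight": False},
    {"frank": False, "fight": True},
    {"frank": False, "fight": False},
    {"frank": False, "fight": False},
    {"frank": False, "fight": False},
    {"frank": False, "fight": False},
]

def p_fight(records, frank_present):
    """Proportion of parties with a fight, given Frank's presence or absence."""
    relevant = [r for r in records if r["frank"] == frank_present]
    return sum(r["fight"] for r in relevant) / len(relevant)

with_frank = p_fight(parties, True)      # 2 fights out of 3 parties
without_frank = p_fight(parties, False)  # 1 fight out of 5 parties

# Covariation: fights are more frequent when Frank attends, which invites
# (but does not prove) the causal hypothesis "Frank causes fights."
print(with_frank, without_frank, with_frank > without_frank)
```

Note that covariation only generates the hypothesis; as the text goes on to explain, how carefully we then test it depends on our need for closure versus our need for accuracy.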

The Fundamental Attribution Error

Reliance on visual salience was anticipated by Heider (1958), who proposed that people are likely to attribute behavior to internal qualities or motives of the person because when a person engages in an action, that actor tends to be the observer's salient focus of attention. Edward Jones and Keith Davis (1965) carried this notion further, proposing that when people observe an action, they have a strong tendency to make a correspondent inference, meaning that they attribute to the person an attitude, a desire, or a trait that corresponds to the action. For example, if you watch Ciara pick up books dropped by a fellow student leaving the library, you will automatically think of Ciara as helpful. Correspondent inferences are generally useful because they give us quick information about the person we are observing, in terms of either their dispositions or intentions (Moskowitz & Olcaysoy Okten, 2016). Correspondent inferences are most likely under three conditions (e.g., Jones, 1990):

1. The individual seems to have a choice in taking the action.

2. A person has a choice between two courses of action, and there is only one difference between one choice and the other. For example, if Sarah must choose between two similar colleges except one is known to be more of a party school, and she chooses the party school, you may conclude that Sarah is into partying. But you would be less likely to do so if the school she chose was more of a party school but also closer to her home and less expensive.

3. Someone acts inconsistently with a particular social role. If a contestant on a game show wins a car but barely cracks a smile and simply says "Thank you," you would be likely to infer that she is not an emotionally expressive person. But if she jumps up and down excitedly after she wins the new car, you would not be as certain what she is like because most people in that role would be similarly exuberant.
Although all three of these factors increase the likelihood of a correspondent inference, this tendency is so strong that we often jump to correspondent inferences without sufficiently considering external situational factors that may also have contributed to the behavior witnessed (e.g., Jones & Harris, 1967). People's tendency to draw correspondent inferences, attributing behavior to internal qualities of the actor and, consequently, underestimating the causal role of situational factors, is so pervasive that it is known as the fundamental attribution error (FAE). The initial demonstration of the FAE was provided by Ned Jones and Victor Harris (1967) (FIGURE 4.4). Participants read an essay that was either strongly in favor of Fidel Castro (the longtime dictator of Cuba) or strongly against Castro. Half the participants were told that the essay writer chose his position on the essay. When asked what they thought the essay writer's true attitude toward Castro was, participants, not surprisingly, judged the true attitude as pro-Castro when the essay was pro-Castro and anti-Castro when the essay was anti-Castro. However, the other half of the participants were told that the writer didn't have a choice in whether to advocate for or against Castro; instead, the experimenter had assigned what side the writer should take. Logic would suggest that the lack of choice would make the position advocated by the essay a poor basis for guessing the author's true attitude. However, these participants, despite knowing the essay writer had no choice, also rated his attitudes as corresponding to the position he took in the essay. Many experiments have since similarly shown that despite good reasons for attributing behavior largely, if not entirely, to situational factors, people tend to make internal attributions instead. 
For example, in a study using a quiz-show format, people thought that those asking trivia questions were more knowledgeable than those answering them (Ross, Amabile, & Steinmetz, 1977), even though it is pretty obvious that coming up with tough questions from your own store of knowledge is a lot easier than answering difficult questions made up by someone else. But the participants did not sufficiently take into account the influence of the situation—the questioner and contestant roles—in making these judgments. To appreciate the FAE, consider how we often think that actors are like their characters. Actors who play evil characters on soap operas have even been verbally abused in public! Similarly, slapstick comics such as Will Ferrell have expressed frustration that people expect them to be wacky loons in real life. On the one hand, these errors make sense because we know these people only as their fictional characters. On the other hand, they are great examples of the FAE because we know that most of the time actors are saying lines written for them by someone else and are being directed with regard to their appearance, movements, and nonverbal behaviors.

How Do We Remember?

Retrieving information from long-term memory seems like it should be a fairly objective process. We experienced some event and then try to retrieve that event from our mind's storage chest of information. However, like our perceptions and encoding of the social world, the process of retrieval is colored by many of the factors discussed in chapter 3: our biases, our schemas, our motives, our goals, and our emotions (Talmi et al., 2019). As the psychologist John Kihlstrom (1994, p. 341) put it, "Memory is not so much like reading a book as it is like writing one from fragmentary notes." When we seek to remember an event, we often have to build that memory from the recollections available to us. We reconstruct an idea of what happened by bringing to mind bits of evidence in much the same way an investigator might interview different sources and pull together several pieces of evidence to create a coherent picture of what happened. In trying to gather this information, we may intend to seek accurate knowledge. After all, who doesn't want to be accurate? But the problem is that motives to reach conclusions that fit with what we expect or what we desire often crash the party. Indeed, among the more potent tools that we use to reconstruct our memories are our schemas. As you might expect from our discussion of schemas in chapter 3, when we try to recall information about an event, our schemas guide what comes to mind.

Elaborate Attributional Processes

So far we have focused on the relatively quick, automatic ways people arrive at causal attributions: relying on salience, jumping to a correspondent inference, and perhaps making a correction. But when people are sufficiently surprised or care enough, they put effort into gathering information and thinking carefully before making a causal attribution. Imagine a young woman excited about a first date with a young man she met in class and really liked. They were supposed to meet at a restaurant at 7:00 p.m., but he's not there at 7:00, or 7:05, or 7:15. She would undoubtedly begin considering causal explanations for this unexpected, unwelcome event: Did he change his mind? Was he in a car accident? Is he always late?

Social Media and Memory

Social media is an ever-present strand in the fabric of social life, as evidenced by the 350 million photos uploaded to Facebook per day (Omnicore, 2020) and 8,800 tweets per second (internetlivestats.com, 2020). You've likely noticed that at just about any event, whether a visit to a tourist attraction or a get-together with friends, most people seem to be tweeting, snapchatting, or posting. Have you thought about how our pervasive use of social media to preserve and share our experiences might come at the cost of actually encoding those experiences and laying down retrievable traces in memory? Tamir and colleagues (2018) considered this very question. They had participants in one study watch a TED talk (an educational video lecture). In another study, they recruited participants visiting a museum. In both studies, they had some participants use social media to share their experience and told others not to use social media. Both immediately after the experience and a week later, the researchers assessed participants' memories for the events. Those who had used social media during their experiences showed poorer memory for the event than those who did not. Similar memory impairments have been found to occur if people take pictures of the events they experience (Barasch et al., 2017). Social media is certainly an important and valuable aspect of our world, but dividing your attention between your phone and an experience can impair your ability to recall that experience.

Does the FAE Occur Across Cultures?

Some social psychologists have proposed that the FAE is a product of individualistic cultural worldviews, which emphasize personality traits and view individuals as being responsible for their own actions (Watson, 1982; Weary et al., 1980). In fact, the original evidence of the FAE was gathered in the United States and other relatively individualistic cultures. And it does seem clear, as we noted in chapter 2, that people in more collectivistic cultures are more attentive to the situational context in which behavior occurs and more likely to view people as part of the larger groups to which they belong. Indeed, when directed to explain behavior, people from more collectivistic cultures (e.g., China) generally give more relative weight to external, situational factors than do people from more individualistic cultures (Choi & Nisbett, 1998; Miller, 1984; Morris & Peng, 1994). Such research suggests that socialization in a collectivistic culture might sensitize people to contextual explanations of behavior. Evidence of cross-cultural variation in attributional biases doesn't necessarily mean that individuals raised in collectivistic settings don't form impressions of people's personalities by observing their behavior. Douglas Krull and colleagues (1999) found that when participants are asked to judge another person's attitudes or traits on the basis of an observed behavior, people from collectivistic cultures such as China are just as susceptible to the FAE as people from more individualistic cultures such as the United States. Around the globe, when the goal is to judge a person, we judge people by their behavior. But when the goal is to judge the cause of a behavior, there is cultural variation in how much the behavior is attributed to the person or the situation.

Judging Others

The FAE has implications for how people judge others—for example, defendants in court—and for how people judge social issues pertinent to individuals and groups. If people are likely to make internal attributions to drug addicts, homeless people, and welfare recipients, they are probably more supportive of treating these groups harshly and less likely to entertain ways to change environmental factors that contribute to these problems. Linda Skitka and colleagues (2002) found that, consistent with this reasoning, American political conservatives seem to be more susceptible to the FAE than liberals and are therefore less sympathetic to those low on the socioeconomic ladder. Likewise, conservatives judging wealthy people are led by the FAE toward attributions to those people's abilities and initiative rather than to their trust funds, connections, or lucky breaks.

Finger-Pointing

The actor-observer effect has clear implications for interpersonal and intergroup relations. When things are going badly in a relationship, whether it is a friendship, a marriage, or an alliance between two countries, each actor is likely to view the salient external situation as responsible for the problems. This means the friend, the marriage partner, or the other country is seen as the cause of the problem. And this, of course, can create and intensify finger-pointing and hostility between individuals and groups. Storms's (1973) work suggests that one way to combat this kind of attributional finger-pointing is to make the other party's perspective on the issues salient. This might defuse tensions by helping the parties see how they themselves are contributing to the problem.

Eyewitness Testimony, Confessions, and False Memories

The misinformation effect suggests ways in which real-life investigations can be biased. Eyewitness testimony and confessions are the most influential forms of evidence in trials, and thousands of cases have been decided based on such evidence. However, the misinformation effect suggests that leading questions by police investigators and exposure to information after the event in question are capable of influencing witnesses to remember events in ways, or with a degree of confidence, that may not be accurate. A well-publicized example is the case of Ronald Cotton, falsely convicted in 1984 of burglary and rape based on the eyewitness testimony of the victim, Jennifer Thompson-Cannino. After initially studying a photographic lineup for some time, Thompson-Cannino tentatively identified Cotton, and the detective congratulated her on doing a good job. This affirmation of her tentative identification operated as a form of misinformation that strengthened her false belief. Subsequent and repeated images of the assailant during the course of the investigation and trial reinforced her memory such that her confidence increased over time. After serving 11 years of a life sentence, Cotton was exonerated based on DNA evidence (and the real perpetrator identified). Interestingly, Cotton and Thompson-Cannino have since worked to reform eyewitness identification procedures and cowritten a best-selling book, Picking Cotton (Weir, 2016). Not only can witnesses be led astray, but so too can confessors. Investigators in one study interviewed college-aged participants on three occasions (Shaw & Porter, 2015). They mentioned in the first interview a crime the participant had committed as an adolescent. Although the participant had not actually committed the crime, the investigators claimed that the information had come from the participant's parents. To enhance believability, the investigators wove in some details the participant had actually experienced as a youth.
At the first interview, none of the participants recalled such an event having occurred, but by the third interview, 70% of participants falsely remembered having committed a theft or an assault at about age 12. Tempted to think such false confessions rarely occur in actual legal investigations? Think again. About 25% of exonerations (instances in which a convicted individual is later declared not guilty) based on DNA evidence involved false confessions, and, not surprisingly, such false confessions also taint the way other evidence is evaluated (Kassin et al., 2012). Understanding how false memories are created can help us make sense of controversies about repressed memories of childhood sexual abuse (Otgaar et al., 2019). Some researchers suggest that even such dramatic memories can be falsely reconstructed. This may occur when a person is especially susceptible to suggestion, is suffering from psychological difficulties, and is motivated to understand and get past those problems. In the course of therapy, the seeds of such memories can be planted by leading questions from the therapist (Kunda, 1999; Schacter, 1996). Although it is not clear how often false memories of abuse occur, real memories of such horrific experiences may be initially dismissed because they don't fit the schema most of us have of how adults—whether coaches, priests, or parents—treat children. This phenomenon was portrayed in the award-winning 2015 film Spotlight (McCarthy, 2015), which recounts how Boston Globe journalists eventually brought to light child sexual abuse that numerous Catholic priests in the Boston area had perpetrated over an extended period of time.

Stereotyping

The process just described involves applying a schema we have of a type of people, "attractive people," to judge an individual. This is what we do when we stereotype others. Stereotyping is a cognitive shortcut or heuristic, a quick and easy way to get an idea of what a person might be like. Stereotyping is an application of schematic processing. Forming a completely accurate and individualized impression of a person (i.e., one that is unbiased by stereotypes) is an effortful process. We often fall back on mental shortcuts when the stakes are low ("Does it really matter if I falsely assume that Tom is an engineer?") or when we aren't especially motivated to be accurate. But even when the stakes are high and our judgments matter, we can still be biased by our stereotypes when we are mentally fatigued. For example, when participants are asked to judge the guilt or innocence of a defendant on the basis of ambiguous evidence, their decisions are more likely to be biased in stereotypical ways when they are in the off-cycle of their circadian rhythm, such as at 8:00 a.m. for people who normally reach their cognitive peak in the evening (Bodenhausen, 1990). In such situations, our tendency toward stereotyping may be cognitively functional, but it can clearly have very damaging social costs. In this way, general stereotypes about a group of people are often employed to form an impression of individual members of that group. In addition, sometimes we take a bit of information we might know about a person and erroneously assume that the person is part of a larger category merely because he or she seems to map onto our schema of that category. For example, imagine that you've been given the following description of a person chosen from a pool of 100 professionals: Jack is a 45-year-old man. He is married and has four children. He is generally conservative, careful, and ambitious.
He shows no interest in political and social issues and spends most of his free time on his many hobbies, which include home carpentry, sailing, and mathematical puzzles. If this is all you knew about Jack, do you think it's more likely that Jack is an engineer or a lawyer? If you are like most of the participants who were faced with this judgment in a study carried out in 1973, you'd stake your bet on engineer. But what if you were also told that of the 100 professionals, 70 are lawyers and 30 are engineers? Would that make you less likely to guess engineer? According to research by Amos Tversky and the Nobel Prize winner Daniel Kahneman (Kahneman & Tversky, 1973), the answer is "no." As long as the description seems more representative of an engineer than a lawyer, participants guess engineer, regardless of whether the person was picked out of a pool of 70% lawyers or 70% engineers! Such erroneous judgment occurs because people fall prey to the representativeness heuristic, a tendency to overestimate the likelihood that a target is part of a larger category if the person has features that seem representative of that category. In this case, "lacking interest in political issues and enjoying mathematical puzzles" seems more representative of an engineer than of a lawyer. But this conclusion depends heavily on the validity of these stereotypes and involves ignoring statistical evidence regarding the relative frequency of particular events or types of people. Even when the statistical evidence showed that far more people in the pool were lawyers than engineers (that is, a 70% base rate of lawyers), the pull of the heuristic was sufficiently powerful to override this information.
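The base-rate logic that participants ignored can be worked through with Bayes' rule. Suppose, purely for illustration (this likelihood ratio is an assumption, not an estimate from Kahneman and Tversky's data), that Jack's description is four times as likely for an engineer as for a lawyer. Even then, a 30% base rate of engineers keeps the probability of "engineer" far from certainty, and the answer should shift substantially when the base rate flips to 70%:

```python
def posterior_engineer(base_rate_engineer, likelihood_ratio):
    """P(engineer | description) via Bayes' rule in odds form.

    likelihood_ratio = P(description | engineer) / P(description | lawyer).
    The value 4.0 used below is a hypothetical illustration.
    """
    prior_odds = base_rate_engineer / (1 - base_rate_engineer)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# 30 engineers among 100 professionals, description 4x more typical of engineers:
p_30 = posterior_engineer(0.30, 4.0)   # ≈ 0.63
# Same description, but 70 engineers in the pool:
p_70 = posterior_engineer(0.70, 4.0)   # ≈ 0.90

# A normative judge should answer differently in the two conditions;
# participants relying on representativeness gave essentially the same
# answer regardless of the base rate.
print(round(p_30, 2), round(p_70, 2))
```

The heuristic, in other words, acts as if only the likelihood ratio mattered and the prior odds were always even.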

Is It Better to Generate Upward or Downward Counterfactuals?

The work on counterfactual thinking illustrates how the human capacity for imagining "if only" alternatives to past events plays a central role in our emotional reactions to those events. Now, in light of all this, you might be wondering, is it better to generate upward or downward counterfactuals? That depends on a few factors. If you're down in the dumps and just want to feel better about what happened, downward counterfactuals and imagining a worse outcome can improve how you feel. But that's not always the most productive response. Sometimes we can learn a lot from thoughts that make us feel worse. Indeed, if the outcome pertains to an event that is likely to reoccur in the future, upward counterfactuals can give you a game plan for improvement or avoiding the bad outcome. But it also depends on whether you're able to exert any control over the outcome that you experienced or that you might face in the future (Roese, 1994). Say that you get into a car accident, because—shame on you—you were texting while driving. In this case, assuming that you're reasonably okay, it would be more productive to generate an upward counterfactual, such as "If only I had not been texting, I would not have hit the stop sign." This teaches you to avoid texting while driving, and since you're likely to be driving again, this is a good lesson to learn! But say you were attentively driving when someone ran a stop sign and nailed your rear fender. In this case, the outcome is out of your control, and so you're better off generating a downward counterfactual and thanking your lucky stars that it was just a fender bender, and you're all right.

Remembering Things Past

Through our senses, we take in information from the world around us. But we don't just perceive and then act. We are also equipped with the important ability to lay down traces of memory that allow us to build a record of events, people, and objects we have encountered in the past. This record of memory helps us make sense of the present and also allows us to learn from past experience so we can do better in the future. In our everyday understanding, we think about memories as the past record of personal experiences we have had (that time I almost caught on fire when I sprayed an aerosol can of cooking spray at a lit barbecue grill). But memory is really much more than that; in fact, there are different types of memory, and a sequence of processes underlies how memories get formed and are recalled. Let's consider how memories are formed so we can then understand how they are influenced by various social factors.

Brief Encounters: Impressions from Thin Slices

We are surprisingly accurate at forming impressions of individuals by observing what they say and do. We can even decode certain personality characteristics based on very thin slices, such as a 30-second video clip, of a person's behavior (Ambady & Rosenthal, 1992). Having more time to observe the person increases accuracy, but not by much (Murphy et al., 2019). Indeed, in one study, the impressions participants formed of a person based on a photograph were remarkably similar to their impressions a month later, after they had met the person (Gunaydin et al., 2017). When it comes to forming impressions, it seems that people ignore the old adage "Don't judge a book by its cover." Part of the reason may be that such judging works pretty well. In fact, people can accurately perceive the personality traits of others without ever seeing or meeting them, having only evidence of everyday behaviors such as what they post on Facebook (Vazire & Gosling, 2004). Some personality traits, such as being socially extraverted, can be predicted by knowing that a person has a preference for music with vocals (Rentfrow & Gosling, 2006). Other traits, like conscientiousness and openness to new experiences, can be accurately perceived merely by seeing people's office space or bedrooms (Gosling et al., 2002). Check out the exercise in FIGURE 4.7 to get a sense of what we mean. Finally, as you might expect, some people are better at guessing personalities than others. People who are higher in empathetic understanding and have the ability to adopt another's perspective tend to make more accurate judgments about the personalities of others (Colman et al., 2017).

Building an Impression from the Bottom Up: Decoding the Behaviors and Minds of Others

We build an impression from the bottom up by gathering individual observations of a person and combining them into an overall judgment. As we listen and observe, we begin inferring the individual's traits, attitudes, intentions, and goals, largely through the attribution processes we've already described. As we move toward a general impression of the person, there is a negativity bias, a tendency to weigh instances of negative behavior more heavily than instances of positive behavior (Pratto & John, 1991). This occurs for two reasons. First, we likely have an adaptive tendency to be particularly sensitive to detecting negative things in our environment (Ito et al., 1998). Second, most of the time people follow norms of good behavior, so bad behavior is more attention grabbing and may seem to reveal a person's "true colors." As a consequence, we are particularly likely to remember when someone we thought was good does a bad thing, but we easily overlook it when a person whom we're used to seeing behave badly does something good (Bartholow, 2010).

Downward Counterfactuals

We often also generate downward counterfactuals, thoughts of alternatives that are worse than what actually happened. These counterfactuals don't help us prepare better for the future, but they do help us feel better about the past (Roese, 1994). By making salient possible outcomes that would have been worse than what actually happened, downward counterfactuals serve a consolation function. After a robbery, you might conclude that although the thieves took your television, at least they didn't get your laptop. When visiting a friend in the hospital who broke both her legs in a car accident, people might offer consoling comments such as, "You were lucky—you could have had spine damage and been paralyzed for life." It is worth considering how people use counterfactuals to reframe such bad events. While visiting Los Angeles once, a football player from one of our schools was shot in the leg by a stray bullet. The bullet missed the bone, and the newspaper emphasized how lucky the player was, because if the bullet had hit the bone, it would have caused more serious, potentially permanent damage. That makes sense, but the player would have been even luckier if he hadn't been shot at all! So whether he was lucky or not depends on whether you focus on the upward counterfactual of not being shot at all or on the downward counterfactual of the bullet shattering a bone. When people want to put a positive spin on an outcome, they choose the downward counterfactual. Retail stores often take advantage of how making a downward counterfactual salient can place actual outcomes in a more positive light. Imagine that you come upon a pair of athletic shoes with a sign indicating a price of $99. You may think, "That's not a bad price." But what if the sign indicated that the shoes were reduced 50%, from $199 to $99? Such a sign essentially makes salient a downward counterfactual: The shoes could have cost $199!
Does this downward counterfactual make paying $99 for the shoes more enticing? You might want to keep this in mind next time you shop online.

Transference

We often automatically perceive certain basic characteristics about a person and then infer that the individual will share the features we associate with similar people. Sometimes this quick inference results from the person's reminding us of someone we already know. Suppose you meet someone for the first time at a party, and your first thought is that she bears some resemblance to your favorite cousin. Research by Susan Andersen and colleagues (Andersen et al., 1996) suggests that because of this perceived similarity, you will be more likely to assume that this new acquaintance resembles your cousin not just in appearance but in personality as well. This process was first identified by Freud (1912/1958) and labeled transference. Transference is a complex process in psychoanalytic theory, but in social psychological research it has been more narrowly defined as forming an impression of, and feelings for, an unfamiliar person by using the schema one has for a familiar person who resembles him or her in some way.

Motivational Bias in Attribution

What makes causal attributions so intriguing to social psychologists—and we hope to you as well—is that they are both important and ambiguous. They are important because they play such a large role in the judgments and decisions we make about other people and about ourselves. They are ambiguous because we can't really see or measure causality; our causal attributions are based on guesswork because we rarely if ever have direct evidence that proves what caused a given behavior. And if we did, it would probably tell us the behavior was caused by a complex interaction of internal and external factors. For instance, any performance on a test is likely to be determined by a combination of the individual's aptitude, physical and mental health, amount of studying, and other events in the person's life commanding attention, as well as the particular questions on the test, the amount of time allotted for each question, and the quality of instruction in the course. And yet we typically either jump on the most salient attribution or do a little thinking and information gathering and then pick a single attribution, or at most two contributing factors, and discount other possibilities (Kelley, 1967). As Heider (1958) observed, because causal attributions are often derived from complex and ambiguous circumstances, there is plenty of leeway for them to be influenced by motivations other than a desire for an accurate depiction of causality. Think about how we make attributions for mass killings, an unfortunately all too common event in today's world. Sometimes the intent is clear, but other times it is more ambiguous. What kind of attributions have you made in such ambiguous situations? Noor and colleagues (2019) found that motivations bias people's attributions. When Germans who were anti-immigration read about a violent Syrian refugee, they ascribed terrorist motives. But Germans who were pro-immigration attributed the behavior to mental instability. 
Our attributions are also biased by our preferred views of the way the world works—our desire to maintain specific beliefs. Recall that people generally prefer to believe the world is just: that good things happen to good people and bad things happen to bad people (Lerner, 1980). One way we preserve this belief is to view people as responsible for the outcomes they get. When people are strongly motivated to believe the world is just, they are especially likely to blame those who have had bad things happen to them, such as people who have contracted STIs, rape victims, battered spouses, and the poor (e.g., Borgida & Brekke, 1985; Furnham & Gunter, 1984; Hafer & Bègue, 2005; Summers & Feldman, 1984). Moreover, because we value people who affirm our preferred way of viewing the world, we tend to like those who endorse just-world beliefs and to assume they are more likely to be successful. When people proclaim that the world is unjust and has given them a raw deal, we assume they will be less successful, even if they are otherwise just as competent as those endorsing just-world beliefs (Alves et al., 2019). Understanding attributions enables us to appreciate their powerful role in how we interact with the world. Attributions lead us to like or blame others and even, at times, to blame ourselves. For example, messages advocating for women's empowerment can lead women to attribute responsibility for gender-based disadvantages to themselves (Kim et al., 2018). The concern here is that a message indicating that women have the power to overcome workplace gender inequalities can also lead them to perceive themselves as responsible for creating those inequalities.

Beginning with the Basics: Perceiving Faces, Physical Attributes, and Group Membership

When we encounter another person, one of the first things we recognize is whether that person is someone we already know or a stranger. As we mentioned in chapter 2, a region in the temporal lobe of the brain called the fusiform face area helps us recognize the people we know (Kanwisher et al., 1997). We see the essential role of the fusiform face area from studies of people who suffer damage to this region of the brain. Such individuals suffer from prosopagnosia, the inability to recognize familiar faces, even though they are quite capable of identifying other familiar objects. It seems that the ability to recognize the people we know was so important to human evolution that our brains have a region specifically devoted to this task. Research is beginning to suggest that those with autism show less activation in this brain region, consistent with the notion that autism impairs social perception (Schultz, 2005). Our brains are also highly attuned to certain physical characteristics of other people (Kramer et al., 2017; Messick & Mackie, 1989). These include a person's age and sex and whether we are related to him or her (Lieberman et al., 2008). People may have evolved to detect these cues automatically because they are useful for efficiently distinguishing allies from enemies, potential mates, and close relatives (Kurzban et al., 2001). The idea is that to survive, our ancestors had to avoid dangerous conflicts with others and avoid infection from people who were carrying disease. Individuals who could quickly size up another person's age, sex, and other physical indicators of health, strength, and similarity most likely had better success surviving and reproducing than those who made these judgments less quickly or accurately. We can even detect that someone is sick based on a photo, and when we do, physiological reactions are triggered that cue the immune system to prepare to fend off potential disease (Schaller et al., 2010).

What If, If Only: Counterfactual Thinking

When we settle on a causal attribution for an event, we also often think about how changing that causal factor could have changed the event. For example, what if we decide that the accident we described at the outset of the chapter occurred because the guy in the red sports car was texting while driving? We might then think if only the person hadn't been looking at his phone, he would have seen you starting to make a left turn and would have slowed down and avoided causing all that damage. This process of imagining how some event could have turned out differently is referred to as counterfactual thinking (Roese & Epstude, 2017). Counterfactuals are deeply ingrained in how we react to events; they are associated with unique patterns of activation of regions of the brain (De Brigard & Parikh, 2019), and they often affect us without our conscious awareness. In fact, the research we are about to present demonstrates that counterfactual thoughts routinely influence how we judge and respond emotionally to events in our lives.

Common Sense Psychology

Working from a Gestalt perspective, Heider assumed that the same kinds of rules that influence the organization of visual sensations also guide most people's impressions of other people and social situations. In one early study (Heider & Simmel, 1944), people watched a rather primitive animated film in which a disk, a small triangle, and a larger triangle moved in and out of a larger square with an opening. The participants were then asked to describe what they saw (FIGURE 4.2). People tended to depict the actions of the geometric objects in terms of causes, effects, and intentions, such as "The larger triangle chased the smaller triangle out of the room [the larger square]." This tendency, along with his observations of how people talked about their social lives in ordinary conversation, led Heider to propose that people organize their perceptions of action in the social world in terms of causes and effects. Specifically, people tend to explain events in terms of particular causes. Heider referred to such explanations as causal attributions. Because causal attributions help people make sense of, and find meaning in, their social worlds, they are of great importance. When an employee is late, whether the employer attributes that behavior to the person's laziness or to her tough circumstances can determine whether she is fired or not. If a woman shoots her abusive husband, a jury may have to decide if she did it because she feared for her life or because she wanted to collect on his life insurance. In any given election, your vote is influenced by which political party's policies you view as being responsible for the current state of the country. In nearly every domain of life, causal attributions play a significant role.

