Psychology 1010 exam 4 part 2


Discuss Hull and Spence's theory that drives motivate behavior in an attempt to maintain homeostasis.


Explain the correspondence bias and the actor-observer effect.

As sensible as this seems, research suggests that people don't always make attributions correctly. The correspondence bias is the tendency to make a dispositional attribution even when a person's behavior was caused by the situation (Gilbert & Malone, 1995; Jones & Harris, 1967; Ross, 1977). This bias is so common that it is sometimes called the fundamental attribution error. For example, volunteers in one experiment played a trivia game in which one participant acted as the "quizmaster" and made up a list of unusual questions, another participant acted as the "contestant" and tried to answer those questions, and a third participant acted as the "observer" and simply watched the game. The quizmasters tended to ask tricky questions based on their own idiosyncratic knowledge, and contestants were generally unable to answer them. After watching the game, the observers were asked to decide how knowledgeable the quizmaster and the contestant were. Although the quizmasters had asked good questions and the contestants had given bad answers, it should have been clear to the observers that all this asking and answering was a product of the roles they had been assigned to play and that the contestant would have asked equally good questions and the quizmaster would have given equally bad answers had their roles been reversed. And yet observers tended to rate the quizmaster as more knowledgeable than the contestant (Ross, Amabile, & Steinmetz, 1977) and were more likely to choose the quizmaster as their own partner in an upcoming game (Quattrone, 1982). Even when we know that a successful athlete had a home field advantage or that a successful entrepreneur had family connections, we tend to attribute their success to talent and tenacity. Although the correspondence bias is quite robust, it is more likely to occur under some circumstances than others (Choi, Nisbett, & Norenzayan, 1999; D'Agostino & Fincher-Kiefer, 1992; Fein, Hilton, & Miller, 1990). For example, we are more prone to correspondence bias when judging other people's behavior than when judging our own. The actor-observer effect is the tendency to make situational attributions for our own behaviors while making dispositional attributions for the identical behavior of others (Jones & Nisbett, 1972). When college students are asked to explain why they and their friends chose their majors, they tend to explain their own choices in terms of situations ("I chose economics because my parents told me I have to support myself as soon as I'm done with college") and their friends' choices in terms of dispositions ("Norma chose economics because she's materialistic") (Nisbett et al., 1973). The actor-observer effect occurs because people typically have more information about the situations that caused their own behavior than about the situations that caused other people's behavior. We can remember getting the please-major-in-something-practical lecture from our parents, but we weren't at Norma's house to see her get the same lecture. As observers, we are naturally focused on another person's behavior, but as actors, we are focused on the situations in which our behavior occurs. Indeed, when people are shown videotapes of their conversations that allow them to see themselves from their partner's point of view, they tend to make dispositional attributions for their own behavior and situational attributions for their partner's (Storms, 1973; Taylor & Fiske, 1975).

Discuss the problems of instinct theory as a primary conceptualization of motivation, noting objections raised by behaviorists.

The hedonic principle sets the stage for an understanding of motivation but leaves many questions unanswered. For example, if our primary motivation is to keep the needle on good, so to speak, then which things push the needle in that direction and which things push it away? And where do these things get the power to push our needle around, and exactly how do they do the pushing? The answers to such questions lie in two concepts that have played an unusually important role in the history of psychology: instincts and drives. When a newborn baby is given a drop of sugar water, it smiles, and when it is given a check for $10,000, it acts like it couldn't care less. By the time the baby goes to college, these responses pretty much reverse. It seems clear that nature endows us with certain motivations and that experience endows us with others. William James (1890) called the natural tendency to seek a particular goal an instinct, which he defined as "the faculty of acting in such a way as to produce certain ends, without foresight of the ends, and without previous education in the performance" (p. 383). According to James, nature hardwired penguins, parrots, puppies, and people to want certain things without training and to execute the behaviors that produce these things without thinking. By 1930, the concept of instinct had fallen out of fashion. Not only did it fail to explain anything, but it also flew in the face of American psychology's hot new trend: behaviorism. Behaviorists rejected the concept of instinct on two grounds. First, they believed that behavior should be explained by the external stimuli that evoke it and not by the hypothetical internal states on which it depends. John Watson (1913) had written that "the time seems to have come when psychology must discard all reference to consciousness" (p. 163), and behaviorists saw instincts as just the sort of unnecessary "internal talk" that Watson forbade. Second, behaviorists wanted nothing to do with the notion of inherited behavior because they believed that all complex behavior was learned. Because instincts were inherited tendencies that resided inside the organism, behaviorists considered them doubly repugnant. But within a few decades, some of Watson's younger followers began to realize that the strict prohibition against the mention of internal states made certain phenomena difficult to explain. For example, if all behavior is a response to an external stimulus, then why does a rat that is sitting still in its cage at 9:00 a.m. start wandering around and looking for food by noon? Nothing in the cage has changed, so why has the rat's behavior changed? What visible, measurable external stimulus is the wandering rat responding to? The obvious answer is that the rat is responding to something inside itself, which meant that Watson's young followers—the "new behaviorists" as they called themselves—were forced to look inside the rat to explain its wandering. How could they do that without talking about the "thoughts" and "feelings" that Watson had forbidden them to mention? They began by noting that bodies are a bit like thermostats. When thermostats detect that the room is too cold, they send signals that initiate corrective actions such as turning on a furnace. Similarly, when bodies detect that they are underfed, they send signals that initiate corrective actions such as eating. 
To survive, an organism needs to maintain precise levels of nutrition, warmth, and so on, and when these levels depart from an optimal point, the organism receives a signal to take corrective action. That signal is called a drive, which is an internal state caused by physiological needs. According to this view, it isn't food per se that organisms find rewarding; it is the reduction of the drive for food. Hunger is a drive, a drive is an internal state, and when organisms eat, they are attempting to change their internal state.

Discuss the relationship between arousal and extraversion, and relate these findings to underlying behavioral activation and behavioral inhibition systems.

Behavioral and physiological research generally supports Eysenck's view. When introverts and extraverts are presented with a range of intense stimuli, introverts respond more strongly, including salivating more when a drop of lemon juice is placed on their tongues and reacting more negatively to electric shocks or loud noises (Bartol & Costello, 1976; Stelmack, 1990). This reactivity has an impact on the ability to concentrate: Extraverts tend to perform well at tasks that are done in a noisy, arousing context, such as bartending or teaching, whereas introverts are better at tasks that require concentration in tranquil contexts, such as the work of a librarian or nighttime security guard (Geen, 1984; Lieberman & Rosenthal, 2001; Matthews & Gilliland, 1999). Refining Eysenck's ideas about arousability, Jeffrey Gray (1970) proposed that the dimensions of extraversion/introversion and neuroticism reflect two basic brain systems. The behavioral activation system (BAS), essentially a "go" system, activates approach behavior in response to the anticipation of reward. The extravert has a highly reactive BAS and will actively engage the environment, seeking social reinforcement and staying on the "go." The behavioral inhibition system (BIS), a "stop" system, inhibits behavior in response to stimuli signaling punishment. The emotionally unstable person, in turn, has a highly reactive BIS and will focus on negative outcomes and be on the lookout for "stop" signs. Studies of brain electrical activity (electroencephalograms—EEGs) and functional brain imaging (fMRI) suggest that individual differences in activation and inhibition arise through the operation of distinct brain systems underlying these tendencies (DeYoung & Gray, 2009).

Describe burnout, noting its causes and consequences

Did you ever take a class from an instructor who had lost interest in the job? Maybe the teacher looked distant and blank, almost robotic, giving predictable and humdrum lessons each day—as if it didn't matter whether anyone was listening. Now imagine being this instructor. You decided to teach because you wanted to shape young minds. You worked hard, and for a while things were great. But one day, you look up to see a roomful of miserable students who are bored and don't care about anything you have to say. They text-message while you talk and start putting papers away long before the end of class. You're happy at work only when you're not in class. When people feel this way, especially about their careers, they are suffering from burnout, a state of physical, emotional, and mental exhaustion created by long-term involvement in an emotionally demanding situation and accompanied by lowered performance and motivation. What causes burnout? One theory suggests that the culprit is using your job to give meaning to your life (Pines, 1993). If you define yourself only by your career and gauge your self-worth by success at work, you risk having nothing left when work fails. For example, a teacher in danger of burnout might do well to invest time in family, hobbies, or other self-expressions. Others argue that some emotionally stressful jobs lead to burnout no matter how they are approached and that active efforts to overcome the stress before burnout occurs are important. The stress management techniques discussed in the next section may be lifesavers for people in such jobs.

Discuss evidence for the facial feedback hypothesis, and describe how the causal pathway between emotional experiences and emotional expression can be bidirectional

Emotional experiences can cause emotional expressions. But interestingly, it also works the other way around. The facial feedback hypothesis (Adelmann & Zajonc, 1989; Izard, 1971; Tomkins, 1981) suggests that emotional expressions can cause the emotional experiences they signify. For instance, people feel happier when they are asked to make the sound of a long e or to hold a pencil in their teeth (both of which cause contraction of the zygomatic major) than when they are asked to make the sound of a long u or to hold a pencil in their lips (Strack, Martin, & Stepper, 1988; Zajonc, 1989) (see Figure 8.7). Most researchers think that smiles and happiness become strongly associated through experience, with one generally bringing about the other. These expression-causes-emotion effects are not limited to the face. For example, people who are asked to make a fist rate themselves as more assertive (Schubert & Koole, 2009) and people who are asked to extend their middle fingers rate others as more hostile (Chandler & Schwarz, 2009). (The odds seem pretty good that others would rate them as more hostile too).

Define anorexia nervosa and bulimia nervosa, and describe some biological and cultural causes of these eating disorders.

Feelings of hunger tell us when to eat and when to stop. But for the 10 to 30 million Americans who have eating disorders, eating is a much more complicated affair (Hoek & van Hoeken, 2003). For instance, bulimia nervosa is a disorder characterized by binge eating followed by purging. People with bulimia typically ingest large quantities of food in a relatively short period and then take laxatives or induce vomiting to purge the food from their bodies. These people are caught in a cycle: They eat to ease negative emotions such as sadness and anxiety, but then concern about weight gain leads them to experience negative emotions such as guilt and self-loathing, and these emotions then lead them to purge. Anorexia nervosa is a disorder characterized by an intense fear of being fat and severe restriction of food intake. People with anorexia tend to have a distorted body image that leads them to believe they are fat when they are actually emaciated, and they tend to be high-achieving perfectionists who see their severe control of eating as a triumph of will over impulse. Contrary to what you might expect, people with anorexia have extremely high levels of ghrelin in their blood, which suggests that their bodies are trying desperately to switch hunger on but that hunger's call is being suppressed, ignored, or overridden (Ariyasu et al., 2001). Like most eating disorders, anorexia strikes more women than men, and 40% of newly identified cases of anorexia are among females who are 15 to 19 years old. Anorexia may have both cultural and biological causes. For example, women with anorexia typically believe that thinness equals beauty, and it isn't hard to understand why. The average American woman is 5′4″ tall and weighs 140 pounds, but the average American fashion model is 5′11″ tall and weighs 117 pounds. But anorexia is not just "vanity run amok" (Striegel-Moore & Bulik, 2007). Many researchers believe that there are as-yet-undiscovered biological and/or genetic components to the illness as well. For example, although anorexia primarily affects women, men have a sharply increased risk of becoming anorexic if they have a female twin who has the disorder (Procopio & Marriott, 2007), suggesting that anorexia may have something to do with prenatal exposure to female hormones.

Describe how terror-management theory and the resulting mortality-salience hypothesis predict behavior motivated to alleviate death-related anxiety

For example, all animals strive to stay alive, but only human beings realize that this striving is ultimately in vain and that death is life's inevitable end. We and we alone know that every breath we take brings us just a little bit closer to our own demise. Some psychologists have suggested that this knowledge creates a sense of "existential terror" and that much of our behavior is merely an attempt to manage it. According to terror management theory, one of the ways that people cope with their existential terror is by developing a cultural worldview—a shared set of beliefs about what is good and right and true (Greenberg, Solomon, & Arndt, 2008; Solomon et al., 2004). These beliefs allow people to see themselves as more than mortal animals because they inhabit a world of meaning in which they can achieve symbolic immortality (e.g., by leaving a great legacy or having children) and perhaps even literal immortality (e.g., by being pious and earning a spot in the afterlife). According to this theory, our cultural worldview is a shield that buffers us against the anxiety that knowledge of our own mortality creates. Terror management theory gives rise to the mortality-salience hypothesis, which is the prediction that people who are reminded of their own mortality will work to reinforce their cultural worldviews. In the last 20 years, this hypothesis has been supported by nearly 400 studies. The results show that when people are reminded of death (often in very subtle ways, such as by flashing the word death for just a few milliseconds in a laboratory or by stopping people on a street corner that happens to be near a graveyard), they are more likely to praise and reward those who share their cultural worldviews and to derogate and punish those who don't. These responses are presumably ways of shoring up one's cultural worldview and thereby defending against the anxiety that reminders of one's own mortality naturally elicit. The motivation to avoid the anxiety associated with death is just one of dozens—perhaps even hundreds—of psychological motives that researchers have identified (Fiske, 2009; Shah & Gardner, 2008). We are motivated to like ourselves (Kwan et al., 2008; Sedikides & Gregg, 2008), to know ourselves (North & Swann, 2009; Swann et al., 2003), to belong to groups (Leary et al., 2008), to control our fates (Thompson et al., 2008), to achieve our goals (Conroy et al., 2009), and so on. Indeed, the list of psychological motives is so long that it is hard to tell when it will end. Is there a sensible way to organize this list?

Explain how hunger arises, noting the functions of signals to eat (ghrelin) and to stop eating (leptin); discuss the role of the lateral hypothalamus and the ventromedial hypothalamus as hunger and hunger-satiety centers.

For example, ghrelin, a hormone that is produced in the stomach, appears to be a signal that tells the brain to switch hunger on. When people are injected with ghrelin, they become intensely hungry and eat about 30% more than usual (Wren et al., 2001). Interestingly, ghrelin also binds to neurons in the hippocampus and temporarily improves learning and memory (Diano et al., 2006) so that we become just a little bit better at locating food when our bodies need it most. Leptin, a chemical secreted by fat cells, appears to be a signal that tells the brain to switch hunger off. It seems to do this by making food less rewarding (Farooqi et al., 2007). People who are born with a leptin deficiency have trouble controlling their appetites (Montague et al., 1997). For example, in 2002 medical researchers reported on the case of a 9-year-old girl who weighed 200 pounds, but after just a few leptin injections, she reduced her food intake by 84% and attained normal weight (Farooqi et al., 2002). The hypothalamus comprises many parts, but in general, the lateral hypothalamus receives the orexigenic signals that turn hunger on, and the ventromedial hypothalamus receives the anorexigenic signals that turn hunger off. When the lateral hypothalamus is destroyed, animals sitting in a cage full of food will starve themselves to death; when the ventromedial hypothalamus is destroyed, animals will gorge themselves to the point of illness and obesity (Miller, 1960; Steinbaum & Miller, 1965). These two structures were once thought to be the "hunger center" and "satiety center" of the brain, but recent research has shown that this view is far too simple (Woods et al., 1998). Hypothalamic structures play an important role in turning hunger on and off, but the precise way in which they execute these functions is complex and remains poorly understood (Stellar & Stellar, 1985).

Give an example of a culturally dependent emotional display rule, and describe how universal facial expressions might thwart attempts to conform to these rules.

Given how important emotional expressions are, it's no wonder people have learned to use them to their advantage. Because you can control most of the muscles in your face, you don't have to display the emotion you are actually feeling. When your roommate makes a sarcastic remark about your haircut, you may make the facial expression for contempt (accompanied, perhaps, by a reinforcing hand gesture), but when your boss makes the same remark, you probably swallow hard and display a pained smile. Your expressions are moderated by your knowledge that it is permissible to show contempt for your peers but not for your superiors. Display rules are norms for the control of emotional expression (Ekman, 1972; Ekman & Friesen, 1968). People in different cultures follow different display rules. For example, in one study, Japanese and American college students watched an unpleasant film of car accidents and amputations (Ekman, 1972; Friesen, 1972). When they didn't know that the experimenters were observing them, Japanese and American students made similar expressions of disgust, but when they realized that they were being observed, the Japanese students (but not the American students) masked their disgust with pleasant expressions. Many Asian societies have a cultural norm against displaying negative emotions in the presence of a respected person, and people in these societies may mask or neutralize their expressions. Of course, our attempts to obey our culture's display rules don't always work. Darwin (1899/2007) noted that "those muscles of the face which are least obedient to the will, will sometimes alone betray a slight and passing emotion" (p. 64). Anyone who has ever watched the loser of a beauty pageant congratulate the winner knows that voices, bodies, and faces are "leaky" instruments that often betray a person's emotional state even when he or she is pretending to feel something else. For example, even when people smile bravely to mask their disappointment, their faces tend to express small bursts of disappointment that last just 1/5 to 1/25 of a second (Porter & ten Brinke, 2008). These "micro-expressions" happen so quickly that they are almost impossible to detect with the naked eye.

Describe the trait approach to studying personality; include in your description how language classification has been used to discover core traits and Eysenck's simplified model of personality (REVIEW)

Gordon Allport (1937), one of the first trait theorists, believed people could be described in terms of traits just as an object could be described in terms of its properties. He saw a trait as a relatively stable disposition to behave in a particular and consistent way. For example, a person who keeps his books organized alphabetically in bookshelves, hangs his clothing neatly in the closet, keeps a clear agenda in a daily planner, and lists birthdays of friends and family in his calendar can be said to have the trait of orderliness. This trait consistently manifests itself in a variety of settings. The "orderliness" trait describes a person but doesn't explain his or her behavior. Why does the person behave in this way? Allport saw traits as preexisting dispositions, causes of behavior that reliably trigger the behavior. The person's orderliness, for example, is an inner property of the person that will cause the person to straighten things up and be tidy in a wide array of situations. Other personality theorists, such as Henry Murray (the originator of the TAT), suggested instead that traits reflect motives. Just as a hunger motive might explain someone's many trips to the snack bar, a need for orderliness might explain the neat closet and organized calendar (Murray & Kluckhohn, 1953). As a rule, researchers examining traits as causes have used personality inventories to measure them, whereas those examining traits as motives have more often used projective tests. The study of core traits began with an exploration of how personality is represented in the store of wisdom we call language. Generation after generation, people have described people with words, so early psychologists proposed that core traits could be discerned by finding the main themes in all the adjectives used to describe personality. In one such analysis, a painstaking count of relevant words in a dictionary of English resulted in a list of more than 18,000 potential traits (Allport & Odbert, 1936)! Attempts to narrow down the list to a more manageable set depend on the idea that traits might be related in a hierarchical pattern, with more general or abstract traits, such as neuroticism, at higher levels than more specific or concrete traits, such as quickness to anger. The highest-level traits are sometimes called dimensions or factors of personality. But how many factors are there? Different researchers have proposed different answers. Cattell (1950) proposed a 16-factor theory of personality—way down from 18,000, but still a lot—whereas others have proposed theories with far fewer basic dimensions (John, Naumann, & Soto, 2008). Hans Eysenck (1967) simplified things nicely with a model of personality with only two major traits (although he later expanded that to three). Eysenck identified one dimension that distinguished people who are sociable and active (extraverts) from those who are more introspective and quiet (introverts). He also identified a second dimension ranging from the tendency to be very neurotic or emotionally unstable to the tendency to be more emotionally stable. He believed that many behavioral tendencies could be understood in terms of their relation to these core traits.

Discuss the hormonal factors that contribute to sexual interest; describe how these hormonal factors differentially regulate sexual interest in human and nonhuman females, and discuss human gender differences in sexual interest.


Describe Maslow's hierarchy of needs.

Human beings are motivated to satisfy a variety of needs. Psychologist Abraham Maslow thought these needs formed a hierarchy, with physiological needs forming a base and self-actualization needs forming a pinnacle. He suggested that people don't experience higher needs until the needs below them have been met.

Describe the desire for consistency most people feel, noting how the foot-in-the-door technique and cognitive dissonance each stem from this desire.

If a friend told you that rabbits had just staged a coup in Antarctica and were halting all carrot exports, you probably wouldn't Google it to see if it was true. You'd know right away that your friend was joking because the statement is logically inconsistent with other things that you know are true—for example, that Antarctica does not export carrots. People evaluate the accuracy of new beliefs by assessing their consistency with old beliefs, and although this is not a foolproof method for determining whether something is true, it provides a pretty good approximation. We are motivated to be accurate, and because consistency is a rough measure of accuracy, we are motivated to be consistent as well (Cialdini, Trost, & Newsom, 1995). That motivation leaves us vulnerable to social influence. For example, the foot-in-the-door technique is a technique that involves a small request followed by a larger request (Burger, 1999). In one study (Freedman & Fraser, 1966), experimenters went to a neighborhood and knocked on doors to see if they could convince homeowners to agree to have a big ugly "Drive Carefully" sign installed in their front yards. One group of homeowners was simply asked to install the sign, and only 17% said yes. A second group of homeowners was first asked to sign a petition urging the state legislature to promote safe driving (which almost all agreed to do) and was then asked to install the ugly sign. And 55% said yes! Why would homeowners be more likely to grant two requests than one? Just imagine how the homeowners in the second group felt. They had already signed a petition stating that they thought safe driving was important, and yet they knew they didn't want to install an ugly sign in their front yards. As they wrestled with this inconsistency, they probably began to experience a feeling called cognitive dissonance, which is an unpleasant state that arises when a person recognizes the inconsistency of his or her actions, attitudes, or beliefs (Festinger, 1957). When people experience cognitive dissonance, they naturally try to alleviate it, and one way to alleviate cognitive dissonance is to change one's actions, attitudes, or beliefs in order to restore consistency among them (Aronson, 1969; Cooper & Fazio, 1984). For the homeowners, changing their minds and allowing the sign to be installed in their yards did precisely that. We are motivated to be consistent, but there are inevitably times when we just can't—for example, when we tell a friend that her new hairstyle is "daring" when it actually resembles a wet skunk after an unfortunate encounter with a snowblower. Why don't we experience cognitive dissonance under such circumstances and come to believe our own lies? Because while telling a friend that her hairstyle is daring is inconsistent with the belief that her hairstyle is hideous, it is perfectly consistent with the belief that one should be nice to one's friends. When small inconsistencies are justified by large consistencies, cognitive dissonance is reduced.

Describe the Type A behavior pattern, and link it to research on stress and cardiovascular function.

In the 1950s, cardiologists interviewed and tested 3,000 healthy middle-aged men and then tracked their subsequent cardiovascular health (Friedman & Rosenman, 1974). Some of the men displayed a Type A behavior pattern, characterized by a tendency toward easily aroused hostility, impatience, a sense of time urgency, and competitive achievement strivings. Other men displayed a less driven behavior pattern (sometimes called Type B). The Type A men were identified by their answers to questions in the interview (agreeing that they walk and talk fast, work late, set goals for themselves, work hard to win, and easily get frustrated and angry at others) and also by the pushy and impatient way in which they answered the questions. In the decade that followed, men who had been classified as Type A were twice as likely to have heart attacks compared with the Type B men. A later study of stress and anger found that medical students who responded to stress with anger and hostility were three times more likely to develop premature heart disease and six times more likely to have an early heart attack than were students who did not respond with anger (Chang et al., 2002). Stress affects the cardiovascular system to some degree in everyone but is particularly harmful in people who respond to stressful events with hostility (see also Figure 15.3).

Explain how the amygdala is involved in the appraisal of emotion, and describe the fast and slow pathways that emotional information can take through the brain

It turned out that during surgery, Klüver and Bucy had accidentally damaged a brain structure called the amygdala, which plays a special role in producing emotions such as fear. Before an animal can feel fear, its brain must first decide that there is something to be afraid of. This decision is called an appraisal, which is an evaluation of the emotion-relevant aspects of a stimulus (Arnold, 1960; Ellsworth & Scherer, 2003; Lazarus, 1984; Roseman, 1984; Roseman & Smith, 2001; Scherer, 1999, 2001). Many studies have shown that the amygdala is critical to making these appraisals. For example, some researchers performed an operation on monkeys so that information entering the monkey's left eye could be transmitted to the amygdala but information entering the monkey's right eye could not (Downer, 1961). When these monkeys were allowed to see a threatening stimulus with only their left eye, they responded with fear and alarm, but when they were allowed to see the threatening stimulus with only their right eye, they were calm and unruffled. These results suggest that if visual information doesn't reach the amygdala, then its emotional significance cannot be assessed. Research on human beings has reached a similar conclusion. For example, normal people have superior memory for emotionally evocative words such as death or vomit, but people whose amygdalae are damaged (LaBar & Phelps, 1998) or who take drugs that temporarily impair neurotransmission in the amygdala (van Stegeren et al., 1998) do not. The amygdala is an extremely fast and sensitive "threat detector" that is activated even when potentially threatening stimuli (such as fearful faces) are shown at speeds so fast that people are unaware of having seen them (Whalen et al., 1998). Psychologist Joseph LeDoux (2000) mapped the route that information about a stimulus takes through the brain and found that it is transmitted simultaneously along two distinct routes: the "fast pathway," which goes from the thalamus directly to the amygdala, and the "slow pathway," which goes from the thalamus to the cortex and then to the amygdala (see Figure 8.5). This means that while the cortex is slowly using the information to conduct a full-scale investigation of the stimulus's identity and importance, the amygdala has already received the information directly from the thalamus and is making one very fast and very simple decision: "Is this a threat?" If the amygdala's answer to that question is "yes," it initiates the neural processes that ultimately produce the bodily reactions and conscious experience that we call fear.

Define motivation, and describe the two functions of emotion.

Leonardo the robot doesn't care about anything, doesn't value anything, doesn't desire anything. He can learn, but he cannot yearn. Because he doesn't have emotions, he isn't motivated in the same way that human beings are. Motivation refers to the purpose for or psychological cause of an action, and it is no coincidence that the words emotion and motivation share a common linguistic root that means "to move." Unlike robots, human beings act because their emotions move them, and emotions do this in two different ways: First, emotions provide people with information about the world, and second, emotions are the objectives toward which people strive. Let's examine each of these in turn. The first function of emotion is to provide us with information about the world. For example, people report having better lives when they are asked about their life satisfaction on a sunny day rather than a rainy day. Why? Because people feel happier on sunny days, and they use their happiness as information about the quality of their lives (Schwarz & Clore, 1983). People who are in good moods believe that they have a higher probability of winning a lottery than do people who are in bad moods. Why? Because people use their moods as information about the likelihood of succeeding at a task (Isen & Patrick, 1983). We all know that satisfying lives and bright futures make us feel good—so when we feel good, we conclude that our lives must be satisfying and our futures must be bright. Because the world influences our emotions, our emotions can provide information about the world (Schwarz, Mannheim, & Clore, 1988). The second function of emotion is to give us something to do with that information. People naturally prefer to experience positive rather than negative emotions; thus happiness, satisfaction, pleasure, and joy are often the goals, the ends, and the objectives toward which our behavior is aimed. The hedonic principle is the claim that people are motivated to experience pleasure and avoid pain. According to the hedonic principle, our emotional experience can be thought of as a gauge that ranges from bad to good, and our primary motivation—perhaps even our sole motivation—is to keep the needle on the gauge as close to good as possible. Even when we voluntarily do things that tilt the needle in the opposite direction, such as letting the dentist drill our teeth or waking up early for a boring class, we are doing these things because we believe that they will nudge the needle toward good in the future and keep it there longer.

Define emotional expression, describe two lines of evidence supporting the universality hypothesis for facial expressions of emotion, and list emotions that have been shown to have a universal quality.

Leonardo the robot may not be able to feel, but boy oh boy, can he smile. And wink. And nod. Indeed, one of the reasons why people who interact with him find it so hard to think of him as a machine is that Leonardo expresses emotions that he doesn't actually have. An emotional expression is an observable sign of an emotional state, and while robots have to be taught to exhibit them, human beings do it naturally. Our emotional states influence just about everything we do: from the way we talk—intonation, inflection, loudness—to the way we walk, stand, or slump. But no part of the body is more exquisitely designed for communicating emotion than the face. Underneath every face lie 43 muscles that are capable of creating more than 10,000 unique configurations. Psychologists Paul Ekman and Wallace Friesen (1968) spent years cataloguing the muscle movements of which the human face is capable and isolated 46 unique movements. Research has shown that combinations of these muscle movements are reliably related to specific emotional states (Davidson et al., 1990). For example, when we feel happy, our zygomatic major (a muscle that pulls our lip corners up) and our orbicularis oculi (a muscle that crinkles the outside edges of our eyes) produce a unique facial expression: smiling (Ekman & Friesen, 1982; Frank, Ekman, & Friesen, 1993; Steiner, 1986). Of course, a language only works if everybody speaks the same one, and that fact led Darwin to develop the universality hypothesis, which suggests that emotional expressions have the same meaning for everyone. In other words, every human being naturally expresses happiness with a smile, and every human being naturally understands that a smile signifies happiness. Two lines of evidence suggest that Darwin was largely correct.

Differentiate between stressful life events and chronic stressors, and discuss how perceived control can contribute to the stressfulness of an event.

Life would be simpler if an occasional stressful event such as a wedding or a lost job was the only pressure we faced. At least each event would be limited in scope, with a beginning, a middle, and, ideally, an end. Unfortunately, though, life brings with it continued exposure to chronic stressors, sources of stress that occur continuously or repeatedly. Strained relationships, long lines at the supermarket, nagging relatives, overwork, money troubles—small stressors that might be easy to ignore if they happened only occasionally can accumulate to produce distress and illness. People who report having a lot of daily hassles also report more psychological symptoms (Kanner et al., 1981) and physical symptoms (Delongis et al., 1982), and these effects often have a greater and longer-lasting impact than major life events. Many chronic stressors are linked to particular environments. For example, features of city life—noise, traffic, crowding, pollution, and even the threat of violence—provide particularly insistent sources of chronic stress. In one study, children who attended schools under the flight path of an airport had higher blood pressure and gave up more easily when working on difficult problems compared with children of similar race, economic background, and ethnicity who attended nearby schools away from the noise (Cohen et al., 1980). Rural areas have their own chronic stressors, of course, especially isolation and lack of access to amenities such as health care. The realization that chronic stressors are linked to environments has spawned the subfield of environmental psychology, the scientific study of environmental effects on behavior and health. Paradoxically, events are most stressful when there is nothing to do—no way to deal with the challenge. In classic studies of perceived control, participants were asked to solve puzzles in a room filled with noise as loud as that in classrooms under the airport flight path mentioned above (Glass & Singer, 1972). The bursts of noise hurt people's performance on the tasks. However, this dramatic decline in performance was prevented among participants who were told during the noise period that they could stop the noise just by pushing a button. They didn't actually take this option, but access to the "panic button" shielded them from the detrimental effects of the noise. Similarly, the stressful effects of crowding appear to stem from the feeling that you aren't in control—that you can't get away from the crowded conditions (Sherrod, 1974).

Describe personality inventories using self-report measures and projective measures of personality; provide examples of each type and note their strengths and weaknesses.

Psychologists have figured out ways to obtain objective data on personality without driving their subjects to distraction. The most popular technique is the self-report—a series of answers to a questionnaire that asks people to indicate the extent to which sets of statements or adjectives accurately describe their own behavior or mental state. Scales based on the content of self-reports have been devised to assess a whole range of personality characteristics, all the way from general tendencies such as overall happiness (Lyubomirsky, 2008; Lyubomirsky & Lepper, 1999) to specific ones such as responding rapidly to insults (Swann & Rentfrow, 2001) or complaining about poor service (Lerman, 2006). For example, the Minnesota Multiphasic Personality Inventory (MMPI-2), a well-researched, clinical questionnaire used to assess personality and psychological problems, consists of more than 500 descriptive statements—for example, "I often feel like breaking things," "I think the world is a dangerous place," and "I'm good at socializing"—to which the respondent answers "true," "false," or "cannot say." Researchers then combine the answers to get a score on 10 main subscales, each describing some aspect of personality. The MMPI-2 measures tendencies toward clinical problems—for example, depression, hypochondria, anxiety, paranoia, and unconventional ideas or bizarre thoughts and beliefs—as well as some general personality characteristics, such as degree of masculine and feminine gender role identification, sociability versus social inhibition, and impulsivity. The MMPI-2 also includes validity scales that assess a person's attitudes toward test taking and any tendency to try to distort the results by faking answers. Personality inventories such as the MMPI-2 are easy to administer: Just give someone a pencil and away they go. The person's scores can be calculated by computer and compared with the average ratings of thousands of other test takers. Because no interpretation of the responses is needed, biases are minimized. Of course, an accurate reading of personality will occur only if people provide honest responses—especially about characteristics that might be unflattering—and if they don't always agree or always disagree, a phenomenon known as response style. The validity scales cannot eliminate these problems altogether, but they can detect them well enough to make personality inventories a generally effective means of testing, classifying, and researching many personality characteristics. The second major class of tools for evaluating personality, the projective techniques, consist of a standard series of ambiguous stimuli designed to elicit unique responses that reveal inner aspects of an individual's personality. The developers of projective tests assumed that people would project personality factors that are out of awareness—wishes, concerns, impulses, and ways of seeing the world—onto ambiguous stimuli and would not censor these responses. If you and a friend were looking at the sky one day and she became upset because one cloud looked to her like a monster, her response might reveal more about her inner life than her answer to a direct question about her fears. Probably the best-known projective technique is the Rorschach Inkblot Test, a projective personality test in which individual interpretations of the meaning of a set of unstructured inkblots are analyzed to identify a respondent's inner feelings and interpret his or her personality structure. 
An example inkblot is shown in Figure 11.1. Responses are scored according to complicated systems (derived in part from research with patients) that classify what people see (Exner, 1993; Rapaport, 1946). For example, most people who look at Figure 11.1 report seeing birds or people, so someone who is unable to see obvious items when he or she responds to a blot may be having difficulty perceiving the world as others do.

Describe a number of ways in which our verbal and nonverbal behavior is altered when we lie; provide a reason why people are poor at detecting lies, and discuss the advantages and limitations of the polygraph lie detection machine.

Our emotions don't just leak on our faces: They leak all over the place. Research has shown that many aspects of our verbal and nonverbal behavior are altered when we tell a lie (DePaulo et al., 2003). For example, liars speak more slowly, take longer to respond to questions, and respond in less detail than do those who are telling the truth. Liars are also less fluent, less engaging, more uncertain, more tense, and less pleasant than truth-tellers. Oddly enough, one of the telltale signs of a liar is that his or her performances tend to be just a bit too good. A liar's speech lacks the little imperfections that are typical of truthful speech, such as superfluous detail ("I noticed that the robber was wearing the same shoes that I saw on sale last week at Bloomingdale's and I found myself wondering what he paid for them"), spontaneous correction ("He was six feet tall...well, no, actually more like six-two"), and expressions of self-doubt ("I think he had blue eyes, but I'm really not sure"). So how good are people at detecting these clues? Studies show that human lie detection ability is pretty close to dreadful. Although trained professionals can learn to do it fairly well (Ekman & O'Sullivan, 1991; Ekman, O'Sullivan, & Frank, 1999), under most conditions ordinary people do barely better than chance (DePaulo, Stone, & Lassiter, 1985; Ekman, 1992; Zuckerman, DePaulo, & Rosenthal, 1981; Zuckerman & Driver, 1985). One reason for this is that people have a strong bias toward believing that others are sincere, which explains why people tend to mistake liars for truth-tellers more often than they mistake truth-tellers for liars (Gilbert, 1991). When people can't do something well (e.g., adding numbers or picking up 10-ton rocks), they typically turn the job over to machines (see Figure 8.9). Can machines detect lies better than we can? The answer is "yes," but that's not saying much. The most widely used lie detection machine is the polygraph, which measures a variety of physiological responses that are associated with stress, which people often feel when they are afraid of being caught in a lie. In fact, the machine is so widely used by governments and businesses that the National Research Council recently met to consider all the scientific evidence on its validity. After much study, it concluded that the polygraph can indeed detect lies at a rate that is significantly better than chance (National Research Council, 2003). However, it also concluded that "almost a century of research in scientific psychology and physiology provides little basis for the expectation that a polygraph test could have extremely high accuracy" (p. 212). In short, neither people nor machines are particularly good at lie detection, which is why lying remains such a popular sport.

Discuss the problem of obesity, and give three reasons why people tend to overeat.

Overeating can result from biochemical abnormalities. For example, obese people are often leptin-resistant—that is, their brains do not respond to the chemical message that shuts hunger off—and even leptin injections don't help them (Friedman & Halaas, 1998; Heymsfield et al., 1999). For such people, the urge to eat is incredibly compelling, and they can't "just decide" to stop eating any more than you could just decide to stop breathing (Friedman, 2003). We often eat even when we aren't really hungry. For example, we may eat to reduce negative emotions such as sadness or anxiety, we may eat out of habit ("I always have ice cream at night"), and we may eat out of social obligation ("Everyone else is ordering dessert") (Herman, Roth, & Polivy, 2003). Sometimes we eat simply because the clock tells us that we should (Schachter & Gross, 1968). (See the Real World box below.) Nature designed us to overeat. For most of our evolutionary history, the main food-related problem facing our ancestors was starvation. As a result, we developed a strong attraction to foods that provide large amounts of energy (calories) per bite—which is why most of us prefer hamburgers and milkshakes to celery and water. We also developed an ability to store excess food energy in the form of fat, which enabled us to eat more than we needed when food was plentiful and then live off our reserves when food was scarce. We are beautifully engineered for a world in which food is generally low-cal and scarce, but the problem is that we don't live in that world anymore. Instead, we live in a world in which the calorie-laden miracles of modern technology—from chocolate cupcakes to sausage pizzas—are inexpensive and readily available.

Compare conformity and obedience, and describe a classic experiment in each area.

People can influence us by invoking familiar norms. But if you've ever found yourself sneaking a peek at the diner next to you, hoping to discover whether the little fork is supposed to be used for the shrimp or the salad, then you know that other people can also influence us by defining new norms in ambiguous, confusing, or novel situations. Conformity is the tendency to do what others do simply because others are doing it. In a classic study, participants sat in a room with seven other people who appeared to be ordinary participants but who were actually actors (Asch, 1951, 1956). An experimenter explained that the participants would be shown cards with three printed lines and that their job was simply to say which of the three lines matched a "standard line" that was printed on another card (Figure 12.10). The experimenter held up a card and then asked each person to answer in turn. The real participant was among the last to be called on. Everything was normal on the first two trials, but on the third trial, something odd happened: The actors all began giving the same wrong answer! What did the real participants do? Seventy-five percent of them conformed and announced the wrong answer on at least one trial. Subsequent research has shown that these participants didn't actually misperceive the length of the lines but were instead succumbing to normative influence (Asch, 1955; Nemeth & Chiles, 1988). Giving the wrong answer was apparently the right thing to do, and so participants did it. In most situations there are a few people whom we all recognize as having special authority both to define the norms and to enforce them. The usher at a movie theater may be an underpaid high school student who isn't allowed to drink, drive, vote, or stay up past 10:00 p.m. on a school night, but in the context of the theater, the usher is the authority. So when the usher asks you to take your feet off the seat in front of you, you obey. Obedience is the tendency to do what powerful people tell us to do. Why do we obey powerful people? Well, yes, sometimes they have guns. But while powerful people are often capable of rewarding and punishing us, research shows that much of their influence is normative (Tyler, 1990). Psychologist Stanley Milgram (1963) demonstrated this in one of psychology's most infamous experiments. The participants in this experiment met a middle-aged man who was introduced as another participant but who was actually a trained actor. An experimenter in a lab coat explained that the participant would play the role of teacher and the actor would play the role of learner. The teacher and learner would sit in different rooms, the teacher would read words to the learner over a microphone, and the learner would then repeat the words back to the teacher. If the learner made a mistake, the teacher would press a button that delivered an electric shock to the learner (Figure 12.12). The shock-generating machine (which wasn't actually hooked up, of course) offered 30 levels of shock, ranging from 15 volts (labeled "slight shock") to 450 volts (labeled "Danger: severe shock"). As is now well known, about two thirds of the participants obeyed the experimenter's instructions all the way to the maximum 450-volt shock.

Describe how both major life events and minor hassles can serve as stressors

People often seem to get sick after major life events (Holmes & Rahe, 1967). In fact, simply assigning points to a person's life changes and adding them up provides a significant indicator of the person's future illness (Miller, 1996). A person who gets divorced and loses a job and has a friend die all in a year, for example, is more likely to get sick than one who escapes the year with only a divorce.

Define personality, noting how it involves thought, feeling, and behavior, and explain the difference between describing and explaining personality

Personality is an individual's characteristic style of behaving, thinking, and feeling.

Describe the causes and characteristics of post-traumatic stress disorder (PTSD).

Psychological reactions to stress can lead to stress disorders. For example, a person who lives through a terrifying and uncontrollable experience may develop post-traumatic stress disorder (PTSD), a disorder characterized by chronic physiological arousal, recurrent unwanted thoughts or images of the trauma, and avoidance of things that call the traumatic event to mind. For example, many soldiers returning from combat have PTSD symptoms, including flashbacks of battle, exaggerated anxiety and startle reactions, and even medical conditions that do not arise from physical damage (e.g., paralysis or chronic fatigue). Such symptoms are normal, appropriate responses to horrifying events; for most people, the symptoms subside with time. In PTSD, the symptoms can last much longer. For example, the Centers for Disease Control (1988) found that even 20 years after the Vietnam War, 15% of veterans who had seen combat continued to report lingering symptoms. This long-term psychological response is now recognized not only among the victims, witnesses, and perpetrators of war but also among ordinary people who are traumatized by terrible events. At some time over the course of their lives, about 8% of Americans are estimated to suffer from PTSD (Kessler et al., 1995).

Describe the two factors in Schachter and Singer's two-factor theory of emotion, and note how aspects of the theory have been both supported and contradicted by subsequent research.

Schachter and Singer's two-factor theory of emotion claimed that emotions are inferences about the causes of physiological arousal (see Figure 8.3). That is, an emotion arises from two factors: a state of bodily arousal and a cognitive interpretation of what caused that arousal. So when you see a bear in your kitchen, your heart begins to pound. Your brain quickly scans the environment, looking for a reasonable explanation for all that pounding, and notices, of all things, a bear. Having noticed both a bear and a pounding heart, your brain then does what brains do so well: It puts two and two together, makes a logical inference, and interprets your arousal as fear. In other words, when you are physiologically aroused in the presence of something you think should scare you, you label that arousal as fear. But if you have precisely the same bodily response in the presence of something you think should delight you, then you label that arousal as excitement.

Explain the hedonic motive of social influence.

Social influence is the ability to control another person's behavior (Cialdini & Trost, 1998). But how does it work? If you want people to give you their time, money, allegiance, or affection, you'd be wise to consider first what it is they want. People have three basic motivations that make them susceptible to social influence (Bargh, Gollwitzer, & Oettingen, 2010). People are motivated to experience pleasure and to avoid experiencing pain (the hedonic motive), they are motivated to be accepted and to avoid being rejected (the approval motive), and they are motivated to believe what is right and to avoid believing what is wrong (the accuracy motive). As you will see, most social influence attempts appeal to one or more of these motives. If there is an animal that prefers pain to pleasure, it must be very good at hiding because scientists have never seen it. Pleasure-seeking is the most basic of all motives, and social influence often involves creating situations in which others can achieve more pleasure by doing what we want them to do than by doing something else. Parents, teachers, governments, and businesses often try to influence our behavior by offering rewards and threatening punishments (see Figure 12.8). There's nothing mysterious about how these influence attempts work, and they are often quite effective. When the Republic of Singapore warned its citizens that anyone caught chewing gum in public would face a year in prison and a $5,500 fine, the rest of the world seemed either outraged or amused. When all the criticism and chuckling subsided, though, it was hard to ignore the fact that the incidence of felonious gum-chewing in Singapore had fallen to an all-time low. You'll recall from Chapter 6 that even a sea slug will repeat behaviors that are followed by rewards and avoid behaviors that are followed by punishments. Although the same is generally true of human beings, there are some instances in which rewards and punishments can backfire. For example, children in one study were allowed to play with colored markers and then some were given a "Good Player Award." When the children were given markers the next day, those who had received an award were less likely to play with them than were those who had not received an award (Lepper, Greene, & Nisbett, 1973). Why? Because children who had received an award the first day came to think of drawing as something one did to receive rewards, and if no one was going to give them an award, then why should they do it (Deci, Koestner, & Ryan, 1999)? Similarly, reward and punishment can backfire simply because people don't like to feel manipulated. Researchers placed signs in two restrooms on a college campus—one reading "Please don't write on these walls" and another reading "Do not write on these walls under any circumstances." Two weeks later, the walls in the second restroom had more graffiti than the walls in the first restroom did, presumably because students didn't appreciate the threatening tone of the second sign and wrote on the walls just to prove to themselves that they could (Pennebaker & Sanders, 1976).

Provide evidence for the interactive role of genes and the environment in the development of personality traits; discuss how an evolutionary perspective might account for gender differences in some personality traits.

Some of the most compelling evidence for the importance of biological factors in personality comes from the domain of behavioral genetics. Simply put, the more genes you have in common with someone, the more similar your personalities are likely to be. In one review of studies involving more than 24,000 twin pairs, for example, identical twins, who share the same genes, proved markedly more similar to each other in personality than did fraternal twins, who share on average only half their genes (Loehlin, 1992). And although environment and life experiences also help shape personality, identical twins reared apart in adoptive families end up at least as similar in personality as those who grew up together (McGue & Bouchard, 1998; Tellegen et al., 1988). It's unlikely that a specific gene controls neuroticism or extraversion or any other personality factor. Rather, many genes acting in combination may produce a specific physiological characteristic, such as a tendency toward extraversion. This biological factor may then shape the person's behavior, leading him to be more likely to chat with strangers at a party than someone whose genes produce a tendency toward introversion. From an evolutionary perspective, differences in personality reflect alternative adaptations that species—human and nonhuman—have evolved to deal with the challenges of survival and reproduction. For example, if you were to hang around a bar for an evening or two, you would soon see that humans have evolved more than one way to attract and keep a mate. People who are extraverted would probably show off to attract attention, whereas you'd be likely to see people high in agreeableness displaying affection and nurturance (Buss, 1996). Both approaches might work well to attract mates and reproduce successfully—depending on the environment. Through this process of natural selection, those characteristics that have proved successful in our evolutionary struggle for survival have been passed on to future generations (see the Real World Box).

Describe four ways in which stereotypes, while useful, sometimes produce harmful consequences

Stereotypes can be inaccurate: The inferences we draw about individuals are only as accurate as our stereotypes about the categories to which they belong. Although there was no evidence to indicate that Jews were especially materialistic or that African Americans were especially lazy, American college students held such beliefs for most of the last century (Gilbert, 1951; Karlins, Coffman, & Walters, 1969; Katz & Braly, 1933). They weren't born holding these beliefs, so how did they acquire them? There are only two ways to acquire a belief about anything: to see for yourself or to take somebody else's word for it. In fact, most of what we know about the members of human categories is hearsay—stuff we picked up from friends and uncles, from novels and newspapers, from jokes and movies and late-night television. Many of the people who believe stereotypes about Jews or African Americans have never actually met someone who is Jewish or African American, and their beliefs are a result of listening too closely to what others told them. In the process of inheriting the wisdom of our culture, it is inevitable that we will also inherit its ignorance.

Stereotypes can be overused: Because all thumbtacks are pretty much alike, our beliefs about thumbtacks ("small, cheap, painful when chewed") are quite useful, and we will rarely be mistaken if we generalize from one thumbtack to another. Human categories, however, are so variable that our stereotypes may offer only the vaguest of clues about the individuals who populate those categories. You probably believe that men have greater upper body strength than women do, and this belief is right on average. But the upper body strength of individuals within each of these categories is so varied that you cannot easily predict how much weight a particular person can lift simply by knowing that person's gender. The inherent variability of human categories makes stereotypes much less useful than they might otherwise be. Alas, we don't always recognize this because the mere act of categorizing a stimulus tends to warp our perceptions of that category's variability. For instance, we all identify colors as members of categories such as blue or green, and this leads us to overestimate the similarity of colors that share a category label and to underestimate the similarity of colors that do not. That's why we see discrete bands of color when we look at rainbows, which are actually a smooth continuum of colors (see Figure 12.13 and the Hot Science box). That's also why we tend to underestimate the distance between cities that are in the same country, such as Memphis and Pierre, and overestimate the distance between cities that are in different countries, such as Memphis and Toronto (Burris & Branscombe, 2005). What's true of colors and distances is true of people as well. The mere act of categorizing people as Blacks or Whites, Jews or Gentiles, artists or accountants, can cause us to underestimate the variability within those categories ("All artists are wacky") and to overestimate the variability between them ("Artists are much wackier than accountants"). When we underestimate the variability of a human category, we overestimate how useful our stereotypes can be.

Stereotypes can be self-perpetuating: When we meet a man who likes ballet more than football or a senior citizen who likes hip-hop more than easy-listening, why don't we recognize that our stereotypes are inaccurate? Stereotypes are a bit like viruses: Once they take up residence inside us, they perpetuate themselves and resist even our most concerted efforts to eradicate them. Stereotypes are self-perpetuating for three reasons. First, we have a tendency to see what we expect to see. In one study, participants listening to a radio broadcast of a basketball game were asked to evaluate the performance of one of the players. Those participants who had previously been led to believe that the player was African American thought he had exhibited more athletic ability than those who had been led to believe that he was White (Stone, Perry, & Darley, 1997). Participants' stereotypes led them to expect different performances from athletes of different racial origins—and they perceived just what they expected. A second reason why stereotypes are self-perpetuating is that we can cause what we expect to see. In one study (Steele & Aronson, 1995), African American and White students were given a test, and half the students in each group were asked to list their race at the top of the exam. Students who were not asked to list their race performed well; but when students were asked to list their races, African American students became anxious and performed poorly (Figure 12.14). Stereotypes perpetuate themselves in part by causing the stereotyped individual to behave in ways that confirm the stereotype. Finally, stereotypes can be self-perpetuating because, when faced with evidence to the contrary, we tend to modify our stereotypes rather than abandon them (Weber & Crocker, 1983). For example, most people believe that public relations agents are sociable; in one study, though, when participants learned about a PR agent who was extremely unsociable, they tended to consider him an "exception to the rule" and thereby preserve their stereotypes about PR agents in general (Kunda & Oleson, 1997).

Stereotypes can be automatic: If stereotypes are inaccurate and self-perpetuating, then why don't we just stop using them? The answer is that stereotyping happens unconsciously (which means that we don't always know we are doing it) and automatically (which means that we often cannot avoid doing it even when we try) (Banaji & Heiphetz, 2010; Greenwald, McGhee, & Schwartz, 1998; Greenwald & Nosek, 2001). For example, in one study, photos of Black or White men holding guns or cameras were flashed on a computer screen for less than 1 second each. Participants earned money by pressing a button labeled "shoot" whenever the man on the screen was holding a gun but lost money if they shot a man holding a camera. The participants made some mistakes, of course, but the kinds of mistakes they made were quite disturbing: Participants were more likely to shoot a man holding a camera when he was Black and less likely to shoot a man holding a gun when he was White (Correll et al., 2002). Although the photos appeared on the screen so quickly that participants did not have enough time to consciously consult their stereotypes, those stereotypes worked unconsciously, causing them to mistake a camera for a gun when it was in the hands of a Black man and a gun for a camera when it was in the hands of a White man. Interestingly, Black participants were just as likely to make this pattern of errors as White participants. Although stereotyping is unconscious and automatic, it is not inevitable (Blair, 2002; Kawakami et al., 2000; Milne & Grafman, 2001; Rudman, Ashmore, & Gary, 2001).
For instance, police officers who receive special training before participating in the "camera or gun" experiment described earlier do not show the same biases that ordinary people do (Correll et al., 2007). Like ordinary people, they take a few milliseconds longer to decide not to shoot a Black man than a White man, indicating that stereotypes influenced their thinking. But unlike ordinary people, they don't actually shoot Black men more often than White men, indicating that they have learned how to keep their stereotypes from influencing their behavior.

Compare intrinsic and extrinsic motives, and note how rewards and punishers can modulate the effectiveness of these motivations.

Taking a psychology exam is not like eating a French fry. One makes you tired and the other makes you fat, one requires that you move your lips and the other requires that you don't, and so on. But the key difference between these activities is that one is a means to an end and one is an end in itself. An intrinsic motivation is a motivation to take actions that are themselves rewarding. When we eat a French fry because it tastes good, scratch an itch because it feels good, or listen to music because it sounds good, we are intrinsically motivated. These activities don't have a payoff because they are a payoff. Conversely, an extrinsic motivation is a motivation to take actions that lead to reward. When we floss our teeth so we can avoid gum disease (and get dates), when we work hard for money so we can pay our rent (and get dates), and when we take an exam so we can get a college degree (and get money to get dates), we are extrinsically motivated. None of these things directly brings pleasure, but all may lead to pleasure in the long run. Extrinsic motivation gets a bad rap. Americans tend to believe that people should "follow their hearts" and "do what they love," and we feel sorry for students who choose courses just to please their parents and for parents who choose jobs just to earn a pile of money. But the fact is that our ability to engage in behaviors that are unrewarding in the present because we believe they will bring greater rewards in the future is one of our species' most significant talents, and no other species can do it quite as well as we can (Gilbert, 2006). In research on the ability to delay gratification (Ayduk et al., 2007; Mischel et al., 2004), people are typically faced with a choice between getting something they want right now (e.g., a scoop of ice cream) or waiting and getting more of what they want later (e.g., two scoops of ice cream). Studies show that 4-year-old children who can delay gratification are judged to be more intelligent and socially competent 10 years later and that they have higher SAT scores when they enter college (Mischel, Shoda, & Rodriguez, 1989). In fact, the ability to delay gratification is a better predictor of a child's grades in school than is the child's IQ (Duckworth & Seligman, 2005). Apparently there is something to be said for extrinsic motivation. There is a lot to be said for intrinsic motivation too. People work harder when they are intrinsically motivated, they enjoy what they do more, and they do it more creatively. Both kinds of motivation have advantages, which is why many of us try to build lives in which we are both intrinsically and extrinsically motivated by the same activity—lives in which we are paid the big bucks for doing exactly what we like to do best. Who hasn't fantasized about becoming an artist or an athlete or Lady Gaga's personal party planner? Alas, research suggests that it is difficult to get paid for doing what you love and still end up loving what you do because extrinsic rewards can undermine intrinsic interest (Deci, Koestner, & Ryan, 1999; Henderlong & Lepper, 2002). For example, in one study, college students who were intrinsically interested in a puzzle either were paid to complete it or completed it for free, and those who were paid were less likely to play with the puzzle later on (Deci, 1971). 
It appears that under some circumstances people take rewards to indicate that an activity isn't inherently pleasurable ("If they had to pay me to do that puzzle, it couldn't have been a very fun one"); thus rewards can cause people to lose their intrinsic motivation. Just as rewards can undermine intrinsic motivation, punishments can create it. In one study, children who had no intrinsic interest in playing with a toy suddenly gained an interest when the experimenter threatened to punish them if they touched it (Aronson, 1963). And when a group of day-care centers got fed up with parents who arrived late to pick up their children, some of them instituted a financial penalty for tardiness. As Figure 8.12 shows, the financial penalty caused an increase in late arrivals (Gneezy & Rustichini, 2000). Why? Because parents are intrinsically motivated to fetch their kids, and they generally do their best to be on time. But when the day-care centers imposed a fine for late arrival, the parents became extrinsically motivated to fetch their children—and because the fine wasn't particularly large, they decided to pay a small financial penalty in order to leave their children in day care for an extra hour. When threats and rewards change intrinsic motivation into extrinsic motivation, unexpected consequences can follow.

Compare approach and avoidance motives and their relative strengths; provide an example of how each type of motivation can direct our behavior.

The author James Thurber (1956) wrote "All men should strive to learn before they die/what they are running from, and to, and why." The hedonic principle describes two conceptually distinct motivations: a motivation to "run to" pleasure and a motivation to "run from" pain. These motivations are what psychologists call an approach motivation, which is a motivation to experience a positive outcome, and an avoidance motivation, which is a motivation not to experience a negative outcome. Pleasure is not just the lack of pain, and pain is not just the lack of pleasure. They are independent experiences that occur in different parts of the brain (Davidson et al., 1990; Gray, 1990). Research suggests that, all else being equal, avoidance motivations tend to be more powerful than approach motivations. Most people will turn down a chance to bet on a coin flip that would pay them $10 if it came up heads but would require them to pay $8 if it came up tails because they believe that the pain of losing $8 will be more intense than the pleasure of winning $10 (Kahneman & Tversky, 1979). On average, avoidance motivation is stronger than approach motivation, but the relative strength of these two tendencies does differ somewhat from person to person. Table 8.3 shows a series of questions that have been used to measure the relative strength of a person's approach and avoidance tendencies (Carver & White, 1994). Research shows that people who are described by the high-approach items are happier when rewarded than those who are not and that those who are described by the high-avoidance items are more anxious when threatened than those who are not (Carver, 2006). Just as some people seem to be more responsive to rewards than to punishments (and vice versa), some people tend to think about their behavior as attempts to get reward rather than to avoid punishment (and vice versa).
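As a rough illustration of the arithmetic behind this finding: the bet has a positive expected value of 0.5 × $10 + 0.5 × (−$8) = +$1 per flip, so a person who weighted gains and losses equally should accept it. That most people decline suggests the anticipated pain of losing $8 looms larger than the anticipated pleasure of winning $10, exactly the asymmetry that makes avoidance motivation the stronger of the two.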

Describe the HPA axis and its functioning in stress and the fight-or-flight response.

The fight-or-flight response is an emotional and physiological reaction to an emergency that increases readiness for action. The mind asks, "Should I stay and battle this somehow, or should I run like mad?" And the body prepares to react. If you're a cat at this time, your hair stands on end. If you're a human, your hair stands on end, too, but not as visibly. Brain activation in response to threat occurs in the hypothalamus, which initiates a cascade of bodily responses that include stimulation of the pituitary gland, which in turn causes stimulation of the adrenal glands. This pathway, shown in Figure 15.1, is sometimes called the HPA axis (for hypothalamus, pituitary, adrenal). The adrenal glands release hormones, including the catecholamines (epinephrine and norepinephrine), which increase sympathetic nervous system activation (and therefore increase heart rate, blood pressure, and respiration rate) and decrease parasympathetic activation (see Chapter 3). The increased respiration and blood pressure make more oxygen available to the muscles to energize attack or to initiate escape. The adrenal glands also release cortisol, a hormone that increases the concentration of glucose in the blood to make fuel available to the muscles. Everything is prepared for a full-tilt response to the threat.

Explain how and why stress affects responses of the immune system

The immune system is a complex response system that protects the body from bacteria, viruses, and other foreign substances. It is remarkably responsive to psychological influences. Stressors can cause hormones known as glucocorticoids to flood the brain, wearing down the immune system and making it less able to fight invaders (Webster Marketon & Glaser, 2008). For example, in one study, medical student volunteers agreed to receive small wounds to the roof of the mouth. Researchers observed that these wounds healed more slowly during exam periods than during summer vacation (Marucha, Kiecolt-Glaser, & Favagehi, 1998).

Discuss how primary appraisal and secondary appraisal operate in the interpretation of stress.

The interpretation of a stimulus as stressful or not is called primary appraisal (Lazarus & Folkman, 1984). Primary appraisal allows you to realize that a small dark spot on your shirt is a stressor ("Spider!") or that a 70-mile-per-hour drop from a great height in a small car full of screaming people may not be ("Roller coaster!"). The next step in interpretation is secondary appraisal—determining whether the stressor is something you can handle—that is, whether you have control over the event (Lazarus & Folkman, 1984). Interestingly, the body responds differently depending on whether the stressor is perceived as a threat (a stressor you believe you might not be able to overcome) or a challenge (a stressor you feel fairly confident you can control) (Blascovich & Tomaka, 1996). The same midterm exam could be a challenge if you are well prepared and a threat if you have neglected to study. Although both threats and challenges raise heart rate, threats also cause constriction of the blood vessels, which can lead to high blood pressure.

Explain the attribution process, and distinguish between situational attributions and dispositional attributions

To understand people, we need to know not only what they did but also why they did it. Is the batter who hit the home run a talented slugger, or was the wind blowing in just the right direction? Is the politician who gave the pro-life speech really opposed to abortion, or was she just trying to win the conservative vote? When we answer questions such as these, we are making attributions, which are inferences about the causes of people's behaviors (Epley & Waytz, 2010; Gilbert, 1998). We make situational attributions when we decide that a person's behavior was caused by some temporary aspect of the situation in which it happened ("He was lucky that the wind carried the ball into the stands"), and we make dispositional attributions when we decide that a person's behavior was caused by his or her relatively enduring tendency to think, feel, or act in a particular way ("He's got a great eye and a powerful swing").

List the Big Five personality dimensions, provide examples of each, and discuss some surface indicators of personality

Today many personality researchers agree that personality is best captured by 5 factors rather than by 2, 3, 16, or 18,000 (John & Srivastava, 1999; McCrae & Costa, 1999). The Big Five, as they are affectionately called, are the traits of the five-factor model of personality: conscientiousness, agreeableness, neuroticism, openness to experience, and extraversion (see Table 11.1). (Remember them by the initials CANOE.)

Define the process of emotion regulation, and explain how reappraisal is a primary means of regulating our emotional states.

We may not care whether we have cereal or eggs for breakfast, whether we play cricket or cards this afternoon, or whether we spend a few minutes thinking about hedgehogs, earwax, or the War of 1812. But we always care whether we are feeling happy or fearful, angry or relaxed, joyful or disgusted. Because we care so much about our emotional experiences, we work hard to have some and avoid others. Emotion regulation refers to the cognitive and behavioral strategies people use to influence their own emotional experience. A primary means of doing so is reappraisal, which involves changing one's emotional experience by changing the meaning of the emotion-eliciting stimulus (Ochsner et al., 2009). For example, in one study, participants' brains were scanned as they saw photos that induced negative emotions, such as a photo of a woman crying during a funeral. Some participants were then asked to reappraise the picture, for example, by imagining that the woman in the photo was at a wedding rather than a funeral. The results showed that when participants initially saw the photo, their amygdalae became active. But as they reappraised the picture, several key areas of the cortex became active, and moments later, their amygdalae were deactivated (Ochsner et al., 2002). In other words, participants consciously and willfully turned down the activity of their own amygdalae simply by thinking about the photo in a different way.

Name the three phases of the general adaptation syndrome.

What might have happened to Three Mile Island's neighbors if the sirens had wailed again and again for days or weeks at a time? Canadian physician Hans Selye subjected rats to heat, cold, infection, trauma, hemorrhage, and other prolonged stressors; he made few friends among the rats but found physiological responses that included an enlarged adrenal cortex, shrinking of the lymph glands, and ulceration of the stomach. Noting that many different kinds of stressors caused similar patterns of physiological change, he called the reaction the general adaptation syndrome (GAS), a three-stage physiological stress response that appears regardless of the stressor that is encountered. The first phase of the GAS is the alarm phase, in which the body displays the fight-or-flight response, rapidly mobilizing its resources to respond to the threat (see Figure 15.2). Energy is required, so the body calls on its stored fat and muscle. Next, in the resistance phase, the body adapts to its high state of arousal as it tries to cope with the stressor. It shuts down unnecessary processes, such as digestion, growth, and sex drive. If the resistance phase goes on long enough, the third phase, exhaustion, sets in, which can include susceptibility to infection, tumor growth, aging, irreversible organ damage, or death.

Illustrate the approval motive of social influence by describing normative influence and noting how the norm of reciprocity is involved in the door-in-the-face technique.

When getting on an elevator, you are supposed to face forward, and you shouldn't talk to the person next to you unless you are the only two people on the elevator. Although no one ever taught you this rule, you probably picked it up somewhere along the way. The unwritten rules that govern social behavior are called norms, which are customary standards for behavior that are widely shared by members of a culture (Miller & Prentice, 1996). Normative influence occurs when another person's behavior provides information about what is appropriate (see Figure 12.9). For example, every human culture has a norm of reciprocity, which is the unwritten rule that people should benefit those who have benefited them (Gouldner, 1960). When a friend buys you lunch, you return the favor; if you don't, your friend gets miffed. Indeed, the norm of reciprocity is so strong that waiters and waitresses get bigger tips when they give customers a piece of candy along with the bill because customers feel obligated to do "a little extra" for those who have done "a little extra" for them (Strohmetz et al., 2002). The norm of reciprocity always involves swapping, but the swapping doesn't always involve favors. The door-in-the-face technique is a strategy that uses reciprocating concessions to influence behavior. Here's how it works: You ask someone for something more valuable than you really want, you wait for that person to refuse (to "slam the door in your face"), and then you ask the person for what you really want. In one study, researchers asked college students to volunteer to supervise adolescents who were going on a field trip, and only 17% of the students agreed. But when the researchers first asked students to commit to spending 2 hours per week for 2 years working at a youth detention center (to which every one of the students said "no") and then asked them if they'd be willing to supervise the field trip, 50% of the students agreed (Cialdini et al., 1975). Why? The norm of reciprocity. The researchers began by asking for a large favor, which the student refused. Then the researchers made a concession by asking for a smaller favor. Because the researchers made a concession, the norm of reciprocity demanded that the student make one too—and half of them did!

Compare conscious and unconscious motives, including the need for achievement, and discuss how task difficulty is related to consciousness of our motivations.

When prizewinning artists or scientists are asked to explain their achievements, they typically say things like, "I wanted to liberate color from form" or "I wanted to cure diabetes." They almost never say, "I wanted to exceed my father's accomplishments, thereby proving to my mother that I was worthy of her love." People clearly have conscious motivations, which are motivations of which people are aware, but they also have unconscious motivations, which are motivations of which people are not aware (Aarts, Custers, & Marien, 2008; Bargh et al., 2001; Hassin, Bargh, & Zimerman, 2009). Psychologists David McClelland and John Atkinson argued that people vary in their need for achievement, which is the motivation to solve worthwhile problems (McClelland et al., 1953). They argued that this basic motivation is unconscious. For example, when words such as achievement are presented on a computer screen so rapidly that people cannot consciously perceive them, those people will work especially hard to solve a puzzle (Bargh et al., 2001) and will feel especially unhappy if they fail (Chartrand & Kay, 2006). What determines whether we are conscious of our motivations? Most actions have more than one motivation, and Robin Vallacher and Daniel Wegner have suggested that the ease or difficulty of performing the action determines which of these motivations we will be aware of (Vallacher & Wegner, 1985, 1987). When actions are easy (e.g., screwing in a lightbulb), we are aware of our most general motivations (e.g., to be helpful), but when actions are difficult (e.g., wrestling with a lightbulb that is stuck in its socket), we are aware of our more specific motivations (e.g., to get the threads aligned). For example, participants in an experiment drank coffee either from a normal mug or from a mug that had a heavy weight attached to the bottom, which made it difficult to manipulate. When asked what they were doing, those who were drinking from the normal mug explained that they were "satisfying needs," whereas those who were drinking from the weighted mug explained that they were "swallowing" (Wegner et al., 1984).

Compare systematic persuasion and heuristic persuasion, and give an example of each.

When the next presidential election rolls around, two things will happen. First, the candidates will say that they intend to win your vote by making arguments that focus on the issues. Second, the candidates will then avoid arguments, ignore issues, and attempt to win your vote with a variety of cheap tricks. What the candidates promise to do and what they actually do reflect two basic forms of persuasion, which occurs when a person's attitudes or beliefs are influenced by a communication from another person (Albarracín & Vargas, 2010; Petty & Wegener, 1998). The candidates will promise to engage in systematic persuasion, which refers to the process by which attitudes or beliefs are changed by appeals to reason, but they will spend most of their time and money engaged in heuristic persuasion, which refers to the process by which attitudes or beliefs are changed by appeals to habit or emotion (Chaiken, 1980; Petty & Cacioppo, 1986). (Heuristics are simple shortcuts or "rules of thumb.") Which form of persuasion will be more effective depends on whether the person is willing and able to weigh evidence and analyze arguments. For example, in one study, university students heard a speech that contained either strong or weak arguments in favor of instituting comprehensive exams at their school (Petty, Cacioppo, & Goldman, 1981). Some students were told that the speaker was a Princeton University professor, and others were told that the speaker was a high school student—a bit of information that could be used as a shortcut to decide whether to believe the speech. Some students were told that their university was considering implementing these exams right away, motivating them to analyze the evidence; other students were told that their university was considering implementing these exams in 10 years, which gave them less motivation to analyze the evidence (because they'd presumably be long gone by the time the exams were given). When students were motivated to analyze the evidence, they were systematically persuaded—that is, their attitudes and beliefs were influenced by the strength of the arguments but not by the status of the speaker. But when students were not motivated to analyze the evidence, they were heuristically persuaded—that is, their attitudes and beliefs were influenced by the status of the speaker but not by the strength of the arguments.

Illustrate the accuracy motive of social influence by describing the role informational influence plays in shaping our attitudes and beliefs.

When you are hungry, you open the refrigerator and grab an apple because you know that apples (1) taste good and (2) are in the refrigerator. This action, like most actions, relies on both an attitude, which is an enduring positive or negative evaluation of an object or event, and a belief, which is an enduring piece of knowledge about an object or event. In a sense, our attitudes tell us what we should do ("Eat an apple") and our beliefs tell us how to do it ("Start by opening the fridge"). If our attitudes or beliefs are inaccurate—that is, if we can't tell good from bad or right from wrong—then our actions are likely to be fruitless. Because we rely so much on our attitudes and beliefs, it isn't surprising that we are motivated to have the right ones. And that motivation leaves us vulnerable to social influence. If everyone in the shopping mall suddenly ran screaming for the exit, you'd probably join them—not because you were afraid that they would otherwise disapprove of you, but because their behavior would suggest to you that there was something worth running from. Informational influence occurs when another person's behavior provides information about what is good or right. You can observe the power of informational influence yourself just by standing in the middle of the sidewalk, tilting back your head, and staring at the top of a tall building. Research shows that within just a few minutes, other people will stop and stare too (Milgram, Bickman, & Berkowitz, 1969). Why? They will assume that if you are looking, then there must be something worth looking at.

Contrast the Cannon-Bard theory of emotion with the James-Lange theory of emotion.

You probably think that if you walked into your kitchen right now and saw a bear nosing through the cupboards, you would feel fear, your heart would start to pound, and the muscles in your legs would prepare you for running. Presumably away. But in the late 19th century, William James suggested that the events that produce an emotion might actually happen in the opposite order: First you see the bear, then your heart starts pounding and your leg muscles contract, and then you experience fear, which is nothing more or less than your experience of your physiological response. Psychologist Carl Lange suggested something similar at about the same time; thus this idea is now known as the James-Lange theory of emotion, which asserts that stimuli trigger activity in the autonomic nervous system, which in turn produces an emotional experience in the brain. According to this theory, emotional experience is the consequence—not the cause—of our physiological reactions to objects and events in the world. But James's former student, Walter Cannon, disagreed, and together with his student, Philip Bard, Cannon proposed an alternative to James's theory. The Cannon-Bard theory of emotion suggested that a stimulus simultaneously triggers activity in the autonomic nervous system and emotional experience in the brain (Bard, 1934; Cannon, 1927). Cannon favored his own theory over the James-Lange theory for several reasons. First, the autonomic nervous system reacts too slowly to account for the rapid onset of emotional experience. For example, a blush is an autonomic response to embarrassment that takes 15 to 30 seconds to occur, and yet one can feel embarrassed long before that, so how could the blush be the cause of the feeling? Second, people often have difficulty accurately detecting changes in their own autonomic activity, such as their heart rates. If people cannot detect increases in their heart rates, then how can they experience those increases as an emotion? Third, nonemotional stimuli—such as temperature—can cause the same pattern of autonomic activity that emotional stimuli do, so why don't people feel afraid when they get a fever? Finally, Cannon argued that there simply weren't enough unique patterns of autonomic activity to account for all the unique emotional experiences people have. If many different emotional experiences are associated with the same pattern of autonomic activity, then how could that pattern of activity be the sole determinant of the emotional experience?

Explain how emotions can be mapped along the two dimensions of valence and arousal.

Maps don't just show how close things are to one another: They also reveal the dimensions on which those things vary. For example, the map in Figure 8.2 reveals that emotional experiences differ on two dimensions called valence (how positive or negative the experience is) and arousal (how active or passive the experience is). Excitement, for instance, is a positive, high-arousal experience; contentment is positive but low in arousal; fear is negative and highly arousing; and sadness is negative and relatively passive.

