Module 3


Correlation

1-5 What does it mean when we say two things are correlated, and what are positive and negative correlations? Describing behavior is a first step toward predicting it. Naturalistic observations and surveys often show us that one trait or behavior is related to another. In such cases, we say the two correlate. A statistical measure (the correlation coefficient) helps us figure out how closely two things vary together, and thus how well either one predicts the other. Knowing how much aptitude test scores correlate with school success tells us how well the scores predict school success. Throughout this book, we will often ask how strongly two things are related: For example, how closely related are the personality scores of identical twins? How well do intelligence test scores predict career achievement? How closely is stress related to disease? In such cases, scatterplots can be very revealing. Each dot in a scatterplot represents the values of two variables. The three scatterplots in FIGURE 1.3 illustrate the range of possible correlations from a perfect positive to a perfect negative. (Perfect correlations rarely occur in the real world.) A correlation is positive if two sets of scores, such as height and weight, tend to rise or fall together. A correlation is negative if two sets of scores relate inversely, one set going up as the other goes down. Saying that a correlation is "negative" says nothing about its strength; it describes only the direction of the relationship. The study of University of Nevada students discussed earlier found their reports of inner speech correlated negatively (-.36) with their reported psychological distress. Those who reported more inner speech tended to report somewhat less psychological distress. Statistics can help us see what the naked eye sometimes misses. To demonstrate this for yourself, try an imaginary project. You wonder if tall men are more or less easygoing, so you collect two sets of scores: men's heights and men's temperaments.
You measure the heights of 20 men, and you have someone else independently assess their temperaments from 0 (extremely calm) to 100 (highly reactive). With all the relevant data right in front of you (TABLE 1.2), can you tell whether the correlation between height and reactive temperament is positive, negative, or close to zero? Comparing the columns in Table 1.2, most people detect very little relationship between height and temperament. In fact, the correlation in this imaginary example is positive, +.63, as we can see if we display the data as a scatterplot (FIGURE 1.4). If we fail to see a relationship when data are presented as systematically as in Table 1.2, how much less likely are we to notice one in everyday life? To see what is right in front of us, we sometimes need statistical illumination. We can easily see evidence of gender discrimination when given statistically summarized information about job level, seniority, performance, gender, and salary. But we often see no discrimination when the same information dribbles in, case by case (Twiss et al., 1989). The point to remember: A correlation coefficient helps us see the world more clearly by revealing the extent to which two things relate.
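For readers curious how a correlation coefficient is actually computed, here is a minimal Python sketch. The heights and temperament ratings below are invented stand-ins, not the actual values from Table 1.2.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: +1 = perfect positive, -1 = perfect negative."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: heights in inches, and temperament ratings
# from 0 (extremely calm) to 100 (highly reactive).
heights = [62, 64, 65, 66, 68, 70, 71, 72, 74, 75]
temperaments = [40, 55, 35, 60, 50, 65, 45, 70, 80, 60]

r = pearson_r(heights, temperaments)  # positive here: taller men rated more reactive
```

As in the imaginary project, eyeballing the two lists suggests little relationship, yet the computed coefficient is clearly positive.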

Naturalistic Observation

A second descriptive method records behavior in natural environments. These naturalistic observations range from watching chimpanzee societies in the jungle, to videotaping and analyzing parent-child interactions in different cultures, to recording racial differences in students' self-seating patterns in a school lunchroom. Naturalistic observation has mostly been "small science"—science that can be done with pen and paper rather than fancy equipment and a big budget (Provine, 2012). But new technologies are enabling "big data" observations. New smart-phone apps and body-worn sensors are expanding naturalistic observation. Using such tools, researchers can track willing volunteers—their location, activities, and opinions—without interference. The billions of people on Facebook, Twitter, and Google, for example, have created a huge new opportunity for big-data naturalistic observation. One research team analyzed all 30.5 billion international Facebook friendships formed over four years, and found that people tended to "friend up." Those from countries with lower economic status were more likely to solicit friendship with those in higher-status countries than vice versa (Landis et al., 2014). Another research team studied the ups and downs of human moods by counting positive and negative words in 504 million Twitter messages from 84 countries (Golder & Macy, 2011). As FIGURE 1.2 shows, people seem happier on weekends, shortly after arising, and in the evenings. (Are late Saturday evenings often a happy time for you, too?) Like the case study, naturalistic observation does not explain behavior. It describes it. Nevertheless, descriptions can be revealing. We once thought, for example, that only humans use tools. Then naturalistic observation revealed that chimpanzees sometimes insert a stick in a termite mound and withdraw it, eating the stick's load of termites. 
Such unobtrusive naturalistic observations paved the way for later studies of animal thinking, language, and emotion, which further expanded our understanding of our fellow animals. "Observations, made in the natural habitat, helped to show that the societies and behavior of animals are far more complex than previously supposed," chimpanzee observer Jane Goodall noted (1998). Thanks to researchers' observations, we know that chimpanzees and baboons use deception: Psychologists repeatedly saw one young baboon pretending to have been attacked by another as a tactic to get its mother to drive the other baboon away from its food (Whiten & Byrne, 1988). Naturalistic observations also illuminate human behavior. Here are four findings you might enjoy:

• A funny finding. We humans laugh 30 times more often in social situations than in solitary situations. (Have you noticed how seldom you laugh when alone?) As we laugh, 17 muscles contort our mouth and squeeze our eyes, and we emit a series of 75-millisecond vowel-like sounds, spaced about one-fifth of a second apart (Provine, 2001).
• Sounding out students. What, really, are introductory psychology students saying and doing during their everyday lives? To find out, Matthias Mehl and James Pennebaker (2003) equipped 52 such students from the University of Texas with electronic recorders. For up to four days, the recorders captured 30 seconds of the students' waking hours every 12.5 minutes, thus enabling the researchers to eavesdrop on more than 10,000 half-minute life slices by the end of the study. On what percentage of the slices do you suppose they found the students talking with someone? What percentage captured the students at a computer? The answers: 28 and 9 percent. (What percentage of your waking hours are spent in these activities?)
• What's on your mind? To find out what was on the minds of their University of Nevada, Las Vegas, students, Christopher Heavey and Russell Hurlburt (2008) gave them beepers.
On a half-dozen occasions, a beep interrupted students' daily activities, signaling them to pull out a notebook and record their inner experience at that moment. When the researchers later coded the reports in categories, they found five common forms of inner experience (TABLE 1.1).
• Culture, climate, and the pace of life. Naturalistic observation also enabled Robert Levine and Ara Norenzayan (1999) to compare the pace of life in 31 countries. (Their operational definition of pace of life included walking speed, the speed with which postal clerks completed a simple request, and the accuracy of public clocks.) Their conclusion: Life is fastest paced in Japan and Western Europe, and slower paced in economically less-developed countries. People in colder climates also tend to live at a faster pace (and are more prone to die from heart disease).

Naturalistic observation offers interesting snapshots of everyday life, but it does so without controlling for all the factors that may influence behavior. It's one thing to observe the pace of life in various places, but another to understand what makes some people walk faster than others.

The Survey

A survey looks at many cases in less depth, asking people to report their behavior or opinions. Questions about everything from sexual practices to political opinions are put to the public. In recent surveys:

• Saturdays and Sundays have been the week's happiest days (confirming what the Twitter researchers found) (Stone et al., 2012).
• 1 in 5 people across 22 countries report believing that alien beings have come to Earth and now walk among us disguised as humans (Ipsos, 2010b).
• 68 percent of all humans—some 4.6 billion people—say that religion is important in their daily lives (from Gallup World Poll data analyzed by Diener et al., 2011).

But asking questions is tricky, and the answers often depend on how questions are worded and how respondents are chosen.

The Case Study

Among the oldest research methods, the case study examines one individual or group in depth in the hope of revealing things true of us all. Some examples: Much of our early knowledge about the brain came from case studies of individuals who suffered a particular impairment after damage to a certain brain region. Jean Piaget taught us about children's thinking after carefully observing and questioning only a few children. Studies of only a few chimpanzees have revealed their capacity for understanding and language. Intensive case studies are sometimes very revealing. They show us what can happen, and they often suggest directions for further study. But atypical individual cases may mislead us. Unrepresentative information can lead to mistaken judgments and false conclusions. Indeed, anytime a researcher mentions a finding (Smokers die younger: 95 percent of men over 85 are nonsmokers) someone is sure to offer a contradictory anecdote (Well, I have an uncle who smoked two packs a day and lived to be 89). Dramatic stories and personal experiences (even psychological case examples) command our attention and are easily remembered. Journalists understand that, and often begin their articles with personal stories. Stories move us. But stories can mislead. Which of the following do you find more memorable? (1) "In one study of 1300 dream reports concerning a kidnapped child, only 5 percent correctly envisioned the child as dead" (Murray & Wheeler, 1937). (2) "I know a man who dreamed his sister was in a car accident, and two days later she died in a head-on collision!" Numbers can be numbing, but the plural of anecdote is not evidence. As psychologist Gordon Allport (1954, p. 9) said, "Given a thimbleful of [dramatic] facts we rush to make generalizations as large as a tub." The point to remember: Individual cases can suggest fruitful ideas. What's true of all of us can be glimpsed in any one of us. 
But to discern the general truths that cover individual cases, we must answer questions with other research methods.

RETRIEVAL PRACTICE
• We cannot assume that case studies always reveal general principles that apply to all of us. Why not?
ANSWER: Case studies involve only one individual or group, so we can't know for sure whether the principles observed would apply to a larger population.

Predicting Real Behavior 1-9

Can laboratory experiments illuminate everyday life? When you see or hear about psychological research, do you ever wonder whether people's behavior in the lab will predict their behavior in real life? Does detecting the blink of a faint red light in a dark room say anything useful about flying a plane at night? After viewing a violent, sexually explicit film, does an aroused man's increased willingness to push buttons that he thinks will electrically shock a woman really say anything about whether violent pornography makes a man more likely to abuse a woman? Before you answer, consider: The experimenter intends the laboratory environment to be a simplified reality—one that simulates and controls important features of everyday life. Just as a wind tunnel lets airplane designers re-create airflow forces under controlled conditions, a laboratory experiment lets psychologists re-create psychological forces under controlled conditions. An experiment's purpose is not to re-create the exact behaviors of everyday life but to test theoretical principles (Mook, 1983). In aggression studies, deciding whether to push a button that delivers a shock may not be the same as slapping someone in the face, but the principle is the same. It is the resulting principles—not the specific findings—that help explain everyday behaviors. When psychologists apply laboratory research on aggression to actual violence, they are applying theoretical principles of aggressive behavior, principles they have refined through many experiments. Similarly, it is the principles of the visual system, developed from experiments in artificial settings (such as looking at red lights in the dark), that researchers apply to more complex behaviors such as night flying. And many investigations show that principles derived in the laboratory do typically generalize to the everyday world (Anderson et al., 1999). 
The point to remember: Psychological science focuses less on particular behaviors than on seeking general principles that help explain many behaviors.

Correlation and Causation 1-7 Why do correlations enable prediction but not cause-effect explanation?

Consider some recent newsworthy correlations:

• "Study finds that increased parental support for college results in lower grades" (Jaschik, 2013).
• "People with mental illness more likely to be smokers, study finds" (Belluck, 2013).
• "Teens who play mature-rated, risk-glorifying video games [tend] to become reckless drivers" (Bowen, 2012).

What shall we make of these correlations? Do they indicate that students would achieve more if their parents would support them less? That stopping smoking would improve mental health? That abstaining from video games would make reckless teen drivers more responsible? No, because such correlations do not come with built-in cause-effect arrows. But correlations do help us predict. An example: Parenthood is associated with happiness (Nelson et al., 2013, 2014). So, does having children make people happier? Not so fast, say researchers: Parents also are more likely to be married, and married people tend to be happier than the unmarried (Bhargava et al., 2014). Thus, the correlation between parenthood and happiness needn't mean that parenting increases happiness. Another example: Self-esteem correlates negatively with (and therefore predicts) depression. (The lower people's self-esteem, the more they are at risk for depression.) So, does low self-esteem cause depression? If, based on the correlational evidence, you assume that it does, you have much company. A nearly irresistible thinking error is assuming that an association, sometimes presented as a correlation coefficient, proves causation. But no matter how strong the relationship, it does not. As FIGURE 1.5 indicates, we'd get the same negative correlation between self-esteem and depression if depression caused people to be down on themselves, or if some third factor—such as heredity or brain chemistry—caused both low self-esteem and depression. This point is so important—so basic to thinking smarter with psychology—that it merits one more example.
A survey of over 12,000 adolescents found that the more teens feel loved by their parents, the less likely they are to behave in unhealthy ways—having early sex, smoking, abusing alcohol and drugs, exhibiting violence (Resnick et al., 1997). "Adults have a powerful effect on their children's behavior right through the high school years," gushed an Associated Press (AP) story reporting the finding. But again, correlations come with no built-in cause-effect arrow. The AP could as well have reported, "Well-behaved teens feel their parents' love and approval; out-of-bounds teens more often think their parents are disapproving jerks." The point to remember (turn the volume up here): Correlation does not prove causation. Correlation indicates the possibility of a cause-effect relationship but does not prove such. Remember this principle and you will be wiser as you read and hear news of scientific studies.

Wording Effects

Even subtle changes in the order or wording of questions can have major effects. People are much more approving of "aid to the needy" than of "welfare," of "affirmative action" than of "preferential treatment," of "not allowing" televised cigarette ads and pornography than of "censoring" them, and of "revenue enhancers" than of "taxes." In another survey, adults estimated a 55 percent chance "that I will live to be 85 years old or older," while comparable other adults estimated a 68 percent chance "that I will die at 85 years old or younger" (Payne et al., 2013). Because wording is such a delicate matter, critical thinkers will reflect on how the phrasing of a question might affect people's expressed opinions.

Random Sampling

In everyday thinking, we tend to generalize from samples we observe, especially vivid cases. Given (a) a statistical summary of a professor's student evaluations and (b) the vivid comments of a biased sample (two irate students), an administrator's impression of the professor may be influenced as much by the two unhappy students as by the many favorable evaluations in the statistical summary. The temptation to ignore the sampling bias and to generalize from a few vivid but unrepresentative cases is nearly irresistible. So how do you obtain a representative sample of, say, the students at your college or university? It's not always possible to survey the whole group you want to study and describe. How could you choose a group that would represent the total student population? Typically, you would seek a random sample, in which every person in the entire group has an equal chance of participating. You might number the names in the general student listing and then use a random number generator to pick your survey participants. (Sending each student a questionnaire wouldn't work because the conscientious people who returned it would not be a random sample.)
Large representative samples are better than small ones, but a small representative sample of 100 is better than an unrepresentative sample of 500. Political pollsters sample voters in national election surveys just this way. Using some 1500 randomly sampled people, drawn from all areas of a country, they can provide a remarkably accurate snapshot of the nation's opinions. Without random sampling, large samples—including call-in phone samples and TV or website polls—often merely give misleading results. The point to remember: Before accepting survey findings, think critically. Consider the sample. The best basis for generalizing is from a representative sample. You cannot compensate for an unrepresentative sample by simply adding more people.
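The number-the-names-and-pick-at-random procedure takes only a few lines of Python. The roster below is hypothetical; in practice it would be the registrar's full student listing.

```python
import random

# Hypothetical roster standing in for the general student listing.
roster = [f"student_{n:04d}" for n in range(1, 5001)]

rng = random.Random(42)             # seeded only so the sketch is reproducible
sample = rng.sample(roster, k=100)  # every student has an equal chance of selection

# Contrast with a self-selected poll: however many volunteers respond,
# they were never drawn with equal probability from the whole group.
```

`random.sample` draws without replacement, so the same student cannot be surveyed twice, and a representative 100 drawn this way beats an unrepresentative 500.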

Independent and Dependent Variables

Here is an even more potent example: The drug Viagra was approved for use after 21 clinical trials. One trial was an experiment in which researchers randomly assigned 329 men with erectile disorder to either an experimental group (Viagra takers) or a control group (placebo takers given an identical-looking pill). The procedure was double-blind—neither the men nor the person giving them the pills knew what they were receiving. The result: At peak doses, 69 percent of Viagra-assisted attempts at intercourse were successful, compared with 22 percent for men receiving the placebo (Goldstein et al., 1998). For many, Viagra worked. This simple experiment manipulated just one factor: the drug dosage (none versus peak dose). We call this experimental factor the independent variable because we can vary it independently of other factors, such as the men's age, weight, and personality. Other factors, which can potentially influence the results of the experiment, are called confounding variables. Random assignment controls for possible confounding variables. Experiments examine the effect of one or more independent variables on some measurable behavior, called the dependent variable because it can vary depending on what takes place during the experiment. Both variables are given precise operational definitions, which specify the procedures that manipulate the independent variable (the precise drug dosage and timing in this study) or measure the dependent variable (the questions that assessed the men's responses). These definitions answer the "What do you mean?" question with a level of precision that enables others to repeat the study. (See FIGURE 1.6 for the British breast milk experiment's design.) Let's pause to check your understanding using a simple psychology experiment: To test the effect of perceived ethnicity on the availability of a rental house, Adrian Carpusor and William Loges (2006) sent identically worded e-mail inquiries to 1115 Los Angeles-area landlords. 
The researchers varied the ethnic connotation of the sender's name and tracked the percentage of positive replies (invitations to view the apartment in person). "Patrick McDougall," "Said Al-Rahman," and "Tyrell Jackson" received positive replies 89 percent, 66 percent, and 56 percent of the time, respectively. Experiments can also help us evaluate social programs. Do early childhood education programs boost impoverished children's chances for success? What are the effects of different antismoking campaigns? Do school sex-education programs reduce teen pregnancies? To answer such questions, we can experiment: If an intervention is welcomed but resources are scarce, we could use a lottery to randomly assign some people (or regions) to experience the new program and others to a control condition. If later the two groups differ, the intervention's effect will be supported (Passell, 1993). Let's recap. A variable is anything that can vary (infant nutrition, intelligence, TV exposure—anything within the bounds of what is feasible and ethical). Experiments aim to manipulate an independent variable, measure a dependent variable, and control confounding variables. An experiment has at least two different conditions: an experimental condition and a comparison or control condition. Random assignment works to minimize preexisting differences between the groups before any treatment effects occur. In this way, an experiment tests the effect of at least one independent variable (what we manipulate) on at least one dependent variable (the outcome we measure). TABLE 1.3 compares the features of psychology's research methods.
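The recap above can be expressed as a schematic experiment in Python. Everything here is illustrative: the baseline score of 5.0, the treatment effect of 1.5, and the group sizes are invented numbers, not data from any study described in this section.

```python
import random

def run_experiment(n_participants=1000, treatment_effect=1.5, seed=7):
    """Manipulate one independent variable (treatment vs. placebo) and
    measure one dependent variable (a numeric outcome score)."""
    rng = random.Random(seed)
    pool = list(range(n_participants))
    rng.shuffle(pool)                      # random assignment controls confounds
    half = n_participants // 2
    experimental, control = pool[:half], pool[half:]

    def measure(group, effect):
        # Dependent variable: a baseline response plus random noise,
        # plus the treatment effect only for the treated group.
        return [rng.gauss(5.0, 1.0) + effect for _ in group]

    return measure(experimental, treatment_effect), measure(control, 0.0)

exp_scores, ctrl_scores = run_experiment()
mean_exp = sum(exp_scores) / len(exp_scores)
mean_ctrl = sum(ctrl_scores) / len(ctrl_scores)
# The difference between the two means reflects the manipulated variable.
```

Because assignment is random, any reliable difference between the group means can be attributed to the independent variable rather than to preexisting differences.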

The Scientific Method

In everyday conversation, we often use theory to mean "mere hunch." Someone might, for example, discount evolution as "only a theory"—as if it were mere speculation. In science, a theory explains behaviors or events by offering ideas that organize what we have observed. By organizing isolated facts, a theory simplifies. By linking facts with deeper principles, a theory offers a useful summary. As we connect the observed dots, a coherent picture emerges. A theory about the effects of sleep on memory, for example, helps us organize countless sleep-related observations into a short list of principles. Imagine that we observe over and over that people with good sleep habits tend to answer questions correctly in class, and they do well at test time. We might therefore theorize that sleep improves memory. So far so good: Our principle neatly summarizes a list of facts about the effects of a good night's sleep on memory. Yet no matter how reasonable a theory may sound—and it does seem reasonable to suggest that sleep could improve memory—we must put it to the test. A good theory produces testable predictions, called hypotheses. Such predictions specify what results (what behaviors or events) would support the theory and what results would disconfirm it. To test our theory about the effects of sleep on memory, our hypothesis might be that when sleep deprived, people will remember less from the day before. To test that hypothesis, we might assess how well people remember course materials they studied before a good night's sleep, or before a shortened night's sleep (FIGURE 1.1). The results will either confirm our theory or lead us to revise or reject it. Our theories can bias our observations. Having theorized that better memory springs from more sleep, we may see what we expect: We may perceive sleepy people's comments as less insightful. The urge to see what we expect is ever-present, both inside and outside the laboratory. According to the bipartisan U.S. 
Senate Select Committee on Intelligence (2004), preconceived expectations that Iraq had weapons of mass destruction led intelligence analysts to wrongly interpret ambiguous observations as confirming that theory (much as people's views of climate change may influence their interpretation of local weather events). This theory-driven conclusion then led to the preemptive U.S. invasion of Iraq. As a check on their biases, psychologists report their research with precise operational definitions of procedures and concepts. Sleep deprived, for example, may be defined as "X hours less" than the person's natural sleep. Using these carefully worded statements, others can replicate (repeat) the original observations with different participants, materials, and circumstances. If they get similar results, confidence in the finding's reliability grows. The first study of hindsight bias aroused psychologists' curiosity. Now, after many successful replications with differing people and questions, we feel sure of the phenomenon's power. Although a "mere replication" of someone else's research seldom makes headline news, recent instances of fraudulent or hard-to-believe findings have sparked calls for more replications (Asendorpf et al., 2013). Replication is confirmation. Replication enables scientific self-correction. One Association for Psychological Science journal now devotes a section to replications, and 72 researchers are collaborating on a "reproducibility project" that aims to replicate a host of recent findings (Open Science Collaboration, 2012). So, replications are increasing, and so far, most "report similar findings to their original studies" (Makel et al., 2012). In the end, our theory will be useful if it (1) organizes a range of self-reports and observations, and (2) implies predictions that anyone can use to check the theory or to derive practical applications. (Does people's sleep predict their retention?)
Eventually, our research may (3) stimulate further research that leads to a revised theory that better organizes and predicts what we know. As we will see next, we can test our hypotheses and refine our theories using descriptive methods (which describe behaviors, often through case studies, surveys, or naturalistic observations), correlational methods (which associate different factors), and experimental methods (which manipulate factors to discover their effects). To think critically about popular psychology claims, we need to understand these methods and know what conclusions they allow.

Description

The starting point of any science is description. In everyday life, we all observe and describe people, often drawing conclusions about why they act as they do. Professional psychologists do much the same, though more objectively and systematically, through

• case studies (in-depth analyses of individuals or groups).
• naturalistic observations (watching and recording the natural behavior of many individuals).
• surveys and interviews (asking people questions).

Experimentation 1-8

What are the characteristics of experimentation that make it possible to isolate cause and effect? "Happy are they," remarked the Roman poet Virgil, "who have been able to perceive the causes of things." How might psychologists perceive causes in correlational studies, such as the correlation between breast feeding and intelligence? Researchers have found that the intelligence scores of children who were breast-fed as infants are somewhat higher than the scores of children who were bottle-fed (Angelsen et al., 2001; Mortensen et al., 2002; Quinn et al., 2001). Moreover, the longer infants are breast-fed, the higher their later IQ scores (Jedrychowski et al., 2012). What do such findings mean? Do smarter mothers have smarter children? (Breast-fed children tend to be healthier and higher achieving than other children. But their bottle-fed siblings, born and raised in the same families, tend to be similarly healthy and higher achieving [Colen & Ramey, 2014].) Or, as some researchers believe, do the nutrients of mother's milk also contribute to brain development? To find answers to such questions—to isolate cause and effect—researchers can experiment. Experiments enable researchers to isolate the effects of one or more factors by (1) manipulating the factors of interest and (2) holding constant ("controlling") other factors. To do so, they often create an experimental group, in which people receive the treatment, and a contrasting control group that does not receive the treatment. To minimize any preexisting differences between the two groups, researchers randomly assign people to the two conditions. Random assignment—whether with a random numbers table or flip of the coin—effectively equalizes the two groups. If one-third of the volunteers for an experiment can wiggle their ears, then about one-third of the people in each group will be ear wigglers. So, too, with ages, attitudes, and other characteristics, which will be similar in the experimental and control groups.
Thus, if the groups differ at the experiment's end, we can surmise that the treatment had an effect. To experiment with breast feeding, one research team randomly assigned some 17,000 Belarus newborns and their mothers either to a control group given normal pediatric care, or to an experimental group that promoted breast-feeding, thus increasing expectant mothers' breast-feeding intentions (Kramer et al., 2008). At three months of age, 43 percent of the infants in the experimental group were being exclusively breast-fed, as were 6 percent in the control group. At age 6, when nearly 14,000 of the children were restudied, those who had been in the breast-feeding promotion group had intelligence test scores averaging six points higher than their control condition counterparts. With parental permission, one British research team directly experimented with breast milk. They randomly assigned 424 hospitalized premature infants either to formula feedings or to breast-milk feedings (Lucas et al., 1992). Their finding: For premature infants' developing intelligence, breast was best. On intelligence tests taken at age 8, those nourished with breast milk scored significantly higher than those who were formula-fed. No single experiment is conclusive, of course. But randomly assigning participants to one feeding group or the other effectively eliminated all factors except nutrition. This supported the conclusion that for developing intelligence, breast is indeed best. If test performance changes when we vary infant nutrition, then we infer that nutrition matters. The point to remember: Unlike correlational studies, which uncover naturally occurring relationships, an experiment manipulates a factor to determine its effect. Consider, then, how we might assess therapeutic interventions. Our tendency to seek new remedies when we are ill or emotionally down can produce misleading testimonies.
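The ear-wiggler point is easy to verify with a quick simulation. The volunteer pool below is hypothetical; only the one-third figure follows the example in the text.

```python
import random

rng = random.Random(0)  # seeded so the sketch is reproducible

# One-third of a hypothetical volunteer pool can wiggle their ears.
volunteers = [{"ear_wiggler": i % 3 == 0} for i in range(3000)]

rng.shuffle(volunteers)  # random assignment to two equal-sized conditions
experimental, control = volunteers[:1500], volunteers[1500:]

share_exp = sum(v["ear_wiggler"] for v in experimental) / len(experimental)
share_ctrl = sum(v["ear_wiggler"] for v in control) / len(control)
# Both shares land near one-third: the preexisting trait is equalized
# across groups, as ages, attitudes, and other characteristics would be.
```

No one chose which group got the ear wigglers; the shuffle alone balanced them, which is exactly why random assignment controls for traits the experimenter never even measured.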
If three days into a cold we start taking vitamin C tablets and find our cold symptoms lessening, we may credit the pills rather than the cold naturally subsiding. In the 1700s, bloodletting seemed effective. People sometimes improved after the treatment; when they didn't, the practitioner inferred the disease was too advanced to be reversed. So, whether or not a remedy is truly effective, enthusiastic users will probably endorse it. To determine its effect, we must control for other factors. And that is precisely how new drugs and new methods of psychological therapy are evaluated (Chapter 16). Investigators randomly assign participants in these studies to research groups. One group receives a treatment (such as a medication). The other group receives a pseudotreatment—an inert placebo (perhaps a pill with no drug in it). The participants are often blind (uninformed) about what treatment, if any, they are receiving. If the study is using a double-blind procedure, neither the participants nor those who administer the drug and collect the data will know which group is receiving the treatment. In double-blind studies, researchers check a treatment's actual effects apart from the participants' and the staff's belief in its healing powers. Just thinking you are getting a treatment can boost your spirits, relax your body, and relieve your symptoms. This placebo effect is well documented in reducing pain, depression, and anxiety (Kirsch, 2010). Athletes have run faster when given a supposed performance-enhancing drug (McClung & Collins, 2007). Drinking decaf coffee has boosted vigor and alertness—for those who thought it had caffeine in it (Dawkins et al., 2011). People have felt better after receiving a phony mood-enhancing drug (Michael et al., 2012). And the more expensive the placebo, the more "real" it seems to us—a fake pill that costs $2.50 works better than one costing 10 cents (Waber et al., 2008). 
To know how effective a therapy really is, researchers must control for a possible placebo effect.
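The logic of such a placebo-controlled comparison can be sketched in a short simulation. All the numbers below are invented for illustration (PLACEBO_RELIEF and DRUG_RELIEF are hypothetical effect sizes, not values from any study): both groups get the belief-driven relief, only one gets the drug, and the group difference isolates the drug's true effect.

```python
import random

random.seed(0)

# Hypothetical effect sizes, invented for illustration:
PLACEBO_RELIEF = 2.0   # relief produced merely by believing you may be treated
DRUG_RELIEF = 5.0      # additional relief produced by the actual drug

# Each participant has a personal tendency to recover on their own --
# a potential confound that random assignment should balance across groups.
natural_recovery = [random.gauss(0, 3) for _ in range(1000)]

ids = list(range(1000))
random.shuffle(ids)                              # random assignment
treatment, control = ids[:500], ids[500:]

def relief(person, gets_drug):
    r = natural_recovery[person] + PLACEBO_RELIEF   # both groups believe
    if gets_drug:
        r += DRUG_RELIEF                            # only one group gets the drug
    return r

avg_treat = sum(relief(p, True) for p in treatment) / len(treatment)
avg_control = sum(relief(p, False) for p in control) / len(control)

# The group difference estimates the drug's effect beyond placebo.
print(f"treatment {avg_treat:.1f}, control {avg_control:.1f}, "
      f"estimated drug effect {avg_treat - avg_control:.1f}")
```

Because both groups share the same belief (and the same balanced mix of natural recoverers), whatever difference remains between them can be credited to the drug rather than to hope or spontaneous improvement.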

Regression Toward the Mean

1-6 What is regression toward the mean? Correlations not only make visible the relationships we might otherwise miss; they also restrain our "seeing" nonexistent relationships. When we believe there is a relationship between two things, we are likely to notice and recall instances that confirm our belief. If we believe that dreams are forecasts of actual events, we may notice and recall confirming instances more than disconfirming instances. The result is an illusory correlation. Illusory correlations feed an illusion of control—that chance events are subject to our personal control. Gamblers, remembering their lucky rolls, may come to believe they can influence the roll of the dice by again throwing gently for low numbers and hard for high numbers. The illusion that uncontrollable events correlate with our actions is also fed by a statistical phenomenon called regression toward the mean. Average results are more typical than extreme results. Thus, after an unusual event, things tend to return toward their average level; extraordinary happenings tend to be followed by more ordinary ones. The point may seem obvious, yet we regularly miss it: We sometimes attribute what may be a normal regression (the expected return to normal) to something we have done. Consider two examples:

• Students who score much lower or higher on an exam than they usually do are likely, when retested, to return to their average.

• Unusual ESP subjects who defy chance when first tested nearly always lose their "psychic powers" when retested (a phenomenon parapsychologists have called the decline effect).

Failure to recognize regression is the source of many superstitions and of some ineffective practices as well.
When day-to-day behavior has a large element of chance fluctuation, we may notice that others' behavior improves (regresses toward average) after we criticize them for very bad performance, and that it worsens (regresses toward average) after we warmly praise them for an exceptionally fine performance. Ironically, then, regression toward the average can mislead us into feeling rewarded for having criticized others and into feeling punished for having praised them (Tversky & Kahneman, 1974). The point to remember: When a fluctuating behavior returns to normal, there is no need to invent fancy explanations for why it does so. Regression toward the mean is probably at work.
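The exam example can be made concrete with a small simulation. The score scale, ability spread, and noise level below are made up for illustration: each student has a stable ability, each exam adds a chance fluctuation, and the students who were extreme on the first exam fall back toward the average on the retest.

```python
import random

random.seed(42)

# Made-up model: stable ability plus chance fluctuation on every exam.
abilities = [random.gauss(70, 5) for _ in range(10_000)]

def exam_score(ability):
    return ability + random.gauss(0, 10)   # large element of chance

first = [exam_score(a) for a in abilities]
second = [exam_score(a) for a in abilities]   # same students, retested

# Select the students whose FIRST score was extreme (top 5 percent).
cutoff = sorted(first)[int(0.95 * len(first))]
extreme = [i for i, s in enumerate(first) if s >= cutoff]

mean_first = sum(first[i] for i in extreme) / len(extreme)
mean_second = sum(second[i] for i in extreme) / len(extreme)

print(f"extreme group, first exam: {mean_first:.1f}")
print(f"same students, retest:     {mean_second:.1f}")  # falls back toward average
```

Nothing intervenes between the two exams; the fallback comes entirely from selecting students on an extreme first score. That is why regression can masquerade as an effect of whatever we did in between, such as praising or criticizing.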

Psychology's Research Ethics

1-10 Why do psychologists study animals, and what ethical guidelines safeguard human and animal research participants? How do human values influence psychology? We have reflected on how a scientific approach can restrain biases. We have seen how case studies, naturalistic observations, and surveys help us describe behavior. We have also noted that correlational studies assess the association between two factors, which indicates how well one thing predicts another. We have examined the logic that underlies experiments, which use control conditions and random assignment of participants to isolate the effects of an independent variable on a dependent variable. Yet, even knowing this much, you may still be approaching psychology with a mixture of curiosity and apprehension. So before we plunge in, let's entertain some common questions about psychology's ethics and values.

Protecting Research Participants

Studying and protecting animals. Many psychologists study animals because they find them fascinating. They want to understand how different species learn, think, and behave. Psychologists also study animals to learn about people. We humans are not like animals; we are animals, sharing a common biology. Animal experiments have therefore led to treatments for human diseases—insulin for diabetes, vaccines to prevent polio and rabies, transplants to replace defective organs. Humans are complex. But the same processes by which we learn are present in rats, monkeys, and even sea slugs. The simplicity of the sea slug's nervous system is precisely what makes it so revealing of the neural mechanisms of learning.

Sharing such similarities, should we not respect our animal relatives? The animal protection movement protests the use of animals in psychological, biological, and medical research. "We cannot defend our scientific work with animals on the basis of the similarities between them and ourselves and then defend it morally on the basis of differences," noted Roger Ulrich (1991). Out of this heated debate, two issues emerge. The basic one is whether it is right to place the well-being of humans above that of other animals. In experiments on stress and cancer, is it right that mice get tumors in the hope that people might not? Should some monkeys be exposed to an HIV-like virus in the search for an AIDS vaccine? Is our use and consumption of other animals as natural as the behavior of carnivorous hawks, cats, and whales? The answers to such questions vary by culture. In Gallup surveys in Canada and the United States, about 60 percent of adults have deemed medical testing on animals "morally acceptable." In Britain, only 37 percent have agreed (Mason, 2003). If we give human life first priority, what safeguards should protect the well-being of animals in research? One survey of animal researchers gave an answer. Some 98 percent supported government regulations protecting primates, dogs, and cats, and 74 percent supported regulations providing for the humane care of rats and mice (Plous & Herzog, 2000). Many professional associations and funding agencies already have such guidelines. Most universities screen research proposals, often through Institutional Review Board ethics committees, and laboratories are regulated and inspected. British Psychological Society (BPS) guidelines call for housing animals under reasonably natural living conditions, with companions for social animals (Lea, 2000). American Psychological Association (APA) guidelines state that researchers must ensure the "comfort, health, and humane treatment" of animals and minimize "infection, illness, and pain" (APA, 2002).
The European Parliament mandates standards for animal care and housing (Vogel, 2010). Animals have themselves benefited from animal research. One Ohio team of research psychologists measured stress hormone levels in samples of the millions of dogs brought each year to animal shelters. They devised handling and stroking methods to reduce stress and ease the dogs' transition to adoptive homes (Tuber et al., 1999). Other studies have helped improve care and management in animals' natural habitats. By revealing our behavioral kinship with animals and the remarkable intelligence of chimpanzees, gorillas, and other animals, experiments have also led to increased empathy and protection for them. At its best, a psychology concerned for humans and sensitive to animals serves the welfare of both.

Studying and protecting humans. What about human participants? Does the image of white-coated scientists delivering electric shocks trouble you? Actually, most psychological studies are free of such stress. With people, blinking lights, flashing words, and pleasant social interactions are more common. Moreover, psychology's experiments are mild compared with the stress and humiliation often inflicted by reality TV shows. In one episode of The Bachelor, a man dumped his new fiancée—on camera, at the producers' request—for the woman who had earlier finished second (Collins, 2009). Occasionally, though, researchers do temporarily stress or deceive people, but only when they believe it is essential to a justifiable end, such as understanding and controlling violent behavior or studying mood swings. Some experiments won't work if participants know everything beforehand. (Wanting to be helpful, the participants might try to confirm the researcher's predictions.)
The ethics codes of the APA and Britain's BPS urge researchers to (1) obtain potential participants' informed consent before the experiment, (2) protect them from harm and discomfort, (3) keep information about individual participants confidential, and (4) fully debrief people (explain the research afterward). University ethics committees use these guidelines to screen research proposals and safeguard participants' well-being.

