Research Methods - Chapter 6


BEHAVIORAL OBSERVATION: (4)

Watch for objective behavior indicators. ******************************** ▪ e.g., would you ever cheat or lie on an exam? ▪ they tell you the study is over and to take $12 from the envelope, but there's $15 in it ▪ they watch to see how much money you take

Choosing Questions

We think that people can probably answer quite a few questions about their lives ... But can they do so accurately?

Observer bias:

observers may record what they want to see or what they expect to see, thus biasing records *systematically* away from what is actually happening.

BOGUS PIPELINE

▪ *Assumption:* People will tell the truth if they believe their thoughts are already known, i.e., if the experimenter has a 'direct pipeline' into their minds. ▪ *Three Steps:* 1.) *Pretest*: Administer hundreds of survey questions, acquire participant responses, and gather information about the participant 2.) *Demonstrate*: Bring participants in and demonstrate, through a rigorous process, that you can "see into their mind" 3.) *Inquire*: Ask questions about undesirable attitudes. ************************************************* ▪ this technique was inspired by polygraph machines, but the bogus pipeline doesn't actually use a polygraph ▪ the assumption is that you'll tell the truth if you think the researchers already know what the truth is ▪ Pretest - give hundreds of survey items, with odd questions buried among other odd surveys - basically collect a bunch of information about you ▪ Demonstrate - they hook you up to something that looks like a polygraph machine, say "answer with the truth," then ask you to lie on the second question while someone secretly makes the machine go off - the trick is that they already know what you're going to answer ▪ after showing you that the machine goes off when you lie, they ask questions they really don't know the answers to, and you will tell the truth

scale 1: LIKERT ("lick-urt") (3)

▪ A scale in which each statement is accompanied by five response options: strongly disagree, disagree, neither agree nor disagree, agree, and strongly agree - Minor variations are often called 'Likert-type' or 'Likert-style' scales - Among the most prolific and prominently used scales in psychology ********************************************* ▪ this is the most commonly used psychological instrument in the world ▪ it's a very simple scale ▪ they give you a statement and then ask how much you agree with it

GOOD OBSERVATIONS (7)

▪ Are both reliable and valid ▪ Have *Multiple Coders*, which allows you to assess interrater reliability ▪ Have *Clear and Detailed Codebooks*, which: (1) allow observers to make accurate records, (2) allow you to mediate disagreements between observers, and (3) allow others to assess the quality of operationalizations ▪ the best way to get reliable and valid observations is to use a codebook and train your observers meticulously ******************************************************** ▪ the example shown is part of a codebook for coding remarks at a hockey game ▪ you have a description and a category, and each remark was coded with an intensity ▪ you record whether it was positive, specific or general, and do that for every single comment made ▪ have two other observers do the same thing and measure agreement across all of them

DOUBLE-BARRELED questions (4)

▪ Asking two questions at once. ▪ Example: "Wasn't that guitar riff amazing and weren't the song lyrics clever?" ▪ If someone says "yes," what does that mean? Are they referring to the guitar riff, the song lyrics, or both?

DOUBLE NEGATIVES (5)

▪ Avoid negative phrasing except when necessary ▪ Example: - "Was that not the best four-string quartet you have ever heard?" v. - "Was that the best four-string quartet you have ever heard?" ********************************** ▪ the idea is simplicity ▪ people shouldn't be struggling to answer your questions, you should keep them short and keep them simple

WRITING POOR QUESTIONS (2) - What sorts of ways could you "mess up," and write a "poor question?" - What does a "poor question" entail?

▪ Biasing responses... ▪ Having questions that don't relate to what you're trying to measure (internal reliability)

CONSTRUCT VALIDITY (of SURVEYS and POLLS) (3)

▪ Choosing question formats ▪ Writing well-worded questions ▪ Encouraging *accurate* responses

Describing what PEOPLE do - How many students get their bachelor's degrees? - How many students enrolled in college in 2015? - What were the most popular degrees? (4) - What is the biggest major? (3)

*Jobs* ▪ 1.8 million students get their bachelor's degrees ▪ 20.9 million students were enrolled in US colleges in 2015 ▪ Most popular degrees - business - health professions - psychology --> has been getting more popular ▪ Psychology is the biggest major - are there too many psychology majors? - Psych majors aren't happy w/ their options

How much TV do you watch (weekly)? - The importance of formatting (6)

*Scale 1*
▪ Up to 15 minutes
▪ 15 to 30 minutes
▪ 30 to 45 minutes
▪ 45 minutes to 1 hour
▪ 1 hour to 1.25 hours
▪ More than 1.25 hours

*Scale 2*
▪ Up to 7 hours
▪ 7 to 8 hours
▪ 8 to 9 hours
▪ 9 to 10 hours
▪ 10 to 11 hours
▪ More than 11 hours

▪ with the first scale, people tended to answer at the top of the range; with the second scale, people answered at the lowest levels ▪ *The Importance of Formatting:* People infer information about a question from how it is presented. ▪ People show *endpoint aversion*, use the *midpoint as the average* value, and scale values matter. ******************************************* ▪ how much are you watching per week? this includes Snapchat videos, Instagram, Facebook, Amazon, etc. --> *endpoint aversion*: people don't like the endpoints ... they think those are for the extremes


IMPLICIT ASSOCIATION TEST (IAT)

Examines implicit associations between different people or objects.

SAYING MORE THAN WE CAN KNOW

people might not know why they did something, but they will make up some kind of reason, one that seems logical to them, for why they did what they did

POSTER CHOICE (7)

▪ Group (a): wrote down reasons why they liked or disliked each of five posters, or... ▪ Group (b): provided information about their major, why they chose it, their college, and their career plans (Wilson et al., 1993) ▪ All participants then rated (on a 9-point scale) how much they *liked the five posters* and chose one to take home. ▪ Three weeks later, the participants were contacted. *Those who listed reasons* for liking or disliking the posters were *less satisfied* with their posters. ▪ there's a lot of evidence that writing things out can decrease the emotional force they have on people ▪ sometimes you ask people questions they don't know the answers to, but they think they can give you answers - the people then used the information they wrote out ******************************************** ▪ they bring people into the lab and ask them to pick a poster and write down all of the reasons why they liked or disliked it, OR write about their college or major ▪ when they were brought back 3 weeks later, the people who wrote down why they liked the poster were less satisfied with the poster they chose

OBSERVERS see what they expect (5)

▪ Hastorf & Cantril (1954): Fans of opposing teams each perceived the other team as having behaved worse on-field and as receiving more unfair breaks from the refs. ▪ Langer & Abelson (1974): Mental health providers characterized people as more violent & unstable when the people were introduced as patients than as job applicants. ▪ *Solution?* 1. Try to cloud or minimize prior expectations. 2. Try to *very carefully operationalize* variables so that expectations play little role in record-keeping. ****************************************** ▪ they asked things like which team was fouled more and which team received more breaks ▪ fans are looking for the other team to commit a foul or get breaks, and both sides answered with the opposing team - people will see what they expect to see ▪ solution: have extremely careful operationalizations ... and the more observers you have, the less likely they all are to share the same biased expectation

OBSERVATIONAL study Plan an observational study to see who is more likely to hold open a door for another person, men or women. Think about how to maximize your construct validity. Will observers be biased about what they record? Write a 2-3 sentence operational definition of what it means to "hold the door" for someone. Your operational definition should be clear enough that if you asked two friends to use it to code "holding the door" behavior, it would have good reliability & validity. Consider: How will you sample men and women? What will be your population? How is each person equally likely to be included in your sample? Any additional issues?

▪ How are you going to maximize construct validity ▪ how are you going to operationalize holding open the door for another person ▪ if you're doing this out in the real world, you have lost a lot of control - in the lab we can engineer different scenarios... a person falling down on crutches and dropping a lot of papers

BEHAVIORAL observation ▪ How much people talk. Male v. female participants. ▪ Violence at youth hockey/sports games. Positive v. negative comments. ▪ Emotional tone between parents and children after returning from work in the evenings. Father v. mother. *What conclusions might we have drawn if we'd only asked people for self-reports for these situations?* (5)

▪ How much people talk: - Clip-on mic & recorder for intermittent conversation sampling. *Equal number of words spoken for male and female participants*. ▪ Violence at youth hockey/sports games: - *Very few negative comments by parents were actually observed*. ▪ Emotional tone between parents and children after returning from work in the evenings: - *Fathers receive fewer positive, and more distracted greetings*. ▪ we're trying to move past observation into statistical data - people's expectations might be accurate or they might be dead wrong

RELATED: framing effects (5)

▪ Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: ▪ If Program A is adopted, 200 people will be saved. (72%) ▪ If Program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved. (28%) ▪ If Program C is adopted, 400 people will die. (22%) ▪ If Program D is adopted, there is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die. (78%)
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453-458.

scale 2: SEMANTIC DIFFERENTIAL (3)

▪ Multi-item scale for rating a target along several dimensions, each anchored by a pair of opposite adjectives. ▪ you have a bunch of different items to rate about the same target - with a Likert scale, it's your *agreement vs. disagreement* with a statement ********************************************* ▪ with the semantic differential, it's like you have to pick a side - you have to be a little careful with the semantic differential because you can like something and hate something at the same time

Scale Formats (2)

▪ Our first line of defense, because they're fast and easy, is *scales* - scales are usually our initial measures, which we'd then like to pair later with behavioral measures, etc.

SAYING MORE... Sock Example (7)

▪ People will report opinions and the reasons on which those opinions are based, even though they're effectively making it up as they go along. (Nisbett & Wilson, 1977) ▪ For identical nylon stockings: people preferred the right-most one and claimed their preference was based on quality. ▪ Empirically, there may be many forces influencing our behavior. However, we may often be unaware of them. ▪ When asked, we *attribute* reasons to our actions. Those inferred reasons might be biased or simply wrong. *********************************************************** ▪ they would bring people into the lab and give them the option to pick from a bunch of different socks ▪ they ask why you picked the ones you picked ▪ they're all the same socks

Question ORDER (3) ▪ unfortunately, basically every scale we ever run has more than one item - how do we avoid order effects? (5)

▪ Preceding questions can *anchor* perceptions, *frame* interpretations, and encourage *consistent* responding on later questions. ▪ what's happening here is that people derive information from the previous questions ▪ there's an effect called the *anchoring effect*, where you can really change people's perspective by changing the first thing that they see *How do we avoid order effects?* ▪ We should produce multiple versions and compare results ▪ Identify a set of questions that *does not* frame later questions - we can randomly assign an order - we can throw in some random (filler) questions like "what's your age" - we can give a middle option *********************************************** ▪ Example: "Do you favor/oppose affirmative action for women? ... for racial minorities?" versus asking about minorities first and women second
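The order-control ideas above (multiple versions, random order, filler questions) can be sketched in a few lines of code. This is a minimal illustration, not a real survey tool; the item wording and filler questions are made-up placeholders.

```python
import random

# Hypothetical survey items; the wording here is illustrative only.
core_items = [
    "Do you favor affirmative action for women?",
    "Do you favor affirmative action for racial minorities?",
]
fillers = ["What is your age?", "What is your favorite season?"]

def build_version(seed):
    """Return one randomized version of the survey: core items are
    shuffled, and a filler question is slotted between consecutive
    core items to weaken anchoring/framing from the prior question."""
    rng = random.Random(seed)
    items = core_items[:]
    rng.shuffle(items)
    ordered = []
    for i, q in enumerate(items):
        ordered.append(q)
        if i < len(items) - 1:
            ordered.append(rng.choice(fillers))
    return ordered

# Produce multiple versions, then compare results across them
# to detect order effects.
version_a = build_version(seed=1)
version_b = build_version(seed=2)
```

Running different participants through `version_a` vs. `version_b` and comparing their answers is exactly the "produce multiple versions and compare results" strategy from the notes.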

OPEN-ENDED (8)

▪ Open-ended questions allow any sort of response ▪ Allows for rich information and unanticipated responses ▪ the option to write whatever you want ▪ Requires careful analysis and effortful (time-intensive) coding of information - *Example:* "What is your favorite genre of music?" - this gives you a great wealth of data BUT unfortunately it is really hard to use - we want to look at and mathematically compare different groups of data - measuring it involves meticulously going through and reading every single answer ***************************************************** ▪ fill in the blank ▪ example: our early-term class survey

OBSERVERS affect what they see - Expectancy Effects (5)

▪ Rosenthal et al.: *The Pygmalion Effect*: Students & rats both perform better when their teachers/handlers are led to expect exceptional ability. (no other similarity between students and rats is implied) ▪ Clever Hans: A clever horse that noticed the expectations of his handler. He could seemingly "count" (and understand English...), but actually just responded to observers' unintentional cues. ▪ *Solution* - employ a *masked (or "blind") design*, so observers cannot systematically communicate expectations to participants. ▪ ALSO CALLED *EXPECTANCY effects*

ORDER EFFECTS (2)

▪ The order of the questions that you receive matters - preceding questions can change your response on later questions

OBS. V. SELF-REPORT (4)

▪ Without training, observers may often exhibit bias. - ex: the observer expects something and so they look for it ... sometimes they report what they want to see and not what they are really seeing ▪ However, self-reports are, in some sense, just reports from untrained observers. ▪ Poor operationalization can also increase the likelihood of bias for both self-report participants and trained observers. ********************************************** ▪ there are some difficulties that we have to watch out for here, and we've already referred to them in interrater reliability ▪ we need to provide training to prevent our observers from being biased

ACQUIESCENCE (4)

▪ Yea-saying and the related, but different, response set of nay-saying - example: when people say "yes" or "strongly agree" to every single item on the scale... not because they truly agree with it, but because they are using yea-saying as a shortcut ▪ Solution: Try reverse-wording some questions. Be careful about double negatives. - they throw in reverse-worded questions to throw people off ************************************* ▪ you think you've figured out what they're asking about, so you're just going to give one type of response for every question ▪ if they're asking about athleticism and you think you're athletic, you're going to pick the top level every time
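Reverse-worded items have to be re-coded before scoring so that a high number always means the same thing; a common recipe on a scale from `lo` to `hi` is `lo + hi - response`. A minimal sketch (the item names and which items are reversed are hypothetical):

```python
def reverse_score(response, lo=1, hi=5):
    """Flip a reverse-worded item: on a 1-5 scale, 5 -> 1, 4 -> 2, etc."""
    return lo + hi - response

# Hypothetical 4-item scale where items q2 and q4 are reverse-worded.
responses = {"q1": 5, "q2": 1, "q3": 4, "q4": 2}
reversed_items = {"q2", "q4"}

scored = {
    item: reverse_score(val) if item in reversed_items else val
    for item, val in responses.items()
}
total = sum(scored.values())  # q2: 1 -> 5, q4: 2 -> 4, so total = 18
```

Note that a yea-sayer who answers 5 to everything ends up with middling scores after reverse-coding, which is exactly how reverse-worded items expose acquiescence.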

LEADING questions (6)

▪ A question that contains or implies its own answer (thus inviting demand effects) ▪ Example: "What did you think of that amazing concert?" ▪ Solution: Use *neutral* phrasing to capture "true" opinion ▪ Example: Do you think relations between Blacks and Whites will always be a problem? - Or that a solution will eventually be worked out? ▪ Example: Do you think relations between Blacks and Whites are as good as they are going to get? - Or will they eventually get better? - leading questions relate closely to something called "framing effects" ************************************************** ▪ questions that want you to answer a certain way ▪ we've done a lot of work on how you ask a question and what that does to how a person answers it

RESPONSE SET (5)

▪ a shortcut approach to answering related survey questions - Rather than reporting sincere, thoughtful, and question-specific responses, a person may develop a pattern for responding - example: answering A, B, C, A, B, C over and over again ▪ *Solution:* Add an attention check - throw in questions like "if you're reading this question, choose C" ... if someone doesn't choose C, you might be a little skeptical of their data *************************************************** ▪ there are things we want to avoid in the way that people fill the scales out ▪ a response set would be, for example, just trying to get through the survey really fast by answering A, B, C, A, B, C over and over again
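Screening respondents on an attention check ("if you're reading this, choose C") is a simple filter over the data. A sketch with invented respondent rows (the column names and answers are placeholders):

```python
# Each row is one respondent's answers; "attn" is the attention-check
# item whose instructed answer is "C". Respondents who miss it are
# flagged as possibly using a response set.
rows = [
    {"id": 1, "attn": "C", "q1": 4, "q2": 2},
    {"id": 2, "attn": "A", "q1": 3, "q2": 3},  # failed the check
    {"id": 3, "attn": "C", "q1": 5, "q2": 1},
]

passed = [r for r in rows if r["attn"] == "C"]
flagged = [r["id"] for r in rows if r["attn"] != "C"]
```

Whether flagged respondents are dropped outright or just examined more skeptically is a design decision; the notes only say researchers "might be a little skeptical" of that data.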

GOOD OBSERVATIONS - Figure 6.8

▪ both reliable and valid *Remember: κ (kappa) helps us assess interrater reliability* ▪ Figure 6.8: Table 1 from Campos and colleagues' (2009) study of family interactions. The table depicts the degree of interrater reliability for each of the behaviors coded. - These are measurements of children's responses to parents returning home
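Cohen's κ, the interrater-reliability statistic mentioned above, compares the agreement two coders actually show (p_o) to the agreement expected by chance from their marginal frequencies (p_e): κ = (p_o − p_e) / (1 − p_e). A small sketch with invented positive/negative codes:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters coding the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's category frequencies."""
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two observers coding ten remarks as positive/negative (invented data).
a = ["pos", "pos", "neg", "pos", "neg", "pos", "pos", "neg", "pos", "pos"]
b = ["pos", "pos", "neg", "neg", "neg", "pos", "pos", "neg", "pos", "pos"]
k = cohens_kappa(a, b)  # 1.0 would mean perfect agreement
```

Because κ discounts chance agreement, it is a stricter check than simple percent agreement, which is why codebook-based studies like Campos et al. report it per coded behavior.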

FENCE-SITTING (3)

▪ choosing middle response options to avoid reporting controversial or oversimplified opinions. - ex: choosing the middle point , not wanting to pick a side ▪ Solution: Try removing neutral option from the response set

OTHER ISSUES - Are there any other sort of issues people may face when responding to questions? ▪ Deception? ▪ Inability to answer - introspective issues?

▪ Deception - intentionally not answering honestly ▪ Inability to answer - they think they can answer these things, but their answers don't line up with reality

FORCED-CHOICE (7)

▪ Forced-choice formats limit the type of responses ▪ Requires selecting from a pre-selected set of responses. - *Example*: Narcissistic Personality Inventory (Raskin & Terry, 1988): Two opposing statements are provided and the respondent is asked to choose one. _____ I really like to be the center of attention. _____ It makes me uncomfortable to be the center of attention. ▪ you have options given to you to choose from - rate on a scale of 1-5, 1-7, 1-9, etc. **************************************************** ▪ on the NPI, they might give you two opposing statements and tell you to choose the one that fits you best - even if you're kind of in the middle, you still have to pick one

Social DESIRABILITY (3)

▪ giving survey responses that make one appear better (or worse) than is actually the case. ▪ Solution: Guarantee anonymity and ensure that people know that their responses are anonymous. - *Past deception* in research may cause problems. If participants don't trust the researcher, then they may not believe that their responses really are anonymous. *********************************************** ▪ if you feel like you're going to be judged for what you're doing, that can change your behavior - ex: when you're out to eat... if no one else orders dessert, are you going to order dessert? ▪ this would be like "how good of a driver are you compared to average" ... you're afraid of being judged, so you might give less accurate responses in favor of yourself ▪ one solution is to guarantee anonymity ▪ researchers used deception in the past... the Milgram experiment, Tuskegee, etc.

How do we get accurate responses ? (9)

▪ honest responses are good, but people might genuinely believe something about themselves and be wrong - ex: when you survey average populations of Americans and ask them how good at driving they are compared to the average driver... ▪ 70-90% of people think they are above average ▪ it's statistically impossible for that many people to be above average ▪ WHY does everyone think they're above average? - their comparison group is weird - some people might be very good at speeding - some people might be good at parking or something else ▪ they choose how they answer the question... they can pick the aspect of driving they are best at and answer based on that

CONSTRUCT VALIDITY (2)

▪ how well the variables are operationalized, measured, manipulated, etc. - this is important to any type of research.

REACTIVITY (4)

▪ participants change their behavior when they know they are being observed. *Solutions?* ▪ Unobtrusive observations (Hide & watch). ▪ *Indirect Measures of Constructs:* Measure the results of a behavior, rather than the behavior itself. ****************************************************** ▪ Observer bias and reactivity ARE NOT the same thing ▪ you might change your behavior because you know you're being watched ▪ they might employ unobtrusive observations ▪ they can measure the results of a behavior and it's called an indirect measure

Life Satisfaction Scale Example (5)

▪ this scale has 5 items; you rate each 1-7 and add them together to find your life satisfaction ▪ why do we use odd-numbered ranges (like 1-7)? - we don't normally use 1-3 because there's not a lot of room for movement & it's statistically harder to compare those - when you use an odd-numbered scale, the midpoint is what's in between (i.e., neutral) ▪ we can also use scales as a manipulation ... make them 1-6 so there's no middle ... respondents have to pick a side
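Scoring a multi-item scale like the one above is just validation plus a sum. This sketch follows the 5-item, 1-7 description in the notes; the function name and specific ratings are made up for illustration.

```python
def score_scale(responses, n_items=5, lo=1, hi=7):
    """Sum a respondent's Likert ratings after sanity-checking them.
    For a 5-item, 1-7 scale the total ranges from 5 to 35,
    with a midpoint ("neutral") rating of 4 on each item."""
    if len(responses) != n_items:
        raise ValueError(f"expected {n_items} responses, got {len(responses)}")
    for r in responses:
        if not lo <= r <= hi:
            raise ValueError(f"rating {r} outside {lo}-{hi}")
    return sum(responses)

total = score_scale([6, 5, 7, 4, 6])  # -> 28
```

Switching the arguments to `hi=6` gives the even-numbered, no-middle variant described in the notes, where respondents are forced to pick a side.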

Shafir, E. (1993). Choosing versus rejecting: Why some options are both better and worse than others. Memory and Cognition, 21, 546-556. (6)

▪ you're serving on the jury of an only-child sole-custody case ▪ parent A is very middle-of-the-road ▪ parent B is more extreme, with stronger positives and stronger negatives ▪ participants are asked either who they would "award" sole custody of the child to, OR who they would "deny" sole custody of the child to - when thinking about "awarding," a majority of the jury awards custody to parent B because of their extreme positives - at the same time, when asked to "deny," they also choose parent B because parent B also has more extreme negatives

