Research Methods Final Exam
Causal claims
Explain, change. Indicates that one variable caused the other. Starts from a positive, negative, or zero association. Examples: "Music lessons enhance IQ"; "Texting blamed for rising teen pedestrian injuries"; "Shape of glass may influence how fast you drink."
Random Assignment
Assigning participants to groups by chance, so that every participant has an equal probability of ending up in any condition. For example, rolling a die or using a random number generator to place participants into groups.
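A minimal sketch of random assignment in Python; the participant labels and the two-condition design are invented for illustration:

```python
import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

# Shuffle so every participant has an equal chance of landing in
# either condition, then split the shuffled list in half.
random.shuffle(participants)
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]

print("Treatment:", treatment)
print("Control:  ", control)
```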
Why doesn't correlation equal causation? Understand the three causal explanations that can be used to explain an association claim (also see Figure 8.14).
1. Covariance: When two factors have a relationship with each other and one changes, there should be a change in the other factor as well, whether positive or negative. For example, a study shows that having a supportive adult figure has a positive covariance with a child's having good grades in school: when the adult is more supportive, grades go up; when the adult is less supportive, grades go down.
2. Temporal precedence: Did the cause come before the effect?
3. Internal validity: Are there confounds?
Be able to recognize and categorize examples of the four goals of psychological research
1. Describe behavior: What happens? e.g., Solomon Asch and the Passover table: how many people will conform?
2. Predict behavior: When does it happen? e.g., "People are more likely to conform when the group is large."
3. Explain behavior: Why does it happen? e.g., "People may agree to things in a group because they want to be accepted."
4. Change behavior: What changes it? e.g., "If people conform because they are afraid of being rejected, then having one person who agrees with you (even if five others don't) might make you less likely to conform, because you are still accepted by someone."
Understand the four ways of knowing the world
1. Intuition: Your gut feeling.
2. Authority: Ask someone who knows, e.g., the researchers themselves (their own experience) or others who have studied similar questions.
3. Logic: Using formal principles of reasoning, e.g., math or philosophy.
4. Observation: Testing or measuring.
Understand the 5 APA general ethical principles and 10 specific ethical standards
5 principles:
1. Beneficence and nonmaleficence: Treat people in ways that benefit them. Do not cause suffering. Conduct research that will benefit society.
2. Fidelity and responsibility: Establish relationships of trust; accept responsibility for professional behavior.
3. Integrity: Strive to be accurate, truthful, and honest in one's role as researcher, teacher, or practitioner.
4. Justice: Strive to treat all groups of people fairly. Sample research participants from the same populations that will benefit from the research. Be aware of biases.
5. Respect for people's rights and dignity: Recognize that people are autonomous agents. Respect their rights.
10 specific ethical standards: resolving ethical issues, competence, human relations, privacy and confidentiality, advertising and other public statements, record keeping and fees, education and training, research and publication, assessment, therapy.
Maturation Threat
A change in behavior that emerges more or less spontaneously over time. People adapt to strange environments, children get better at walking or talking. A threat to internal validity.
IRB (Institutional Review Board)
A committee responsible for interpreting ethical principles and ensuring that research using human participants is conducted ethically. An IRB panel in the U.S. includes at least five people: at least one must have academic interests outside the sciences (e.g., a lawyer or policy maker), and at least one must have no affiliation with the institution. The panel should reflect the diversity of the community and provides neutral judgment. IRB review is NOT required for research in normal educational settings (e.g., research methods classes), research involving surveys or interviews, or research involving public data where participants cannot be identified.
Bivariate Correlation
A measure of the relationship between exactly two variables. You measure both variables in the real world; note that when one of the variables is categorical, you CANNOT compute a Pearson's r. Example: to measure the association between deep conversations and happiness, researchers first used a subjective well-being scale to measure happiness and then an electronically activated recorder to measure deep conversations.
Spurious Association
A relationship between two variables that at first seems meaningful but has no logical causal connection. Example: the amount of ice cream sold correlates with deaths by drowning, when in fact both are due to the seasons.
Theory
A set of statements that describes general principles about how things relate to each other. E.g., the theory Harlow developed after extensive observations of primate babies and mothers concerned the importance of bodily contact in forming attachments. This theory led him to investigate certain questions, but not unrelated questions such as babies' food preferences.
Third Variable Problem
A type of confounding in which the third variable leads to a mistaken causal relationship between two others. For example, cities with a greater number of churches have a higher crime rate. However, more churches do not lead to more crime, but instead the third variable, population, leads to both more churches and more crime.
Manipulated Variable
A variable the researcher controls, typically by assigning participants to its different levels. Example: if you wanted to know how fertilizer affects plant growth, the amount of fertilizer applied would be the variable you manipulate.
Understand the various sections of an empirical paper and the type of material that goes into each section
Abstract; acknowledgments/author notes
Introduction: why the study is being done; sets up why what is being predicted is being predicted
Method: participants, design, procedure, materials
Results: scores analyzed; basic averages; tables/figures and their meaning; stats used; was the hypothesis supported?
Discussion: How well do the authors think the results match predictions? How do the authors explain any discrepancies? Do the authors acknowledge flaws? What additional studies do the authors recommend? What are the authors' main conclusions?
Open ended Questions
Allow respondents to answer in any way they like instead of with a simple yes or no.
Confound
An alternative explanation for a result (the word also means "to confuse"). What to do: run a different study to rule out the confound, run a true experiment, or measure variables that might be confounds.
Manipulation Check
An extra dependent variable that researchers can insert into an experiment to help them quantify how well an experimental manipulation worked. Example: Mueller and Dweck wanted to be sure the children believed the praise the experimenter gave them. At one point in the study, they asked the kids to use two colors to fill in a circle indicating how important their hard work and their smartness each were in explaining their performance.
Outlier
An extreme score. A single case that stands out far away from the pack.
Control Variable
Any variable that an experimenter holds constant on purpose. For example, in the van Kleef study, one control variable was the type of food, because it was always the same pasta.
Compare and contrast applied research, basic research, and translational research and recognize examples.
Applied research: Done with a practical problem in mind, in the hope that findings will be directly applied to the solution of that problem in a real-world context. E.g., a study might ask whether a school district's new method of teaching math works better than the former one, or test the efficacy of a treatment for depression in a sample of trauma survivors. Applied researchers might also look for better ways to identify those who are likely to do well at a particular job.
Basic research: Not intended to address a specific practical problem; the goal is to enhance the general body of knowledge. E.g., wanting to understand the structure of the visual system or the motivations of depressed people. Focuses on solid research that may nonetheless be applied later on.
Translational research: Uses lessons from basic research to develop and test applications to health care, psychotherapy, or other forms of treatment or intervention; a dynamic bridge from basic to applied research. E.g., basic research on approach and avoidance goals showed these orientations can lead to different kinds of moods. Translational researchers applied this knowledge to depression and developed a therapy that teaches depressed people to reframe their avoidance goals into approach goals.
Justice
Calls for a fair balance between the kinds of people who participate in research and the kinds of people who benefit from it. When the principle of justice is applied, researchers ensure that participants involved in a study are representative of the kinds of people who would also benefit from its results. For example, the principle of justice is violated when researchers study a group of prisoners for tuberculosis because they are convenient. However, it is perfectly acceptable to study prisoners for tuberculosis because it is particularly prevalent in institutions where people live together in a somewhat confined area.
Categorical variable
A variable whose levels are categories; a researcher might assign numbers to the levels. E.g., sex, whose levels are male and female; or species, whose levels in a study might be 1 = rhesus, 2 = macaque, 3 = chimpanzee, and 4 = bonobo (in no particular order).
Quantitative Variable
A variable whose levels are coded with meaningful numbers, e.g., height and weight.
Compare and contrast convenience sampling, self-selection, purposive sampling, snowball sampling, and quota sampling. Be able to recognize examples of each.
Convenience sample: Chosen based on who is easiest to access, e.g., college students in Intro Psych are easy to find and study.
Purposive sample: A specified type of person is chosen to be studied; importantly, however, these individuals are not chosen at random from the population from which they come. E.g., you might be interested in studying LGB people, but instead of sampling from the population of LGB people, you get your sample from the Los Angeles Gay and Lesbian Center.
Snowball sample: Participants in a study are asked to refer individuals they know to also participate in the study.
Self-selected sample: Participants find the study themselves and choose to enroll; this is common with internet polls.
Quota sample: Subsets of the population of interest are identified and a quota (target number) is set for each group (e.g., 80 Asian Americans, 80 African Americans, 80 Latinos); from there, participants are selected NON-randomly (perhaps using one of the above strategies).
Distinguish convergent from discriminant validity
Convergent: Your measure of a construct should correlate with a similar measure of the same construct. For example, people's scores on the BDI, assessing depression, should correlate with their scores on the CES-D, which also measures depression.
Discriminant: Your measure should not correlate strongly with measures of different constructs. For example, the BDI, which measures depression, should not correlate strongly with a measure of physical health; although these constructs may overlap in some way, we should not expect them to correlate strongly.
Compare and contrast data fabrication and data falsification
Data fabrication: Instead of recording what really happened in the study (or without running the study at all), researchers invent data that fit their hypotheses. Data falsification: Researchers influence the study's results, perhaps by selectively deleting observations from a data set or by influencing their research subjects to act in a hypothesized way.
Frequency claims
Describe "1 in 25 U.S> teens attempt suicide" "79% think regifting is acceptable"
Be able to determine when correlations, t-tests, and ANOVAs should be used.
Do students living in dorms feel less lonely than those living off campus? Independent-samples t-test, because we are investigating the difference between two separate groups.
What is the association between loneliness and GPA? Pearson correlation (r), because you are seeing if there is a linear association between the two; it tests the magnitude and direction of an association between two variables.
Do SAT scores differ for low-, middle-, and high-income students? ANOVA, because it is used when testing the statistical significance of differences between two or more groups; the independent variable has two or more categories.
Did an intervention work to improve memory from Time 1 to Time 2? Paired-samples t-test, because it is used in "before and after" studies.
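A sketch of how each of these tests might be run in Python with scipy.stats; all of the data below are invented toy numbers, not real study results:

```python
from scipy import stats

dorm = [3, 4, 2, 5, 3, 4]             # loneliness scores, dorm residents
off_campus = [5, 6, 4, 7, 5, 6]       # loneliness scores, off-campus students

lonely = [3, 4, 2, 5, 6, 7]           # loneliness for six students
gpa = [3.8, 3.5, 3.9, 3.2, 2.9, 2.7]  # GPA for the same six students

low = [1100, 1050, 980]               # SAT scores by income group
mid = [1200, 1180, 1150]
high = [1350, 1300, 1280]

time1 = [10, 12, 9, 11]               # memory scores, same people twice
time2 = [13, 14, 11, 12]

print(stats.ttest_ind(dorm, off_campus))  # two separate groups -> independent-samples t-test
print(stats.pearsonr(lonely, gpa))        # two quantitative variables -> Pearson's r
print(stats.f_oneway(low, mid, high))     # three or more groups -> one-way ANOVA
print(stats.ttest_rel(time1, time2))      # same people before/after -> paired-samples t-test
```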
Informed consent
Each participant learns about the research project, knows the risks and benefits, and decides whether to participate. Researchers are not allowed to mislead people about the study's risks and benefits, nor are they allowed to coerce people into participating.
Be able to recognize examples of environmental, stimulus, social, and instructional manipulations.
Environmental manipulation: Manipulation of the physical setting. Example: Halloween candy; with a sign saying "please take 1 or 2 pieces" and a mirror present, kids took less candy.
Stimulus manipulation: Vary the actual experimental stimuli. Example: Does thinking about the elderly make you behave like the elderly? Unscrambling words.
Social manipulation: Depends on the action of another person in the experimental situation. Example: a ball-tossing game; what changes in participants' brains when the ball is tossed to them?
Instructional manipulation: Part of the cover story or instructions. Example: "memorize" vs. "form an impression of" Donald; forming an impression helps you remember in the long run.
Respect for persons
From the Belmont Report. First: individuals participating in research should be treated as autonomous agents; they should be free to make up their own minds about whether they wish to participate in a research study. Second: some people have less autonomy, so they are entitled to special protection when it comes to informed consent; prisoners, the elderly, and children all fall under this category.
Understand all 8 threats to validity presented in lecture.
The threats: history threat, maturation threat, regression threat, testing threat, heterogeneous attrition, homogeneous attrition, confounds, and threats to internal and external validity.
Heterogeneous attrition: Different amounts or different types of people drop out of the two conditions of your experiment (e.g., dropping out of Condition A but not Condition B). Threatens internal validity. Example: if all the heavy smokers drop out of the intervention group (but not the control group) of your "quit smoking" study, it will look like the intervention worked even if it didn't.
Homogeneous attrition: People drop out of your study in equal amounts across conditions, so Condition A is still comparable to Condition B. Affects external validity. Example: if all the heavy people drop out of a diet study, you can't generalize the results to heavy people.
Internal validity: Making sure that observed changes in your study are due to your intervention and NOTHING else.
External validity: The extent to which the results of a specific study can be generalized to other people, places, or times (a study can't have it unless it is internally valid).
Reliability
How consistent the results of a measure are. With test-retest reliability, the researcher gets consistent scores every time he or she uses the measure. With interrater reliability, consistent scores are obtained no matter who measures or observes. With internal reliability, a study participant gives a consistent pattern of answers, no matter how the researcher has phrased the question.
Construct Validity
How well the researcher measured the variables. E.g., for "60% of teens text and drive," the researcher could evaluate this by using an internet survey, standing near an intersection and recording behavior, using cell phone records, etc.
External validity
How well the results of a study generalize to, or represent, people or contexts besides those in the study itself. If a researcher asked 100 Facebook friends how happy they were and 44 of them said they struggled to be happy, the researcher could not argue that 44% of Americans are unhappy; the researcher would have had to ensure those 100 friends were all American.
Be able to determine the direction of a correlation from a scatterplot or correlation coefficient.
If r is positive, the direction is positive, and the points in the scatterplot slope upward from left to right. If r is negative, the direction is negative, and the points slope downward. If r is near zero, there is no direction, and the points show no consistent slope.
Restriction of Range
In a correlational study, if there is not a full range of scores on one of the variables in the association, it can make the correlation appear smaller than it really is. Example: a selective college admits only students with high SAT scores. Scores on the SAT can range from 600 to 2400, but the selective college accepts only students who score 1800 or higher, thus restricting the observed range of SAT scores.
Distinguish independent samples t-tests from paired samples (dependent samples) t-tests.
Independent Samples: Give one group of people an active drug and give a different group of people an inactive placebo, then compare the blood pressures between the groups. These two samples would likely be independent because the measurements are from different people. Paired Samples (dependent samples t test): Sample the blood pressures of the same people before and after they receive a dose. The two samples are dependent because they are taken from the same people. The people with the highest blood pressure in the first sample will likely have the highest blood pressure in the second sample.
Representative Sample
It is an unbiased sample. For example, a representative sampling technique for the population of Democrats in Texas would be obtaining a list of all registered Texas Democrats from public records and calling a sample of them through randomized digit dialing.
Be able to recognize an APA citation for a journal article, a book chapter, and a book.
Journal article: Last, F. M., & Last, F. M. (Year Published). Article title. Journal Name, Volume(Issue), pp. Pages.
Book chapter: Last, F. M. (Year Published). Title of chapter. In F. M. Last Editor (Ed.), Title of book/anthology (pp. Pages). Publisher City, State: Publisher.
Book: Last, F. M. (Year Published). Book title. Publisher City, State: Publisher.
Theory driven Hypothesis
Made for studies designed to test a theory, looking for data that support or falsify the theory. E.g., suppose a theory has been proposed that anxiety causes insomnia. A researcher testing this theory might predict that if two groups of participants are compared, one put in an anxiety-provoking situation and one put in a relaxing situation, the anxious group will have more problems sleeping.
Data driven hypothesis
Making a hypothesis by examining the specific findings of previous, similar studies and generalizing those findings to the current study. E.g., basing a prediction on a previous study's finding that anxiety causes insomnia.
Observational or behavioral measures
May record physical traces of behavior, such as stress measured by counting the number of tooth marks on a person's pencil. May also give subjects an opportunity to perform a behavior and record:
"If they do it" (conformity): Will subjects give the wrong answer in the Asch line experiment?
"How much they do it" (conformity): How much shock will a subject give to a student?
"How long they do it for" (task engagement): How much time do kids spend playing with markers?
"How many times they do it" (attraction): How many times do two people smile at each other?
Understand each of McGuire's heuristics for developing hypotheses and be able to recognize examples of each
McGuire uses inductive and deductive techniques for generating hypotheses.
Inductive (specific to general):
Case studies: Use one particular person, e.g., Sam feels pressure to fit in when giving opinions about a candidate, so he conforms to his peers.
Paradoxical incidents: Unexpected behavior; things that shouldn't happen but do, e.g., Sam says something he does not actually believe.
Practitioner's rule of thumb: Study experts in a field and understand how those people achieve what they achieve, e.g., people are more likely to use a product if it is endorsed by a celebrity.
Serendipity: Arriving at a hypothesis by accident, e.g., Pavlov noticed dogs drooled when he came to the door.
Deductive:
Reasoning by analogy: Use similarities between phenomena to understand a less well understood phenomenon, e.g., how minority stress leads to health problems.
Functional analysis: Look at what an organism has to do to survive and derive a hypothesis to apply to another organism.
Hypothetico-deductive: If A and B, then C; a logical way to bring things together.
Accounting for conflicting results: Find information from one source, compare it to another, and see how they connect, e.g., some people say you perform better in front of an audience, some say you perform worse.
Accounting for exceptions: Look at outliers to begin thinking of a whole new research question or hypothesis.
Internal validity
Means that a study should be able to eliminate any other explanation for the association. The threats ask: did your IV actually influence your DV? Can a valid causal statement be made about the effect of the IV on the DV? E.g., if you claim music lessons enhance IQ, there should be nothing besides the music lessons that could explain the IQ gains.
Compare and contrast measured variables and manipulated variables
Measured variable: Simply observed and recorded as it occurs naturally. E.g., height, IQ, and blood pressure are typically measured using scales, rulers, or devices; gender and hair color are also measured. Manipulated variable: A variable a researcher controls by assigning participants to different levels of that variable. E.g., a researcher might give some participants 10 mg of a medicine, others 20 mg, and others 30 mg. A participant could end up at any of the levels because the researcher does the manipulating. Some variables (e.g., gender) cannot be manipulated.
Compare and contrast mundane realism and experimental realism.
Mundane Realism- Studies that look like the real world Example: Studying gambling behavior by turning lab into "casino" Experimental Realism- studies that are psychologically meaningful to the subject Example: Asch's conformity studies
Double blind study
Neither the participants nor the researchers who evaluate them know who is in the treatment group and who is in the comparison group.
How are observer bias and observer effects related to criticisms of observational measures?
Observer bias: Occurs when observers' expectations influence their interpretation of participants' behaviors or the outcome of the study. Instead of rating behaviors objectively, observers rate behaviors according to their own expectations or hypotheses. Observer effects: The observers actually change the behavior of those they are observing to match the observers' expectations.
Hypothesis
Or prediction: a way of stating the specific outcome a researcher expects to observe if the theory is accurate. E.g., Harlow's hypothesis concerned the way baby monkeys would interact with two kinds of mothers; he predicted the babies would spend more time with the cozier mother than with the wirier one. A single theory can lead to many predictions, because a single hypothesis is not sufficient to test the entire theory, only part of it.
Understand order effects, practice effects, and carryover effects. How does counterbalancing account for these effects?
Order effects: Happen when exposure to one level of the independent variable influences responses to the next level. Practice effects: A type of order effect; a long sequence might cause participants to get better at the task, or to get tired or bored, by the end. Carryover effects: Some form of contamination carries over from one condition to the next; for example, if you sip orange juice right after brushing your teeth, the first taste contaminates your experience of the second. Counterbalancing: Researchers present the levels of the independent variable in all possible orders, so that any order effects cancel each other out when all the data are collected.
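A minimal sketch of full counterbalancing in Python, assuming a hypothetical experiment with three conditions: every possible order is generated, and participants are rotated through the orders so each order is used equally often.

```python
from itertools import cycle, permutations

conditions = ["A", "B", "C"]             # hypothetical levels of the IV
orders = list(permutations(conditions))  # all 3! = 6 possible orders

# Cycle participants through the orders so that, across the whole
# sample, any order, practice, or carryover effects cancel out.
participants = [f"P{i:02d}" for i in range(1, 13)]
for person, order in zip(participants, cycle(orders)):
    print(person, "->", " then ".join(order))
```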
Concurrent measures design
Participants are exposed to all levels of an independent variable at roughly the same time. Example: A baby was shown two faces at the same time, a male and a female face. An experimenter recorded which one they looked at the longest. Here the independent variable is the gender of the face, and the baby's looking preference would be the dependent variable.
Repeated Measures design
Participants are exposed to the levels at different times. Example: mothers' oxytocin levels were monitored as they interacted closely with their own toddlers and also, a couple of days later at the same time of day, with a different toddler they did not previously know. The study found that oxytocin levels were higher when women interacted with toddlers they did not know.
Pretest Posttest design
Participants are randomly assigned to at least two groups and are tested on the key dependent variable twice, before and after exposure to the independent variable. Researchers might use this when they want to see whether random assignment made the groups equal. Example: with depressed participants, you want a baseline measure of depression so you can tell whether the therapy you put into practice is effective; test again afterward to see if the therapy worked.
Posttest only design
Participants are randomly assigned to independent-variable groups and are tested on the dependent variable once. The most common type of experiment. Example: take lab mice and randomly assign them to groups to test the impact of a virus; the control mice are still kept in the same compounds but given placebo injections. Then go back and check the dependent variable: did the mice injected with the virus die? You can see exactly what the dependent variable did.
Forced choice format
People give their opinion by picking the best of two or more options. e.g. in political polls when asked to decide between two candidates
Understand the three patterns that can result from a correlation
Positive correlation: high goes with high and low goes with low. E.g., high scores on shyness go with high ability to read facial expressions, and low with low. Negative correlation: high goes with low and low goes with high. E.g., as the weather gets colder, air conditioning costs decrease. Zero correlation: no association between the variables. E.g., both low and high levels of screen time are associated with all levels of physical activity.
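A sketch with numpy showing how the sign of r reflects each pattern; the data are simulated here, not from any real study:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)

positive = x + rng.normal(scale=0.5, size=200)   # high goes with high
negative = -x + rng.normal(scale=0.5, size=200)  # high goes with low
zero = rng.normal(size=200)                      # no association with x

for name, y in [("positive", positive), ("negative", negative), ("zero", zero)]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name:8s} r = {r:+.2f}")
```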
Association claims
Predict. States that two variables go together but does not claim that one causes the other. Uses words like link, associate, correlate, predict. "Mixed-weight couples argue more"; "Fit kids do better at math and reading"; "Bedroom TVs may boost kids' risk of fat and disease."
Compare and contrast primary sources and secondary sources
Primary source: Contains the full research report. Secondary source: Summarizes information from primary sources and presents the basic findings. It can be risky to rely on secondary sources because the summary may be incorrect.
Testing Threat
Refers to a change in participants as a result of taking a test more than once. They may become more practiced with the test (leading to better scores) or fatigued or bored (leading to worse scores) over time. What to do: don't give a pretest, or have a control group.
Plagiarism
Representing the ideas or words of another as one's own without giving appropriate credit.
What is the difference between research and practice?
Research: designates an activity designed to test a hypothesis and permit conclusions to be drawn, or contribute to general knowledge. Practice: interventions that are designed solely to enhance the well-being of an individual patient or client and that have a reasonable expectation of success. Goal is to provide treatment, provide diagnosis, etc.
Beneficence
Researchers must take precautions to protect research participants from harm and ensure their well-being. They have to carefully assess the risks and benefits of their research. They must also consider who will benefit and who will be harmed. Will a community gain something of value from the knowledge this research is producing? Will there be costs to a community if this research is not conducted?
Understand the 3 principles stated in the Belmont report (Respect for Persons, Beneficence, Justice) and how they are applied in research.
Respect for persons: Individuals should be treated as autonomous agents; respect people's decisions; persons with diminished autonomy are entitled to protection. Beneficence: More than acts of kindness (an obligation); do not harm participants; maximize possible benefits and minimize possible harms. Justice: People should be treated equally; who should benefit from research, and who should bear its burdens?
History Threat
Result from a "historical" or external event that affects most members of the treatment group at the same time as the treatment, making it unclear whether the change in the experimental group is caused by the treatment received or by the historical factor. E.g. 9/11 increasing peoples fear of flying
Physiological measures
Sample some kind of physiological output, such as brain activity, hormone levels, or heart rate; usually require equipment to amplify and record. E.g., heart rate/blood pressure to measure arousal; cortisol to measure stress; neural responses, measured via blood flow to different parts of the brain.
Understand the pros and cons of the three main types of measurements.
Self-report: Pros: cheap and easy; direct; you can ask a lot of questions in a short time; you can ask questions that help you untangle confounds later; questionnaires can be anonymous. Cons: subjects may treat them casually; subjects may try to make themselves look good; they may not be honest with themselves; they may act based on expectancies or reactance; they know you are assessing them; responses can be influenced by trivial things.
Behavioral measures: Pros: you can often measure a behavior without a subject knowing; by observing behavior you may get a truer response than from a questionnaire; behaving may be closer to the construct of interest than responding to a questionnaire. Cons: harder than a questionnaire to make believable; sometimes it is not possible to measure the behavior you care about (e.g., condom use); it takes a lot of time to get a behavioral measure (one subject at a time); you can't measure many things at once.
Physiological measures: Pros: subjects usually don't have control over these variables; some can be fairly easy to measure (cortisol). Cons: they can scare people; they can be expensive.
Compare and contrast simple random sampling, cluster sampling, multistage sampling, stratified random sampling, oversampling, and systematic sampling. Be able to recognize examples of each.
Simple random sample: Identify everyone in the population and pull at random (e.g., names in a hat pulled at random).
Stratified random sample: The researcher selects particular demographic categories on purpose and then randomly selects individuals within each category, making sure each category is proportionate to the population.
Oversample: The researcher INTENTIONALLY overrepresents one or more groups, sampling more of them than their actual proportion in the population; the people are still pulled at random, and results can later be weighted back to the true proportions.
Systematic sample: Not entirely random but systematic; for example, every 10th person who walks out of a store might be sampled.
Cluster sample: To randomly sample students in Pennsylvania, start with a list of colleges (clusters) in PA, randomly select five of those colleges, and include every student from those five colleges.
Multistage sample: To randomly sample students in Pennsylvania, start with a list of colleges (clusters) in PA, randomly select five of those colleges, THEN select a random sample of students from within each of those five colleges.
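A sketch of three of these techniques in Python; the population, the strata, and the sample sizes are all invented for illustration:

```python
import random

population = [f"student_{i}" for i in range(1000)]

# Simple random sample: every member has an equal chance of selection.
simple = random.sample(population, 50)

# Systematic sample: every 10th person from a random starting point.
start = random.randrange(10)
systematic = population[start::10]

# Stratified random sample: random selection *within* each category,
# proportionate to each category's share of the population (5% of each).
strata = {"first_year": population[:400], "upperclass": population[400:]}
stratified = [person for group in strata.values()
              for person in random.sample(group, len(group) // 20)]

print(len(simple), len(systematic), len(stratified))
```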
Biased Sample
Some members of the population of interest have a higher chance of being chosen for the sample than others; a biased sample is unrepresentative of the population. For example, a biased sampling technique for the population of Democrats in Texas would be recruiting people sitting in the front row at the Texas Democratic Convention.
Variable
Something that varies, so it must have at least two levels. E.g., in "Nearly 60% of teens text and drive," texting while driving is the variable, and its levels are whether a person does text while driving or does not.
Compare and contrast statistical significance and effect size.
Statistical significance: Tells you how likely it is that the changes you observe in participants are due to chance. If the p value is less than the alpha level, you can conclude the difference you observed is statistically significant. P values range from 0 to 1; the lower the p value, the stronger the evidence of statistical significance. An alpha of .05 means you are willing to accept a 5% chance that your results are due to chance.
Effect size: A statistically significant difference is not necessarily big or helpful for decision making; significance only means you can be confident there is a difference. Effect size measures how big it is: <.1 = trivial effect, .1-.3 = small effect, .3-.5 = moderate effect, >.5 = large effect.
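A sketch contrasting the two ideas, using invented scores and assuming scipy is installed. Cohen's d, shown here, is one common effect-size measure for two-group comparisons (by Cohen's conventions, d of about 0.2 is small, 0.5 medium, 0.8 large):

```python
import math
from scipy import stats

group_a = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 5.0]
group_b = [5.6, 5.4, 5.8, 5.5, 5.7, 5.6, 5.5, 5.4]

# Statistical significance: how likely is a difference this big by chance?
t, p = stats.ttest_ind(group_a, group_b)
print(f"p = {p:.4f}  (significant at alpha = .05? {p < .05})")

# Effect size: how BIG is the difference, in pooled-SD units (Cohen's d)?
def cohens_d(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (mb - ma) / pooled_sd

print(f"d = {cohens_d(group_a, group_b):.2f}")
```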
Self Report Measures
Subjects answer questions asked on paper, orally, or over the internet. E.g., measuring sexual behavior and condom use; testing Viagra (70% of surveyed men found it successful).
Distinguish systematic error from random error (the book discusses systematic variability and unsystematic variability).
Systematic error: Error caused by extraneous variables that tend to influence all scores in one condition while having no effect, or a different effect, on scores in other conditions. It can distort the apparent effect of the IV and threatens internal validity; the goal is to eliminate it. For example, an electric scale that measures .6 grams too high: every mass recorded deviates from the true mass by .6 grams.
Random error: Error caused by irrelevant variables whose average influence on the outcome is the same in all conditions. It does not affect the internal validity of results, but it can hide the effect of the IV; the goal is to keep random error as small as possible. Unlike systematic error, random error averages out across measurements. For example, weighing a ring on a scale and getting a slightly different weight each time.
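A small simulation of the difference, with made-up numbers: a scale biased +0.6 grams (systematic error) versus a noisy but unbiased scale (random error). Averaging many readings removes the random error but not the systematic error.

```python
import random

random.seed(1)
true_mass = 10.0

# Systematic error: every reading shifted the same way (+0.6 g).
biased = [true_mass + 0.6 for _ in range(1000)]

# Random error: readings scatter around the true mass.
noisy = [true_mass + random.gauss(0, 0.6) for _ in range(1000)]

print(sum(biased) / len(biased))  # ~10.6 -- the bias survives averaging
print(sum(noisy) / len(noisy))    # ~10.0 -- the noise averages away
```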
Deductive Reasoning
Taking general information and making a specific prediction. Used in theory driven hypotheses.
Inductive Reasoning
Taking specific results from another study and using it to make a more general prediction for the research question of interest.
Compare and contrast test-retest reliability, interrater reliability, and internal consistency
Test-retest: E.g., each time you take an IQ test, you should get a consistent score; the measure should not give wildly different scores on different occasions. Mostly used when measuring constructs assumed to be stable (intelligence, personality).
Interrater: E.g., two researchers watching a child at the same time and counting how much he smiles should count the same number; if they do, they have interrater reliability.
Internal consistency: E.g., people who take Diener's five-question well-being scale: the questions on the scale are worded differently, but each item is intended to measure the same construct. Therefore, people who agree with the first item on the scale should also agree with the second, etc.
Levels
The values a variable can take. E.g., texting while driving: a person does text while they drive AND a person does not text while they drive.
Validity
The appropriateness of a conclusion or decision, and in general, a valid claim is reasonable, accurate, and justifiable. Psychologists specify which of the four validities a claim is instead of just saying a claim is valid.
Statistical Validity
The extent to which a study's statistical conclusions are accurate and reasonable; how well do the numbers support the claim? E.g., in the report about Americans trying to stay happy, there is a margin of error of "+/- 3 percentage points," a reminder that the number associated with the claim is an estimate.
Independent Variable
The manipulated variable is the independent variable. The name comes from the fact that the researcher has some "independence" in assigning people to different levels of this variable.
Dependent Variable
The measured variable is the dependent variable or outcome variable. How a participant acts on the measured variable DEPENDS on the level of the independent variable.
Directionality Problem
The problem that, in an association, it can be unclear which variable came first, i.e., which is the cause and which is the effect. Example: does inflation cause unemployment, or does unemployment cause inflation?
Why might IRBs vary in the decisions they make?
They may vary because of the backgrounds of their members.
How do researchers overcome observer bias and observer effects?
They must do more than create a control group. They must create clear rating scales, called codebooks, to reduce bias, and they can use multiple observers. While observing, observers need to blend in, spend time making participants comfortable, and measure the results of behavior (e.g., at a museum, look at the wear and tear on the floor to see which areas are most popular). They should conduct a double-blind study or, when that is not possible, a masked design, where participants know what group they are in but observers do not.
What is the relationship between reliability and validity?
To say a measure is reliable is only half the story. A measure of head circumference can be extremely reliable but not valid for its intended use as a measure of intelligence. Although a measure may be less valid than it is reliable, it cannot be more valid than it is reliable. Reliability has to do with how well a measure correlates with itself; for example, an IQ test is reliable if it correlates with itself over time. Validity has to do with how well a measure correlates with something else; e.g., an IQ test is valid if it is associated with grades or life success.
Descriptive statistics
Used to describe the basic features of the data in a study. Different from inferential statistics because you are not trying to reach conclusions beyond the data at hand.
Inferential Statistics
Used to interpret or draw general conclusions about a set of observations.
Regression Threat
When a performance is extreme at time 1, the next time that performance is measured (time 2) it is likely to be less extreme.
Attrition Threat
When only a certain kind of participant drops out. If just any camper drops out midweek it may not be a problem, but if the most rambunctious camper leaves midweek, his departure creates an alternative explanation for Nikhil's results.
Debriefing
When participants are carefully informed about the study's hypotheses after the experiment. Example: in Stanley Milgram's shock experiments, participants were debriefed, informed about the hypotheses, and introduced to the (unharmed) learner. Despite the careful debriefing process, some participants were dramatically affected by learning that they had been willing to harm another human being.
How is social desirability related to criticisms of self report measures?
When survey respondents give answers that make them look better than they really are, the responses decrease the survey's construct validity. Because respondents are shy, worried, or embarrassed about giving an unpopular opinion, they may not tell the truth on a survey or other self-report measure.
Validity
Whether the operationalization measures what it is supposed to measure; comes after reliability. E.g., measuring head circumference as a test of intelligence: although head circumference measurements may be very reliable, they are not valid as a measure of intelligence.
Understand within group difference and between group difference.
Within-group difference: A measure of how much an individual in your sample tends to change over time. Between-group difference: Differences between people. Example: people with one gay family member vs. two; people with no gay family members vs. one.
Compare and contrast within-groups designs (within subjects) and independent-groups designs (between subjects)
Within-groups design: Every participant is exposed to all the manipulations; you don't need as many people. Between-groups (independent-groups) design: Different groups receive different manipulations, so each participant experiences only one condition.
Compare and contrast face validity, content validity, and criterion validity
Face validity: If a measure looks like a plausible measure of the construct, it has face validity. E.g., head circumference has high face validity as a measure of people's hat size, but low face validity as a measure of intelligence.
Content validity: E.g., if the conceptual definition of intelligence contains distinct elements such as "reason, plan, solve problems, think abstractly," then to have adequate content validity, any operationalization of intelligence should include questions or items to assess each of these components.
Criterion validity: The extent to which a measure is related to an outcome. E.g., if a company wants to use an aptitude test to predict how well its employees will make sales, employees would need to take the test and have their scores compared to their sales figures to see whether the test has criterion validity.
Semantic differential format
Respondents are asked to rate a target object using a numeric scale anchored with adjectives. E.g., on Rate My Professors: "Easy ... 1, 2, 3, 4, 5 ... Hard."