Research Methods Exam 1: Ch. 1-7
Fence sitting
playing it safe by answering in the middle of the scale, especially when survey items are controversial
Hypothesis
prediction about a specific outcome based on a theory
Review journal article
provides a summary of all the published studies that have been done in one research area
Survey
often used when people are asked about a consumer product
Being Overconfident
once we decide something, we become confident that we are correct; confidence ≠ correctness
Negative association
"People who multitask the most are the worst at it"; high goes with low and low goes with high--high rates of multitasking go with low ability to multitask, and low rates of multitasking go with a high ability to multitask. Zero association Negative association Positive association
Zero association
"Screen time not linked to physical activity in kids"; no association between the variables. On a scatterplot, both low and high levels of screen time are associated with all levels of physical activity.
Type I Error
"false positive" mistakes; when a study might mistakenly conclude from a sample that there is an association between two variables (ex: shyness and reading facial expressions) when there really is no association in the real population. Type I error Type II error
Type II Error
"miss"; a study might mistakenly conclude from a sample that there is no association between two variables (ex: screen time and physical activity) when there really is an association in the full population Type I error Type II error
Self-Report, Observational Measures, Physiological Measures
3 Ways to Measure Variables
1) Respect for persons--informed consent, no coercion 2) Beneficence--protect participants from harm and ensure their well-being 3) Justice--a fair balance between those who participate in the research and those who benefit from it
3 guiding principles of the Belmont Report--what do each of them entail?
1) observers might see what they expect to see (our favorite sports team might appear to be fouled more than the opposing team) 2) observers can affect what they see (the Clever Hans horse--the audience was unknowingly telling the horse the correct answers to math problems) 3) the observed might react to being watched
3 problems with Observer Bias
Observational Measures
AKA Behavioral Measure; this measure operationalizes a variable by recording observable behaviors or physical traces of behavior. Ex: happiness=frequency of smiles per day
Thinking the Easy Way, Thinking What We Want to Think
Biases of intuition fall into two basic categories
Confirmation Bias
Cherry picking the evidence; seeking and accepting the evidence that supports what we already think
Validity of Measurement
Does it measure what it is supposed to measure? Ex: The head circumference intelligence test--head circumference can be measured reliably, but it does not actually measure intelligence. Ex: A bathroom scale may report that you weigh 300 lbs every time you step on it--the scale is certainly reliable, but it is not valid.
covariance, temporal precedence, internal validity
Three Criteria for Causation
Face validity
a measure has _______ validity to the extent that it appears to experts to be a plausible measure of the variable in question; if it looks as if it should be a good measure, it has ______ validity. Ex: a measure of head circumference has high ________ validity as a measure of people's hat size, but it has low ________ validity as a measure of intelligence. Ex: rapidity of problem solving, vocabulary size, or creativity would have higher _________ validity than head circumference as measures of intelligence.
Data
a set of observations
Theory
a statement that describes general principles about how variables relate to one another
Debrief
a structured conversation explaining and describing the nature of any deception used in the study
Ratio Scale of Measurement
all the properties of an interval scale, but there is an absolute zero point. This scale of measurement applies when the numerals of a quantitative variable have equal intervals and when the value of 0 truly means "nothing". On a test, a researcher might measure how many items people answer correctly. Some people might answer 0 items correctly, and their score of 0 represents truly "nothing correct". Ex: Reaction time, Age, Speedometer
Causal Claims
an association claim merely notes a relationship between two variables, but this claim argues that one of the variables is responsible for changing the other; has two variables (ex: Music lessons enhance IQ, Family meals curb teen eating disorders)
Categorical variables
categories (nominal variables). Ex: sex {1=M, 2=F}, shoes {1=Adidas, 2=Nike, 3=New Balance}
Cronbach's Alpha
commonly run to see if measurement scales have internal reliability; the closer this is to 1, the better the scale's reliability.
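Cronbach's alpha can be computed directly from item scores: alpha = k/(k-1) × (1 − Σ item variances / variance of total scores). A minimal Python sketch with made-up Likert ratings (the function and data names are hypothetical, not from the course materials):

```python
# Minimal sketch: Cronbach's alpha for a k-item scale.

def cronbach_alpha(items):
    """items: one inner list per scale item, each holding
    every participant's score on that item."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    # Each participant's total score across the k items
    totals = [sum(item[p] for item in items) for p in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Five participants answering a 3-item well-being scale (1-5 ratings);
# items that rise and fall together across people yield an alpha near 1.
scores = [
    [5, 4, 2, 1, 3],  # item 1
    [5, 5, 2, 1, 3],  # item 2
    [4, 4, 1, 2, 3],  # item 3
]
print(round(cronbach_alpha(scores), 2))  # 0.96 -> good internal reliability
```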
Frequency Claims
describe a particular rate or degree of a single variable (ex: the frequency of suicide attempts made by teens in the US is 1 in 25, 44% is the rate of adults who are not always happy--these headlines claim how frequent or common something is)
The Good Story
ex) a doctor treating stomach ulcers believed they were caused by stress and excess stomach acid, so his treatment was giving the patients antacids and carbonated drinks. His intuition probably delayed the discovery of the real cause (the bacterium H. pylori)
The Belmont Report
in 1976, a group of researchers, physicians, and philosophers gathered for four days to create this document; it was intended to ethically guide research for medicine, psychology, biology, etc.
Constant
something that could potentially vary but has only one level
Variable
something that varies; it must have at least two levels
Claim
the argument someone is trying to make
Content validity
to ensure ____________ validity, a measure must capture all parts of a measured construct; also involves subjective judgment about a measure. Ex: an operational definition of intelligence: the ability to reason, comprehend, and learn quickly. Our measure must capture all of these ideas.
Temporal precedence
to say that one variable has _________________ means it has to come before the other in time (ex: music lessons enhance IQ, stress causes health problems)
Operationalization
turning a conceptual variable into a measurable variable. Ex: asking people to respond to five items about their life satisfaction, using a 7-point scale.
Thinking the Easy Way
at times intuition is biased because some ideas are simply easier to believe than others. It's easier to believe a "good story" or memorable events; The Good Story, The Present Bias, The Pop-Up Principle
Five: beneficence, fidelity, integrity, justice, respect for persons. Belmont Report: respect for persons, beneficence, justice
Five general ethical principles. Which 3 are included in The Belmont Report?
Construct, Statistical, External, Internal
The Four Validities
Frequency claims, Association claims, Causal claims
The Three Claims include
Positive association
The headline "Shy people are better at reading facial expressions" suggests that, on average, the more shy people are, the better they can read facial expressions--high goes with high and low goes with low.
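The three association types map onto the sign of the correlation coefficient r. A minimal sketch with made-up numbers (the variable names are hypothetical, echoing the headlines above): a positive association gives r > 0, a negative association gives r < 0, and a zero association gives r near 0.

```python
# Minimal sketch: the sign of Pearson's r distinguishes positive,
# negative, and zero associations.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

shyness         = [1, 2, 3, 4, 5]
face_reading    = [2, 3, 5, 6, 8]  # rises with shyness -> positive r
multitask_rate  = [1, 2, 3, 4, 5]
multitask_skill = [9, 7, 6, 4, 2]  # falls as rate rises -> negative r

print(pearson_r(shyness, face_reading) > 0)            # True: high goes with high
print(pearson_r(multitask_rate, multitask_skill) < 0)  # True: high goes with low
```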
Priming
This is called _________________. Ex: 1) rank your top 3 favorite sports 2) how interested are you in joining the company softball team? Asking the sports question first primes respondents to express more interest in the softball team than they would if the questions were asked in the reverse order.
use short-cuts, trying to look good
Two factors that sway accurate responses
Construct, External
Two validities that support frequency claims
Internal reliability
a participant gives a consistent pattern of answers, no matter how the researcher has phrased the question. Ex: On the 5 item well-being scale, the questions are worded differently, but each item is intended to be a measure of the same construct. Therefore, people who agree with the first item on the scale should also agree with the second item (as well with items 3, 4, and 5). Similarly, people who disagree with the first item should also disagree with items 2, 3, 4, and 5. If the pattern is consistent across items in this way, the scale has ____________ reliability.
Manipulated variable
a variable a researcher controls, usually by assigning participants to the different levels of that variable (ex: a researcher might give one participant 10 milligrams of medication, another participant 20 mg, and another 30 mg)
Conceptual variables
abstract concepts such as "shyness" or "intelligence"; sometimes called a construct
Use Short-Cuts
answer all "A" or "strongly agree", or answer all "neutral"
Trying to Look Good
answer what will make the researcher like you
Likert-Type Scale
any scale like the Likert Scale but manipulated in some way. Ex: useless.........extremely helpful
Basic-Applied Research Cycle
applied research targets real-world problems, while basic research is intended to contribute to the general body of knowledge (not really meant to solve real-world problems); in the cycle, applied researchers draw on the findings of basic research, and applied problems inspire new basic research
Plagiarism
appropriation of another person's ideas, methods, processes, or results without giving appropriate credit
Association Claims
argues that one level of a variable is likely to be associated with a particular level of another variable. Variables that are associated are sometimes said to correlate or covary (ex headlines: Shy People Are Better at Reading Facial Expressions, People Who Multitask The Most Are The Worst At It, Screen Time Not Linked to Physical Activity in Kids).
Asking Biased Questions
asking questions that will likely lead to the desired response or expected answers; when we test hypotheses, we tend to ask questions that support the expectations (we seek out confirmation)
External Validity
concerned with how well the results generalize to the population
Validity
concerns whether the operationalization is measuring what it is supposed to measure
Interval Scale of Measurement
equal intervals between each unit of that measurement, equal magnitude, but no absolute zero point (a person can get a score of 0, but the 0 does not really mean "nothing"). Ex: IQ test; the distance between IQ scores of 100 and 105 represents the same as the distance between 110 and 115. However, a score of 0 on an IQ test does not mean a person has "no intelligence". Ex: body temperature in degrees Celsius; the intervals between levels are equal, but a temperature of 0 degrees does not mean that a person has "no temperature"
Predictive and Concurrent validity
evaluates whether the measure under consideration is related to a concrete outcome, such as behavior, that it should be related to according to the theory being tested. Ex: Suppose you work for a company that wants to predict how well job applicants would do as salespeople. For a few years the company has been using IQ to predict sales aptitude. You can give the aptitude test to current salespeople and compare the results with their current sales (concurrent), or compare the results with their sales in 3 months (predictive)--in both cases, we can determine the validity of our new measure using the correlation coefficient.
Confounds
in the real world, there are several possible explanations for any outcome. These alternative explanations are called _______________.
Empiricism
involves using evidence from the senses (sight, touch, hearing) or from instruments that assist the senses (such as thermometers, timers, photographs, weight scales, and questionnaires) as the basis for conclusions; empiricists base their conclusions on systematic observations (data).
The Present Bias
it's more difficult to notice the absence of something than the presence of something. ex) Dr. Rush only focused on patients who received the bleeding treatment and recovered
Internal validity
means that a study should be able to rule out all other alternative explanations for an association
The Tuskegee Syphilis Study
men who had syphilis were never told that they had the disease, and they never received any beneficial treatment. Ethical violations in this study: the participants were harmed, the participants were not treated with respect, and the researchers targeted a disadvantaged group.
Divergent validity
one measure of a construct should not correlate strongly with measures of a different construct. Ex: The results of participants' Beck Depression Inventory should not strongly correlate with their results on a scale of perceived physical health; r=0.08--not a strong correlation
Measured variable
one whose levels are simply observed and recorded (ex: height, IQ, blood pressure, weight)
Systematic Research
observing according to a structured plan and recording data consistently; merely watching someone without taking notes is not systematic
Operational variables
operationalizations; when researchers test their hypotheses with empirical research, they create operational variables. To operationalize means to turn a concept of interest into a measured or manipulated variable
Open-Ended questions
questions that allow respondents to answer in whatever way they see fit. Ex: what are your comments about this professor? Pro: provide a vast amount of info to the researchers. Con: researchers must categorize and code the responses (time-consuming)
Double-Negative Questions
questions that can make survey items unnecessarily complicated; people might simply get confused = poor construct validity; even one negative word can make the question difficult. Ex: "Does it seem possible or impossible to you that the Nazi extermination of the Jews never happened?"
Double-Barreled Questions
questions where the wording asks more than one question. Ex: Do you agree that the Second Amendment to our US Constitution guarantees your individual right to own a gun and that the Second Amendment is just as important as your other Constitutional rights? a) support b) oppose c) no opinion
Leading Questions
questions which can subtly prompt the respondent to answer in a certain way. Ex: Do you think the relations between Blacks and Whites a) are as good as they are going to get? b) or will they eventually get better?
Reliability
refers to how consistent the results of a measure are
Construct Validity
refers to how well a conceptual variable is operationalized; to ensure ________ validity, researchers must establish that each variable is measured reliably. ex) we would expect a study on obesity rates to use an accurate scale to weigh participants **when you ask how well a study measured or manipulated a variable, you are interrogating the ________ validity.
Validity
refers to the appropriateness of a conclusion or decision, and in general, a valid claim is reasonable, accurate, and justifiable.
Empirical journal articles
report, for the first time, the results of an (empirical) research study; contain details about the study's methods, the statistical tests used, and the numerical results of the study
Applied Research
research that is done with a practical problem in mind; researchers hope that the solution can be practically applied to the problem
Basic Research
research that is not intended to address or solve a particular practical problem; the goal is to enhance the general overall body of knowledge in a particular field
Forced-Choice questions
respondents give their opinion by picking the best of two or more options; often uses the Likert Scale. Ex: I really like to be the center of attention. It makes me uncomfortable to be the center of attention.
Theory-Data Cycle
scientists collect data to test, change, or update their theories. Ex: troubleshooting an electronic device is a form of engaging in this cycle.
Peer-Review Cycle
scientists write up the results of their research and publish them in journals for other scientists to view. Once scientists conduct research and want to tell everyone about it, they write up the results in a manuscript; the manuscript is sent to a scientific peer-reviewed journal, where it is either accepted or rejected.
Milgram Obedience Study
series of experiments on obedience to authority; the participant is the "teacher" and a confederate is the "learner"; the teacher is required to punish the learner for incorrect answers in the form of electric shocks, increasing the shock level with each incorrect answer; when the teacher refuses to give any more shocks, the authority figure simply says "continue". Two ethical concerns: 1) the participants were harmed in the form of stress 2) participants could have lasting negative effects from the study (Milgram debriefed the participants to help mitigate #2)
Thinking What We Want to Think
sometimes we do not want to challenge our preconceived ideas. We simply ____________________________; Confirmation Bias, Asking Biased Questions, Being Overconfident
Likert Scale
strongly disagree.......strongly agree (1-5)
Intuition
the ability to understand something immediately without conscious reasoning
Journal-to-Journalism Cycle
the path research takes from academic journals to journalism. Academic journals are a medium for scientists to present their research; the general public rarely reads them (bland). Journalism is the news most of us read, and it is usually not written by scientists.
Statistical validity
the extent to which the statistical conclusions are accurate and reasonable; did we use the appropriate analysis? Did we examine the statistics correctly? Type I & Type II Errors.
Convergent validity
the measure should correlate more strongly with other measures of the same construct than with measures of different constructs. Ex: The results of a participant's Beck Depression Inventory should strongly correlate with their results on the Center for Epidemiological Studies Depression Scale; a large correlation between the two means the measurements have HIGH ______________ validity
Test-retest reliability
the researcher gets consistent scores every time he or she uses the measure; typically used when measuring constructs that are said to be stable over time. Ex: What if we gave all of you IQ tests right now? Should the scores be similar in a month?
Cupboard Theory
theory of mother-infant attachment; the mother is valuable to a baby mammal because she is a source of food
Contact Comfort Theory
theory that hunger has little to do with why a baby monkey likes to cling to the warm, fuzzy fur of its mother. Instead, babies are attached to their mothers because of the comfort of cozy touch.
Quantitative variables
these variables are coded with meaningful numbers. Ex: height, weight, IQ, etc.
The Pop-Up Principle
things that easily come to mind tend to guide our thoughts
Self-Report
this measure operationalizes a variable by recording a participant's answers on a questionnaire or in an interview. Ex: If we are studying stress: "How often did you feel stressed or nervous in the last month?"
Physiological Measures
this measure operationalizes a variable by recording biological data such as brain activity, heart rate, or hormone levels; usually requires equipment like fMRI, ECG, etc. Ex: "Happier people have more blood flow to this region of the brain"
Interrater reliability
two or more independent observers will come up with the same (or similar) findings; consistent scores are obtained no matter who measures or observes. Ex: You are assigned to count the number of times a participant smiles during a study. There is a reason we have 3 judges in boxing
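Interrater agreement is often quantified with Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance: kappa = (p_o − p_e) / (1 − p_e). A minimal sketch with hypothetical smile codings from two observers (the function and data names are made up for illustration):

```python
# Minimal sketch: Cohen's kappa for two raters' categorical codings.

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # observed agreement: proportion of trials where the raters match
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement: product of each rater's marginal proportions,
    # summed over categories
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Two observers coding whether a participant smiled in each of 10 trials
obs1 = ["smile", "smile", "none", "none", "smile",
        "none", "smile", "none", "smile", "none"]
obs2 = ["smile", "smile", "none", "smile", "smile",
        "none", "smile", "none", "none", "none"]
print(round(cohens_kappa(obs1, obs2), 2))  # 0.6 (raw agreement is 0.8)
```

Here the raters agree on 8 of 10 trials, but half of that agreement is expected by chance, so kappa lands at 0.6 rather than 0.8.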
Poll
used when people are asked about their social or political opinions
Observational Research
when a researcher watches people or animals and systematically records how they behave or what they are doing
Ordinal Scale of Measurement
when quantitative data represent a rank order. Ex: finishers in a race (1st, 2nd, 3rd), 3 vs. 4 star restaurants
Data Falsification
when researchers alter the results of a study or lead their participants in the direction that supports their hypothesis
Statistically significant
when something is ________________, it means that the results of our study (the differences we found) are probably not due to chance.
Data Fabrication
where researchers create data in order to support their hypothesis
Covariance
where the two variables show an association