Research methods 1 Exam #1
Barnum question
-a question so general that nearly everyone gives the same answer, so it yields no useful information -ex. "Have you ever eaten fast food?"
What are some of the issues surrounding whether and to what extent psychology is a science?
- Science is: *Empirical: knowledge is gained through observation *Objective: conclusions are kept free from bias *Systematic: procedures follow carefully planned steps *Controlled: extraneous factors are eliminated or held constant
What is observer bias and how can blind observers and multiple observers help mitigate the effects of this?
- Observer bias: observers selectively attend to what they expect or hope to see. - Blind observers don't know the study's expectations or hypotheses, and multiple observers allow agreement to be checked, which increases objectivity.
What is a confounding variable? How is it a threat to internal validity?
-A variable that is not the focus of the research study, but affects the variables of interest in the study. -It threatens internal validity because its effects cannot be separated from the effect of the variable of interest, so you cannot conclude that the IV caused the change in the DV.
Scholarly works
-advance knowledge; written by experts for other experts
What is a pilot study and a catch trial? How is each helpful when designing a questionnaire?
Pilot study: a small preliminary run that helps determine the best way to ask the questions. Catch trial: an item included to test consistency and honesty and to catch arbitrary answering. Both help ensure the questionnaire produces reliable, interpretable responses.
What is the difference between qualitative and quantitative variables?
Qualitative: non-numeric, based on type or characteristic (ex. college major). Quantitative: numeric, based on the amount of a variable (ex. score on an IQ test, number of errors).
Difference between a structured and a semi-structured interview
Structured: more objective, data easier to deal with Semi-structured: wider range of responses, can discover new variables
two-group pretest-posttest
administering a pretest and a posttest to both groups of participants and comparing the change in scores between the groups
snowball sampling
identify one participant, who then refers others, and so on *least used
Hawthorne effect
improvement in performance due to being studied
content validity
inclusion of all aspects of a construct by items on a scale or measure
observational research
involves observing and recording the behavior of humans or animals
one-group pretest-posttest
nonexperimental design in which all participants are tested prior to exposure to a variable of interest and again after exposure
nonparticipant vs. participant
nonparticipant: the researcher or observer is not directly involved in the situation participant: the researcher or observer becomes actively involved in the situation
experimenter expectancy
subtle cues can influence participants
alternate forms reliability
the relationship between scores on two different forms of a scale
convenience sampling
sampling whoever is readily available (convenient) to the researcher
ordinal
vary in amount (quantitative), but each interval isn't necessarily the same, rankings ex. rank in army, educational degree, runners finishing race
two-group posttest only
design that omits the pretest and simply compares the posttest scores between the two groups
construct validity
whether a measure mirrors the characteristics of a hypothetical construct, can be assessed in multiple ways
secondary source
authors review research but do not report results of original study *review article, scholarly book
What are two biomedical and two psychological experiments that were thought to be conducted unethically and the resulting codes of ethics that came about as results?
Biomedical: 1) Nuremberg Code (1947): in response to physicians conducting medical studies on prisoners in Nazi concentration camps 2) Belmont Report (1978): in response to the Tuskegee syphilis study conducted from 1932-1972. Psychological: 1) Milgram's Shocking Experiment (1963) 2) Zimbardo's Stanford Prison Experiment (1971); these contributed to the APA's ethical guidelines for research with human participants.
Why does correlation not equal causation?
Causality is a function of the research design, not the type of statistics you use to analyze the data. A correlation by itself cannot establish sequencing or rule out third-variable (alternative) explanations.
Why does random assignment in experiments allow us to be able to control for more potentially confounding factors? Which one(s)?
Random assignment spreads preexisting individual differences and characteristics evenly across conditions, so any change in the DV can be attributed to the IV rather than to those participant factors.
Difference between confidentiality and anonymity and why they are important
Confidentiality: responses kept private, but researcher may be able to link participant w/ responses Anonymity: no one other than participant can link the participant to his/her responses They are important because they protect participants' privacy and encourage honest responding; both require vigilance on the part of the researcher.
Difference between correlational and experimental designs and the strength of their potential conclusions
Correlational: Does _____ change as a function of _____? -ex. height/weight, drug use/intelligence *Researchers do not attempt to alter variables - they measure them, so they can only conclude that variables are related. Experimental: Does _____ influence/(cause) _____? -ex. cause: alcohol (independent var.), effect: memory (dependent var.) *Only experiments support causal conclusions. *There is nothing inherently independent or dependent about a variable; it depends on how it is being used.
Requirements for a good hypothesis:
Hypothesis must be: 1) testable 2) falsifiable (if wrong) 3) precise 4) rational 5) parsimonious
difference between stratified random assignment and simple random assignment with regard to independent groups
In stratified random assignment, randomization occurs after selection of the sample and is designed to even out key characteristics across IV conditions. In simple random assignment, it isn't guaranteed that participant characteristics will be evened out across the IV conditions, and the procedure is often ineffective w/ small, heterogeneous samples.
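(Illustrative sketch only, not from the course materials: a minimal Python example contrasting the two assignment procedures, assuming a small hypothetical participant list with a made-up "gender" stratum and two conditions.)

import random

participants = [("P1", "F"), ("P2", "F"), ("P3", "M"), ("P4", "M"),
                ("P5", "F"), ("P6", "M"), ("P7", "F"), ("P8", "M")]
conditions = ["control", "treatment"]

def simple_random_assignment(people):
    # Shuffle everyone, then deal them out to the conditions in turn;
    # nothing guarantees the strata end up balanced across conditions.
    shuffled = random.sample(people, len(people))
    return {name: conditions[i % len(conditions)] for i, (name, _) in enumerate(shuffled)}

def stratified_random_assignment(people):
    # Randomize within each stratum so the key characteristic is evened
    # out across the IV conditions.
    assignment = {}
    strata = {}
    for name, stratum in people:
        strata.setdefault(stratum, []).append(name)
    for members in strata.values():
        shuffled = random.sample(members, len(members))
        for i, name in enumerate(shuffled):
            assignment[name] = conditions[i % len(conditions)]
    return assignment

print(simple_random_assignment(participants))
print(stratified_random_assignment(participants))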
What is the difference between internal validity and external validity and how do they usually relate within the context of an experiment? How is ecological validity related to one or both of these?
Internal: the extent to which you can demonstrate a causal relationship between your IV and DV External: the extent to which the results of a study can be generalized to other samples, settings, or procedures -They often trade off: tightly controlled experiments raise internal validity but can lower external validity -Ecological validity is an aspect of external validity: is the situation common? natural?
Where does a control group fit in a research design?
It is the level (group) of the independent variable that does not receive the treatment or manipulation. -ex. with two groups, one group is not given the drug while the other group is
What does it mean for a measure to be valid?
It measures what it is intended to measure and therefore produces accurate results
Why is peer review important for psychology as a science?
It serves as quality control: experts evaluate the methods and conclusions before publication, which provides insight into the process and progress of science and helps ensure published work meets scientific standards.
What does it mean for a study (or the relationship between variables) to be reliable?
It refers to the expectation that we will find similar results when we repeat a study.
What is the difference between a literal and a conceptual replication?
Literal replication: conducting the same study with new participants Conceptual replication: conducting a study examining the same patterns or relationships but with different methods
difference between matched random assignment and random assignment to order of conditions within a dependent-groups design
Matched random: participants are matched on a key characteristic and the members of each matched set are randomly assigned to different levels of the IV, so each participant experiences only one level. Random assignment to order of conditions: each participant experiences all levels of the IV, and the order in which they experience the conditions is randomized.
What is the difference between an open-ended and a closed-ended response format in questionnaires?
Open-ended: allows respondents to provide their own answers; non-numerical, so it is a type of qualitative measure. Closed-ended: typically quantitative measures that provide options for respondents to select from, ranging from dichotomous (yes/no) options to rating scales.
What are some positives and negatives for conducting interviews rather than questionnaires?
Positives: obtain individual's perspective, allow a researcher to ask follow-up questions and obtain detailed responses Negatives: potential social desirability bias, time-consuming, heightened demand characteristics, have potential for interviewer bias
What is the motivation behind and elements of an informed consent?
Potential participants must be informed of the topic, procedures, risks, and benefits of participation prior to consenting to participate. -Elements: 1) purpose/topic 2) generally what they will do/how long it will take 3) benefits (incentives) and risks (pain, discomfort, confidentiality) 4) how confidentiality will be protected 5) right to decline or withdraw w/o negative effects 6) names/contact info. for researchers
What are some potential problems with offering incentives to participants and the solutions that have become part of our code of research on human subjects?
Potential problems: 1) What if someone is uncomfortable but still wants the incentive? 2) What if people lie to get the incentive? Solutions: 1) Researchers should carefully consider who their potential participants are and not offer incentives that they would have a difficult time refusing. 2) The incentive should not be contingent on the participant completing the study.
What is an extraneous variable? Why does it matter if it is systematically changing or not?
-An extraneous variable is any factor other than the variable(s) being studied that affects (or could affect) the variables of interest. -It matters because an extraneous variable that changes systematically with the IV becomes a confound; one that varies randomly only adds noise.
What is debriefing and when is it particularly important?
-Clearing up any misconceptions that the participant might have and addressing any negative effects of the study. -Important when: 1) participation causes distress 2) you are employing deception
Types of IV manipulations
-Environmental: systematic changes to physical or social environment -Scenario: systematic changes to a scenario -Instructional: systematic changes to instructions, educational information, or feedback -Physiological: systematic changes to participants' or subjects' physical functioning
types of methods psychologists use to collect data
Questionnaires: -scales -question formats: MC, dichotomous (yes/no), Likert (rating), numerical responses, open-ended, closed-ended. Observational and unobtrusive measures.
Tips on how to construct questions for a questionnaire
Questions should: 1) be clear to the participant 2) generate interpretable results 3) reflect typical behavior
What are demand characteristics? How could they affect the results of a questionnaire or interview?
-Extraneous cues that guide or bias a participant's behavior, with or without the participant's awareness. -In questionnaires they can produce social desirability bias; in interviews, interviewer bias (body language/cues) can shape responses.
experience/environment factors that cause unwanted changes to your variable of interest
-History: an event or environmental condition caused the change -Maturation: natural changes over time caused the change -Testing: changes were due to previous exposure to a measure -Instrumentation: changes were due to inconsistency in the measurement instrument or observers
participant factors that cause unwanted changes to your variable of interest
-Regression to the mean: changes were b/c extreme scores usually become less extreme -Attrition: change was due to participants withdrawing from the study *can be a problem if people who leave are different from the people who stay -Selection: changes were due to preexisting differences between groups -Selection interactions: changes were due to an interaction between preexisting differences and another threat (e.g., history or maturation)
Why doesn't science prove things to be true?
-Science can rule out alternative accounts, but there can always be another. -Science is about strength of evidence
Difference between true experiments and quasi-experiments
-True: investigate cause/effect, people are randomly assigned to conditions -Quasi: try to investigate cause/effect, people are not randomly assigned to conditions ex. sex, age, major, etc
What is restricted range and how can it be avoided when constructing a response scale?
-Restricted range occurs when responses cluster in only a narrow portion of the scale, and you cannot predict beyond the range of data you have. -It can be avoided by having enough points along the scale (e.g., 7), writing clear question and help text, and using anchors without extreme wording to minimize demand characteristics.
leading question
-a question that prompts or encourages the desired answer. -"Do you support the murder of innocent children through partial- birth abortion?"
double-barreled question
-a single question that asks about more than one issue but allows only one answer -"Do you think the Twilight series is poorly written and unoriginal?"
Independent variable and its levels
-a variable that is manipulated in an experiment -ex. of levels: a "reduce stress" condition vs. an "increase stress" condition
What is a hypothetical construct? Examples?
-abstract concepts that cannot be directly observed or measured. -ex. happiness, love, learning, emotional intelligence
Primary (empirical) source
-authors report results of original research study that they conducted *data
What are some resources available for you to find scholarly articles about psychology?
-conference papers/posters -unpublished manuscripts -scholarly books -theses and dissertations -undergraduate research -abstracts
Types of nonprobability sampling
-convenience sampling -quota sampling -maximum variation sampling -snowball sampling
Popular works
-entertain or educate people not in the field -ex. Wikipedia, NY Times
Threats to validity
-internal: confounds -external: bad samples, artificial situation
What makes a good sample?
-it is representative of the larger population
descriptive research design
-research design in which the primary goal is to describe the variables, but not examine the relationships among variables. -simply describes a sample or population *one variable
correlational design
-research design in which the relationship among two or more variables is examined, but causality cannot be determined. -which behaviors, events, and feelings co-occur with other behaviors, events, and feelings?
Types of probability sampling
-simple random sampling -stratified random sampling -cluster sampling
What does it mean to generalize?
-taking info from one case and applying it to other cases
What is Cronbach's alpha and how is it used in combination with a split-half procedure to determine the internal consistency of a measure?
-a test used to assess the internal consistency of a scale by computing the intercorrelations among responses to scale items -In a split-half procedure, the items are divided into two halves and the correlation between the half scores shows the level of overlap between responses; Cronbach's alpha is equivalent to averaging the estimates from all possible ways of splitting the scale in half.
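(Illustrative sketch only, assuming a small made-up response matrix with rows as respondents and columns as scale items; the data are hypothetical. It computes Cronbach's alpha from item and total-score variances, plus an odd/even split-half correlation with the Spearman-Brown correction.)

import statistics

# Hypothetical data: 5 respondents x 4 Likert-type items
responses = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 2],
    [4, 4, 5, 5],
]

def cronbach_alpha(data):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = len(data[0])
    item_vars = [statistics.variance(col) for col in zip(*data)]
    total_var = statistics.variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def split_half(data):
    # Correlate odd-item totals with even-item totals, then apply the
    # Spearman-Brown correction for the shortened halves.
    odd = [sum(row[0::2]) for row in data]
    even = [sum(row[1::2]) for row in data]
    r = statistics.correlation(odd, even)  # requires Python 3.10+
    return (2 * r) / (1 + r)

print(round(cronbach_alpha(responses), 2))
print(round(split_half(responses), 2))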
What is inter-rater reliability and why is it important?
-the degree of agreement between different raters' independent scores or codes -It is important because high agreement between raters who do not see each other's codes or scores shows that the coding is objective rather than dependent on one observer's judgment.
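(Illustrative sketch only: one simple index of inter-rater agreement, percent agreement, using two hypothetical raters' codes for the same ten observations. Chance-corrected indices such as Cohen's kappa are common alternatives but are not shown here.)

# Hypothetical codes assigned independently by two raters to the same observations
rater_a = ["aggressive", "neutral", "neutral", "aggressive", "neutral",
           "aggressive", "neutral", "neutral", "aggressive", "neutral"]
rater_b = ["aggressive", "neutral", "aggressive", "aggressive", "neutral",
           "aggressive", "neutral", "neutral", "neutral", "neutral"]

# Percent agreement: proportion of observations the two raters coded identically
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
print(f"percent agreement = {agreements / len(rater_a):.0%}")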
What is a manipulation check and why is it important?
-the process of verifying that the participants attended to the manipulation -It is important b/c it is possible that the participants did not read carefully enough to pick up on the manipulation.
requirements for causality
To conclude that variable A influences/(causes) variable B: 1) Correlation: there must be a relationship between A and B. 2) Sequencing: the change in variable A must come before the change in variable B. 3) Ruling out alternative explanations for the relationship.
Steps for research
1) Identify topic 2) Find, read, evaluate past literature 3) Develop a good hypothesis about a variable or a relationship between variables (anything that can take on different values) 4) Choose a research design 5) Plan and conduct your study 6) Analyze data 7) Communicate results 8) Repeat
Arguments against deception:
1) May harm participants (embarrassed, uncomfortable) 2) May harm the field b/c it makes people suspicious of research
What types of things do researchers typically record on a code sheet?
1) Narrative about experience/what you observed 2) checklist for key variables 3) duration of behavior 4) timing: task completion, latency 5) rating scale
APA rules for deception:
1) Necessary and justifiable given potential benefits 2) No physical/emotional pain expected 3) Debriefing occurs ASAP
Four measurement scales
1) Nominal 2) Ordinal 3) Interval 4) Ratio
Demand characteristics that are specific to experimental design
1) Reactivity bias: Bias in responses due to being observed 2) Hawthorne effect: Improvement in performance due to being studied 3) single-blind study: make participant unaware of manipulations/predictions 4) experimenter expectancy: subtle cues can influence participants 5) double-blind study: make participant unaware of manipulations/predictions and researcher unaware of condition 6) deception: disguising a manipulation or measure *If you deceive, you must debrief! 7) placebo: a treatment or substance that in and of itself has no therapeutic effect, such as a sugar pill
Process involved in peer review
1) Scientists study something 2) Scientists write about their results 3) Journal editor receives an article and sends it out for peer review. 4) Peer reviewers read the article and provide feedback to the editor. 5) Editor may send reviewer comments to the scientists, who may then revise and submit the article for further review. If an article does not maintain sufficiently high scientific standards, it may be rejected at this point. 6) If an article finally meets editorial and peer review standards, it is published in a journal.
What are the parts of an APA empirical paper?
1) Title page 2) Abstract 3) Introduction 4) Method 5) Results 6) Discussion 7) References 8) Tables and figures
What types of questions should you avoid putting in a questionnaire and why?
1) Vague questions ("How long have you studied Spanish?") - respondents may interpret them differently 2) Double-barreled questions - one answer cannot cover two issues 3) Barnum questions - everyone answers the same way, so they are uninformative 4) Leading questions - they bias respondents toward the desired answer
Arguments for deception:
1) Essential for creating rare occurrences 2) Essential for eliciting genuine responses
What does it mean for a study (or the relationship between variables) to be valid?
The degree to which the result of the study reflects what it is intended to reflect
What does it mean for a measure to be reliable?
A measure is said to have a high reliability if it produces similar results under consistent conditions
test-retest reliability
a measure of the stability of scores on a scale over time
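(Illustrative sketch only, assuming two hypothetical administrations of the same scale to the same six people; test-retest reliability here is simply the correlation between the two sets of scores.)

import statistics  # statistics.correlation requires Python 3.10+

# Hypothetical scores from the same participants at two points in time
time1 = [10, 14, 9, 16, 12, 11]
time2 = [11, 13, 10, 15, 12, 10]

r = statistics.correlation(time1, time2)
print(f"test-retest reliability r = {r:.2f}")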
placebo
a treatment or substance that in and of itself has no therapeutic effect, such as a sugar pill
simple random sampling
a type of probability sampling in which every single member of the population has an equal chance of being selected for the sample.
cluster sampling
a type of probability sampling in which groups, or clusters, are randomly selected instead of individuals
stratified random sampling
a type of probability sampling that results in the sample representing key subpopulations based on characteristics such as age, gender, and ethnicity
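(Illustrative sketch only, contrasting the three probability sampling types defined above with a made-up population of 100 student IDs tagged with a hypothetical grade-level attribute; the sample sizes are arbitrary.)

import random

# Hypothetical population: 100 students, each tagged with a grade level
population = [(f"S{i}", random.choice(["freshman", "sophomore", "junior", "senior"]))
              for i in range(1, 101)]

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, 10)

# Stratified random sampling: sample within each subpopulation (stratum)
# so key characteristics are represented.
strata = {}
for person in population:
    strata.setdefault(person[1], []).append(person)
stratified = [p for members in strata.values()
              for p in random.sample(members, min(3, len(members)))]

# Cluster sampling: randomly select whole groups (clusters) instead of individuals.
clusters = [population[i:i + 10] for i in range(0, len(population), 10)]
cluster_sample = [p for group in random.sample(clusters, 2) for p in group]

print(len(simple), len(stratified), len(cluster_sample))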
Dependent variable
a variable that is measured in an experiment and is expected to vary or change based on the independent variable manipulation
difference between independent groups (between-subjects) experiment and dependent groups (within-subjects) experiment
between-subjects: experiment in which each participant experiences only one level of the IV. within-subjects: experiment in which the groups are related, in that participants were matched prior to exposure to the IV or in that the participants experience all levels of the IV.
reactivity bias
bias in responses due to being observed
quota sampling
convenience sampling with quotas so that key subgroups are represented
covert vs. overt
covert: observations are made w/o the participant's awareness overt: no attempts are made to hide the observation
deception
disguising a manipulation or measure *If you deceive, you must debrief!
single-blind study
make participant unaware of manipulations/predictions
double-blind study
make participant unaware of manipulations/predictions and researcher unaware of condition
naturalistic vs. contrived
naturalistic: observations that occur in natural environments or situations and do not involve interference by anyone involved in the research contrived: the researcher sets up the situation and observes how participants or subjects respond.
divergent validity
negative or no relationship between two scales measuring different constructs
social desirability bias
participants may respond based on how they want to be perceived or what is socially acceptable
concurrent validity
positive correlation between scale scores and a current behavior that is related to the construct assessed by the scale
predictive validity
positive relationship between scale scores and a future behavior that is related to the construct assessed by the scale
convergent validity
positive relationship between two scales measuring the same or similar constructs
nominal
qualitative differences, categories, types, no ordering. ex. hair type, dog or cat person
interval
quantitative with equally spaced intervals, but no true zero. ex. Likert-type answers, IQ, temperature
ratio
quantitative, has true zero ex. length of something, reaction time, # of years in college
experimental design
research design that attempts to determine a causal relationship by manipulating one variable, randomly assigning participants or subjects to different levels of that manipulated variable, and measuring the effect of that manipulation on another variable.
quasi-experimental design
research design that includes a key characteristic of an experiment, namely, the manipulation of a variable. -no random assignment -no causation
maximum variation sampling
researcher seeks full range of extremes in population
Difference between a sample and a population
sample: a subset of the population from which data are collected. population: the group of people, animals, or archives that you are interested in examining.
What are two types of archival research?
secondary data and records/documents
variable
something that varies in that it has at least two possible values
What does it mean to operationalize a variable?
specifying the concrete, specific way you are measuring a hypothetical construct
How do you make sure your dependent measure is sensitive enough? Why does it matter?
• Prefer quantitative over qualitative measures • Avoid restricted range - widen the scale and watch for ceiling/floor effects. It matters because an insensitive measure may fail to detect real differences produced by the IV.