Research Methods in Psychology MIDTERM STUDYING
Dirty Dozen (pages 292-293)
Reliability
3 types (pg. 120):
Test-retest reliability: the researcher gets consistent results every time the measure is used.
Interrater reliability: consistent results are obtained no matter who measures or observes.
Internal reliability: a study participant gives a consistent pattern of answers, no matter how the researcher phrases the question. (See the test-retest sketch below.)
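A minimal Python sketch of the test-retest idea, assuming hypothetical score lists for the same six people at two sessions; a high Pearson correlation between sessions indicates good test-retest reliability:

```python
# Minimal sketch: test-retest reliability as a Pearson correlation.
# The score lists are hypothetical, for illustration only.
from statistics import correlation  # Python 3.10+

time1 = [12, 18, 9, 15, 20, 11]   # same people, session 1
time2 = [13, 17, 10, 14, 19, 12]  # same people, session 2

r = correlation(time1, time2)  # high r = consistent scores across sessions
print(f"test-retest r = {r:.2f}")
```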
Control group
A group whose level of the independent variable represents "no treatment"; it serves as a baseline against which the treatment groups are compared.
Testing threats
A kind of order effect that occurs with repeated testing: respondents get used to the test, and their scores change. Can be prevented by using only one test (posttest only); comparison groups can also help.
Hypotheses
A specific prediction, derived from a theory, about what should be observed in a study.
Interrupted time-series design
A quasi-experiment that measures participants repeatedly on a dependent variable before, during, and after the "interruption" caused by some event.
Personal experience...
...has no comparison group, and experience is confounded: many things change at once, so personal experience cannot isolate a cause.
One group pretest/posttest
"really bad experiment"
Participant variable
A variable that cannot be assigned or changed; it can only be selected (measured), such as age or gender.
Construct Validity
Concerns how accurately the variables were measured and how well they were operationalized.
The Present/Present Bias
Drawing conclusions based solely on what is present, without looking at what is absent. A good reason why a comparison group is necessary.
Demand characteristics
Cues that lead participants to guess the experimenter's hypothesis and respond the way they think the experimenter wants.
Validity
THE 2 SUBJECTIVE WAYS
1) Face validity: the extent to which a measure looks like a plausible measure of the variable in question; if it looks like it should be a good measure, it has face validity.
2) Content validity: a judgment about whether the measure captures all parts of a defined construct.
THE 4 EMPIRICAL WAYS
1) Predictive validity: the measure is correlated with a relevant outcome in the future.
2) Concurrent validity: the measure is correlated with a relevant outcome right now.
3) Discriminant validity: the measure is less strongly associated with measures of dissimilar constructs.
4) Convergent validity: the measure is strongly associated with measures of similar constructs.
Matched-groups design
Takes care of selection effects: participants are first matched into sets on a relevant variable, and the members of each matched set are then randomly assigned to the levels of the independent variable.
Elliot and colleagues
Tested the effect of the color red on test performance; exposure to red lowered test scores overall.
Crossed factorial design
Every level of one IV is combined with every level of the other IV, making it possible to test whether the effect of one IV depends on the other.
Guidelines for psychologists
The APA ethical principles are on page 95. In addition to the Belmont Report's three principles (beneficence, justice, and respect for persons), the APA principles also contain fidelity and integrity.
Where can i find arguments for a study?
The abstract, introduction, and the discussion
Claim
The argument someone is trying to make.
Independent variable
The manipulated variable.
Marginal Means
The means for each level of one IV, averaging over the levels of the other IV(s). (See the sketch below.)
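A minimal Python sketch of marginal means for a hypothetical 2x2 design; the cell means are made-up numbers, for illustration only:

```python
# Minimal sketch: marginal means in a hypothetical 2x2 design.
# Rows = levels of IV A, columns = levels of IV B; values are cell means.
cells = [[10.0, 14.0],   # A1: B1, B2
         [12.0, 20.0]]   # A2: B1, B2

# Marginal means for IV A: average each row across the levels of IV B.
a_margins = [sum(row) / len(row) for row in cells]        # [12.0, 16.0]
# Marginal means for IV B: average each column across the levels of IV A.
b_margins = [sum(col) / len(col) for col in zip(*cells)]  # [11.0, 17.0]
print(a_margins, b_margins)
```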
Tuskegee syphilis study ethical violations
The men in the study were: harmed (effective treatment was withheld), not treated respectfully (they were deceived and not fully informed), and targeted because they were a disadvantaged group.
Theory-Data Cycle
The most important cycle in science: data are collected to challenge, support, or update a theory. A theory leads to research questions, which lead to a research design; the design leads to hypotheses, which are tested by collecting data.
Operational Definition
The process of turning the concept of interest into a measured or manipulated variable.
Justice
Calls for a fair balance between the people who participate in the research and the people who benefit from it.
Nonequivalent control group interrupted time-series design
Combines the nonequivalent control groups design and the interrupted time-series design: a treatment group and a nonequivalent comparison group are each measured repeatedly before, during, and after the interruption.
Selection effect
Occurs in an experiment when the kinds of participants at one level of the independent variable are systematically different from those at the other level.
Standards of research
pages 94-105
Interrogating association claims
pg 191
Null result problems
pg 305
Types of factorial ANOVAs
pg 326-329
Pretest/posttest designs
Participants are randomly assigned to the levels of the independent variable (at least two groups) and are tested on the key dependent variable twice, before and after exposure. Can fall prey to testing threats (see the dirty dozen).
Translational Research
Represents the dynamic bridge between basic and applied research.
Beneficence
Researchers must take precautions to protect research participants from harm and to ensure their well-being. They must consider the risks and potential benefits to the individuals participating, and also who else may benefit (e.g., the community).
Cronbach's alpha
Returns one number, computed from the average of the inter-item correlations and the number of items on the scale. Used to test for INTERNAL RELIABILITY. For self-report scales, Cronbach's alpha should be at least .70. (See the sketch below.)
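A minimal sketch of the standardized form of Cronbach's alpha, which uses exactly those two ingredients (item count and mean inter-item correlation); the 10-item scale and r = .30 are hypothetical numbers:

```python
# Minimal sketch: standardized Cronbach's alpha.
# alpha = (N * r_bar) / (1 + (N - 1) * r_bar), where N is the number of
# items and r_bar is the average inter-item correlation.
def cronbach_alpha(n_items: int, mean_inter_item_r: float) -> float:
    return (n_items * mean_inter_item_r) / (1 + (n_items - 1) * mean_inter_item_r)

# Hypothetical 10-item self-report scale with average inter-item r = .30:
print(round(cronbach_alpha(10, 0.30), 2))  # 0.81 -> clears the .70 rule of thumb
```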
Quasi-Experiment
Similar to an experiment, but without full experimental control; in particular, participants are not randomly assigned to conditions.
A producer is
Someone who conducts research and produces new findings.
a consumer is
Someone who reads and applies other people's research findings.
An empiricist means to base one's conclusions on...
systematic observations
Scientists collect their data to...
test, change, or update their theories
Harlow's study
Tested the contact comfort theory against the cupboard theory using baby monkeys; the contact comfort theory received the most support.
Conceptual definition
The definition of a variable at the abstract, theoretical level, in terms of concepts such as "depression" or "debt stress."
Factorial design
two or more independent variables.
Systematic variability
When the results vary in a pattern rather than randomly; in an experiment, systematic variability from an unintended source is a confound.
Within groups design
Requires fewer people, but can have carryover effects.
Independent groups design
Requires more people; also known as a between-subjects design.
What causes biased samples?
Sampling those who are easy to contact (convenience sampling)
Sampling only those you are able to contact
Sampling those who invite themselves (self-selection)
3 common types of measures
Self-report measures, observational measures, and physiological measures.
Variable
Something that is measured, and possibly manipulated. MUST have at least two values
What makes a good theory?
Support from data: multiple studies conducted, with a variety of methods, addressing different aspects of the theory.
It is falsifiable: it must be possible to prove it wrong.
It is parsimonious: all other things being equal, the simplest explanation is best (but don't oversimplify).
It DOES NOT NEED TO BE PROVEN RIGHT: a good theory is based on strong evidence, but that does not mean it is 100% correct.
Carryover effects
A.k.a. order effects or practice effects. Occur in within-groups designs and act as a confound: practice, fatigue, boredom, or some other contamination carries over from one condition and affects responses in the next.
Placebo group
Also known as a placebo control group: the group that appears to receive the treatment but actually receives an inert version of it.
Manipulation checks
an extra dependent variable that experimenters can insert to see how well the experimental manipulation worked.
Main effect
The overall effect of one independent variable on the dependent variable, averaging over the levels of the other independent variable(s).
Overconfidence
1. Makes us trust our own reasoning too much. 2. Makes it hard to use the theory-data cycle, because confident people insist a claim needs no testing since they already "know" the outcome.
Biases when looking at research
1. Thinking the "easy" way: An example of this is when we accept a conclusion simply because it "makes sense" 2. Thinking what we want to think: An example of this is seeking out specific evidence that supports our theory.
A theory
A statement, or set of statements, that describes general principles about how variables relate to one another.
Partial counterbalancing
Only some of the possible condition orders are represented; the Latin square method is one type. (See the sketch below.)
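A minimal sketch of one common Latin square construction, a cyclic rotation of four hypothetical conditions; each condition appears exactly once at each serial position across the rows, so only 4 of the 4! = 24 possible orders are needed:

```python
# Minimal sketch: a cyclic Latin square for partial counterbalancing.
# The four condition labels are hypothetical placeholders.
conditions = ["A", "B", "C", "D"]
n = len(conditions)

# Row i starts at condition i and wraps around.
latin_square = [[conditions[(row + pos) % n] for pos in range(n)]
                for row in range(n)]
for order in latin_square:
    print(order)  # each condition occupies each serial position exactly once
```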
Response sets
A type of shortcut respondents can take when answering a survey. Faking bad is not a response set. Examples: acquiescence (yea-saying) and fence-sitting (always picking the middle, neutral option).
Constant
A variable that could potentially vary, but only has one level in the study in question.
Maturation threats
A change in behavior that emerges more or less spontaneously over time as participants adapt or mature. Can be prevented with comparison groups.
Empiricism
Also called empirical method or empirical research, is the approach of collecting data and using it to develop, support, or challenge a theory. This involves using evidence from the senses, or from instruments that enhance our senses. Empirical evidence is also independently verifiable by other observers or scientists.
The pop-up principle
Also known as the availability heuristic, which is when the things that come to mind easily control our thoughts.
Confederate
An actor playing a specific role for the experimenter.
Confound
An alternate explanation for the results of a study (3rd variable).
History threat
An external event occurs to everyone in the treatment group at the same time as the treatment. Can be prevented with a comparison group.
Conditions
Another word for "levels" of a variable.
Basic-Applied Research Cycle
Applied Research: Is done with a practical problem in mind; the researchers hope that their findings will be directly applied to the solution of that problem in a particular real-world context. Basic Research: Is not intended to address a specific, practical problem. The goal of basic research is simply to enhance the general body of knowledge. Applied and basic research questions not only overlap, but often influence each other as well.
Confirmatory hypothesis testing
Asking only questions that would confirm your hypothesis, rather than questions that could disprove it.
Random assignment
Avoids selection effects by giving every participant an equal chance of being placed in any group. (See the sketch below.)
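A minimal sketch of random assignment, assuming a small hypothetical participant list and two conditions:

```python
# Minimal sketch: random assignment of hypothetical participants to two
# conditions; every participant has an equal chance of landing in either
# group, which guards against selection effects.
import random

participants = ["p01", "p02", "p03", "p04", "p05", "p06"]
random.shuffle(participants)  # randomize the order in place
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]
print("treatment:", treatment)
print("control:  ", control)
```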
Ceiling effects and floor effects
Due to weak manipulations or insensitive measures (too easy or too hard).
Noise
Error variance, unsystematic variability. Can come from measurement error, individual differences, and situation noise
Pilot study
A small preliminary study: exposing people to the manipulation and then measuring, often to confirm that the manipulation works before running the full experiment.
Socially desirable responding
Faking good; the opposite shortcut is faking bad.
The three claims
Frequency, association, and causal. Anecdotal claims are NOT frequency claims. (pgs. 57-74; LOOK AT THE INSIDE OF THE FRONT COVER.)
Power
The ability of a study to show significant results when something truly is going on in the population. Increased by within-groups designs, among other things. (See the sketch below.)
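Power can also be estimated by simulation, as in this minimal sketch; the effect size, group size, and rough critical value are hypothetical choices, and the point is simply that power is the proportion of simulated studies that detect a real effect:

```python
# Minimal sketch: estimating power by simulating many two-group studies
# with a real (hypothetical) effect and counting how often it is detected.
import random
from statistics import mean, stdev

def one_study(n=30, effect=0.5):
    a = [random.gauss(0.0, 1.0) for _ in range(n)]     # control scores
    b = [random.gauss(effect, 1.0) for _ in range(n)]  # treatment scores
    sp = ((stdev(a) ** 2 + stdev(b) ** 2) / 2) ** 0.5  # pooled SD (equal n)
    t = (mean(b) - mean(a)) / (sp * (2 / n) ** 0.5)    # two-sample t statistic
    return abs(t) > 2.0  # rough two-tailed critical value for alpha = .05

power = mean(one_study() for _ in range(2000))  # proportion "significant"
print(f"estimated power ~ {power:.2f}")
```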
Probabilistic
A study's inferences are not expected to explain every case (personal experience can supply exceptions), only the majority of cases.
Review journal articles
Provide a summary of all the research that has been done in one research area. Meta-analysis: a pooling of results from many studies that summarizes the magnitude of a relationship (effect size). Look at pages 39 and 40.
Nonequivalent control groups design
Quasi-experimental study. One treatment group and one comparison group. No random assignment
Empirical Journal articles
Report, for the first time, the results of an empirical research study. Contains: abstract, introduction, method, results, discussion, and reference list.
Controlled Research
Is better than personal experience. Example: the catharsis test, where participants received bad feedback and then used a punching bag, compared against a control condition.
The journal-to-journalism cycle
Journalism: includes everything but scientific journals (magazines, newspapers, internet sites, etc.). Benefits and disadvantages: many people can hear of the study, but the story can be "telephoned," changing severely as it passes along. Ask: Is the story important? Is the story accurate? Example: the Mozart Effect, a case where journalists misrepresented and exaggerated research findings.
All other quasi-experimental and small-N material is in chapter 12.
Look at the detailed objectives in module 12 for more notes.
Scales of measurement
Nominal/categorical: no numbers, just categories (qualitative), e.g., gender, sex.
Ordinal: rank order without equal spacing, e.g., race finishing order, class ranking.
Interval: equal intervals but no true zero, e.g., IQ, temperature NOT in Kelvin.
Ratio: equal intervals with a true zero, e.g., weight, income.
Dependent variable
Not manipulated; measured to see whether it depends on the independent variable.
Can happen to any experiment
Observer bias, demand characteristics, and placebo effects. Can be prevented with comparison groups, double-blind designs, and double-blind placebo controls.
Biases when doing observational research
Observer bias: a potential threat to construct validity in which researchers record what they want or expect to see, not what they actually see; the observer's biases can affect the outcome of the study.
Observers might see what they expect (attending to only part of what happens).
Observers can affect what they see (example: Clever Hans).
The observed might react to being watched (observer effects / reactivity).
Solutions: hide (unobtrusive observations); wait it out (stay so long, doing nothing, that participants forget you are there); measure the behavior's results (its traces, not the behavior itself).
Instrumentation threats
Occurs when the measuring instrument changes over time. Can be prevented by using a posttest-only design, or by using the same, consistently calibrated instruments and measures at every time point.
Nested factorial design
One IV is primary, and the levels of the other IV are nested within its levels rather than fully crossed.
Construct validity of surveys and polls
Open-ended questions: allow respondents to give rich, high-quality information, but the answers are often hard to code.
Forced-choice format: people give their opinion by picking the best of two or more answers.
Likert scale: strongly agree, agree, neither agree nor disagree, disagree, strongly disagree (variants are Likert-type).
Semantic differential format: two opposite words with numbers in between.
Leading questions: word choice pushes respondents toward an answer; bad.
Double-barreled questions: ask two questions at the same time; bad.
Double negatives: confusing; bad.
Question order: earlier questions can influence answers downstream.
(pgs. 146-151)
Ethical questions
Participants must give informed consent and be debriefed afterward.
Types of operationalization
Physiological, self-report, observational
How to get a representative sample
Probability sampling: drawing the sample at random from the population of interest.
Simple random sampling: the most basic form of probability sampling.
Cluster sampling: randomly select clusters of people, then include the people in those clusters.
Multistage sampling: randomly select clusters, then randomly sample people within each cluster.
ALL OTHER SAMPLING: pages 168-172.
Comparison group
These are essential for studies. Enables the researcher to know what will happen with and without the manipulation. Personal experience often lacks a comparison group.
Counterbalancing
Used in within-groups designs to avoid order effects: the levels of the independent variable are presented in different orders to different participants. Full counterbalancing uses every possible order. (See the sketch below.)
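A minimal sketch of full counterbalancing for three hypothetical conditions; every possible presentation order is generated, so order effects are spread evenly across participants:

```python
# Minimal sketch: full counterbalancing enumerates every possible
# order of the (hypothetical) conditions.
from itertools import permutations

conditions = ["A", "B", "C"]
orders = list(permutations(conditions))  # 3! = 6 orders
for i, order in enumerate(orders, 1):
    print(f"order {i}:", " -> ".join(order))
```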
Attrition threat
When people drop out of the study before it ends. Can be addressed by removing the dropped participants' data (including their pretest scores) and checking whether the dropouts differed systematically from those who stayed.
Unsystematic variability
When results vary randomly
The Peer-Review Cycle
When scientists want to tell the scientific world about the results of their research, whether basic or applied, they write a paper and submit it to a scientific journal, where it is peer-reviewed. Peer review helps ensure that published articles contain novel, well-done studies.
Regression to the mean
When a group selected for its extreme scores earns scores closer to the population mean at the next measurement. Can be prevented with comparison groups and careful inspection of the results.
Null effect
When there is no significant covariance between the two variables.
Interaction
Whether the effect of the original independent variable depends on the level of another independent variable; a "difference in differences." (See the sketch below.)
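A minimal sketch of the difference-in-differences idea, reusing the hypothetical 2x2 cell means from the marginal-means example above:

```python
# Minimal sketch: an interaction as a "difference in differences"
# in a hypothetical 2x2 design.
cells = {("A1", "B1"): 10.0, ("A1", "B2"): 14.0,
         ("A2", "B1"): 12.0, ("A2", "B2"): 20.0}

# Simple effect of B at each level of A:
effect_at_a1 = cells[("A1", "B2")] - cells[("A1", "B1")]  # 4.0
effect_at_a2 = cells[("A2", "B2")] - cells[("A2", "B1")]  # 8.0

# If the two simple effects differ, the effect of B depends on A.
print("difference in differences:", effect_at_a2 - effect_at_a1)  # 4.0
```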
Concurrent-measures design
Within-groups, participants are exposed to all levels of the independent variable at roughly the same time and a single attitudinal or behavioral preference is the dependent measure.
Repeated-Measures design
Within-groups. Participants are measured on the dependent variable more than once, that is, after exposure to each level of the independent variable.
Data
a set of observations
Design confound
a.k.a. a confound, refers to a second variable that happens to vary systematically along with the intended independent variable and therefore is an alternative explanation of the results.
Treatment groups
a.k.a. experimental groups
Posttest-only design
A.k.a. equivalent groups, posttest-only design: participants are randomly assigned to independent variable groups, then tested on the dependent variable once.
Institutional review board (IRB)
Contains 5 people at a minimum:
1 scientist
1 person with academic interests outside the sciences
1 community member who has no ties to the institution
2 more people
If a prison study is conducted, a prisoner must be part of the group.
An effective IRB will not permit research that violates people's rights or poses unreasonable risk, and it will not permit research that lacks a sound rationale. It should not prevent controversial, but still ethical, research. In the ideal case, the IRB attempts to balance the welfare of research participants against the researcher's goal of contributing important knowledge to the field.