PSYCH 310 FINAL

Describe the role of the peer-review process in science.

Three to four experts review a journal submission and address its virtues and flaws. The process is rigorous, and peer reviewers are kept anonymous so they can speak freely and give an honest assessment. The process even continues after publication: other scientists can cite the article and do further work on the same subject.

Explain why representative samples may be especially important for many frequency claims.

A frequency claim is a claim about an entire population, so the sample must not be biased; because the claim is generalized so broadly, an unrepresentative sample would distort the estimate.

Hypothesis

A prediction: the specific outcome the researcher expects to observe in a study if the theory is accurate.

Articulate the difference between mediators, third variables, and moderating variables.

Moderator: A-->B, but when C is present the A-->B relationship becomes stronger, weaker, or disappears.
Mediator: A-->B because of C; that is, A-->C-->B.

Mediator: Variable 1 influences variable 2 because of the mediator; the relationship is not direct.
Ex. Low income leads to higher divorce rates because making less money is stressful.
Ex. Ice cream sales lead to increased shark attacks because sharks like ice cream.

Moderator: Variable 1 influences variable 2 differently for different levels of the moderator; the relationship is direct.
Ex. Lower income is more strongly associated with divorce rates in rural than in urban communities.
Ex. Ice cream sales lead to more shark attacks on children but do not influence the number of attacks on adults.

Third Variable (Confound): Variable 1 and variable 2 are really related to the third variable, not to each other; the relationship is not direct.
Ex. Lower income and higher divorce rates are both really related to education levels.
Ex. Increases in ice cream sales and shark attacks are really related to the temperature outside.

Applied research

Applied research is done with a practical problem in mind; the researchers conduct their work in a particular real-world context. An applied research study might ask, for example, if a school district's new method of teaching language arts is working better than the former one. It might test the efficacy of a treatment for depression in a sample of trauma survivors. Applied researchers might be looking for better ways to identify those who are likely to do well at a particular job, and so on.

List three ways psychologists typically operationalize variables: self-report, observational, and physiological.

Self-report measure: A method of measuring a variable in which people answer questions about themselves in a questionnaire or interview.
Observational measure: A method of measuring a variable by recording observable behaviors or physical traces of behaviors. Also called behavioral measure.
Physiological measure: A method of measuring a variable by recording biological data.

Data

A set of observations. If the data match the theory's hypothesis, they strengthen the researcher's confidence in the theory; when they don't match the hypothesis, that indicates the theory needs to be revised or the research design needs to be improved.

Theory

Set of statements that describes general principles about how variables relate to one another

Describe how the procedures for independent-groups and within-groups experiments are different. Explain the pros and cons of each type of design.

Within-groups design: An experimental design in which each participant is presented with all levels of the independent variable, so participants are compared with themselves. Also called within-subjects design. Pros: fewer participants are needed and individual differences are held constant. Cons: order effects (practice, fatigue, carryover) can threaten internal validity, so counterbalancing is usually needed.
Independent-groups design: An experimental design in which different groups of participants are exposed to different levels of the independent variable, such that each participant experiences only one level. Also called between-subjects design or between-groups design. Pros: no order effects, and participants are less likely to guess the hypothesis from seeing every condition. Cons: more participants are required, and individual differences between groups add noise.

Give examples of how question wording can change the results of a survey or poll.

Problems with question wording:
Leading questions: when questions are worded in a non-neutral manner, they can push participants in a certain direction.
Double-barreled questions: a question that asks two things at once; participants don't know which one to answer.
Double negatives: tax working memory and are difficult to answer; some verbs are implicitly negative, like withdraw and decrease, and every added negative makes the question harder.
Question order: early questions can affect later responses; try different versions of the survey with different question orders.
Other problems: overly complex wording, questions or answer options that are too long, not all possible answers being available, and answer options that are not mutually exclusive.

Frequency Claims

How often something happens, how many there are, or what proportion; essentially, the claim counts or measures a single variable.

Interrogate the external validity of an association claim by asking to whom the association can generalize.

how was the sample selected? Is it generalizable?

Understand how the correlation coefficient, r, represents strength and direction of a relationship between two quantitative variables.

The sign of r (positive or negative) shows the direction of the relationship; a larger absolute value means greater strength (unless the relationship is curvilinear, in which case r does not describe it well). Conventionally, about .1 is small, about .3 is medium, and about .5 is large. The correlation coefficient is an effect size because it shows the strength of a relationship.
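A minimal sketch (not from the course materials) of how r captures direction and strength, using invented variable names and simulated data with SciPy's pearsonr:

```python
# Hypothetical data: hours slept and a mood score (names and data are invented).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
hours_slept = rng.normal(7, 1, size=100)
mood_score = 0.5 * hours_slept + rng.normal(0, 1, size=100)

r, p = pearsonr(hours_slept, mood_score)
print(f"r = {r:.2f}, p = {p:.3f}")  # the sign gives direction, |r| gives strength
# Rough benchmarks: |r| around .1 small, around .3 medium, around .5 large
```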

Explain how researchers would apply the principle of justice in selecting research participants.

It means that researchers should first ensure that the participants involved in the study are representative of the kinds of people who would also benefit from its results. (p. 97)

Basic Research

Basic research, in contrast, is not intended to address a specific, practical problem; the goal is to enhance the general body of knowledge. Basic researchers might want to understand the structure of the visual system, the capacity of human memory, the motivations of a depressed person, or the limitations of the infant attachment system. Basic researchers do not just gather facts at random; in fact, the knowledge they generate may be applied to real-world issues later on.

Identify three types of correlations in a longitudinal correlational design: cross-sectional correlations, autocorrelations, and cross-lag correlations.

Cross-sectional correlations: correlations between the two different variables measured at the same time point.
Autocorrelations: correlations of a variable with itself across time.
Cross-lag correlations: correlations of one variable at Time 1 with the other variable at Time 2.
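To make the three correlation types concrete, here is a small hypothetical two-wave dataset in pandas; the column names (overpraise/narcissism at two time points) are invented, echoing the example used later in this set:

```python
# Invented two-wave dataset: parental overpraise and child narcissism
# measured at Time 1 and Time 2.
import pandas as pd

df = pd.DataFrame({
    "overpraise_t1": [3, 5, 2, 4, 5, 1, 3, 4],
    "narcissism_t1": [2, 4, 1, 3, 5, 1, 2, 4],
    "overpraise_t2": [3, 4, 2, 4, 5, 2, 3, 5],
    "narcissism_t2": [3, 5, 1, 4, 5, 2, 3, 5],
})

# Cross-sectional: two different variables at the same time point
print(df["overpraise_t1"].corr(df["narcissism_t1"]))
# Autocorrelation: the same variable with itself across time
print(df["overpraise_t1"].corr(df["overpraise_t2"]))
# Cross-lag: one variable at Time 1 with the other variable at Time 2
print(df["overpraise_t1"].corr(df["narcissism_t2"]))
```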

Describe the debriefing process and the goals of debriefing.

Debriefing happens after the study. When a study involves deception, debriefing is ethically required; even non-deception studies should probably involve debriefing. Its goals are to explain the study's purpose (including any deception), undo any harm or discomfort, and leave participants informed about the research.

Describe what deception is, and explain when deception is considered permissible in a study.

Sometimes psychologists deceive participants. There are two types of deception: withholding information (omission) and actively lying (commission). Deception is permissible only when researchers can show that they cannot conduct the research without it.

Explain how to increase the construct validity of questions by wording them carefully and by avoiding leading questions, double-barreled questions, and double negatives.

Leading questions: word questions neutrally, or word the question in different ways and compare the results to see whether wording affects people's responses; if it does, report results separately for each wording.
Double-barreled questions: ask each question separately instead of placing both in the same sentence.
Double negatives: should be avoided; asking the question both ways can also be used to test for internal consistency.

Explain ways to improve the construct validity of observations by reducing observer bias, observer effects, and target reactivity.

Three things lead to invalidity:
Observer bias: observers see what they expect to see.
Observer effects: the observer's bias changes the actual behavior of the people being observed.
Reactivity: when people know they are being watched, they don't act as naturally as you would want them to.
Ways to reduce these problems: use a masked (blind) design in which observers are unaware of the hypothesis; be unobtrusive (observers hide); wait it out so participants get used to the observers' presence before observation begins; or measure behavior through unobtrusive data, such as physical traces left behind.

Describe how observational techniques for measurement differ from survey techniques.

Observational measures are often used for frequency claims; they are richer and more accurate than polls, more direct, and avoid biases that come from participants' self-reports.

Explain ways to increase the construct validity of questions by preventing respondent shortcuts (such as yea-saying), biases (such as trying to look good), or simple inability to report.

Yea-saying/nay-saying: include reverse-worded items; this slows people down so they actually read the statements.
Fence sitting: use a forced-choice format.
Social desirability (trying to look good): use anonymous surveys, filler items, or computerized tests.

Association

An association claim is about two variables, not cause and effect: the two variables tend to change together, but we aren't sure why. Both variables are measured, and the claim uses non-causal language. Some variables are more likely to be associated than others. Associations can be positive, negative, zero, or curvilinear (like a little U: as one variable goes up, the other goes up and then comes back down again). We would prefer to make causal claims, but that's not always possible; an association claim is still valuable because it allows us to make PREDICTIONS.

- Interrogate the construct validity of the measured variable in an experiment.

Interrogate the scale or test used to measure the variable: has anyone else used it, and is it reliable and valid?

Empiricism:

Using evidence from the senses, or from instruments that assist the senses, as the basis for conclusions.

Statistical validity

The extent to which the study's statistical conclusions are accurate and reasonable. Type 1 error: concluding there is an effect when there isn't one. Type 2 error: concluding there is no effect when there really is one.

Internal validity

Was the study designed in a way that eliminates alternative explanations (confounds)? Is this actually an experiment in which a variable was manipulated?

Identify and interpret data from a multiple-regression table (think beta) and explain, in a sentence, what each coefficient means.

Look at each beta value and whether it is statistically significant. Each beta describes the relationship between that predictor and the criterion variable when the other predictors are controlled for; betas are relative to the other variables in the model and cannot be compared across studies.
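As an illustration only, a coefficient table like this can be produced with statsmodels; the predictor names and data are made up, and each coefficient is read as the predicted change in the criterion per unit of that predictor, holding the other predictors constant:

```python
# Hedged sketch: fitting a multiple regression and reading the coefficient table.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "deep_talk": rng.normal(size=200),   # invented predictor
    "income": rng.normal(size=200),      # invented predictor
})
df["well_being"] = 0.4 * df["deep_talk"] + 0.1 * df["income"] + rng.normal(size=200)

X = sm.add_constant(df[["deep_talk", "income"]])  # predictors plus intercept
model = sm.OLS(df["well_being"], X).fit()
# Each coefficient: change in the criterion per one-unit change in that
# predictor, controlling for the other predictors; check its p-value too.
print(model.summary())
```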

- Identify variables and distinguish a variable from its levels (or values).

Variables are the core unit of psychological research; a variable is something that varies, so it must have at least two levels or values. A headline stating "shy people are better at reading facial expressions" has two variables: shyness (whose levels are "more shy" and "less shy") and the ability to read facial expressions (whose levels are "more skilled" and "less skilled"). In contrast, in a study of fathers, gender would not be a variable because it has only one level: every father is male.

Explain the value of pattern and parsimony in research.

Pattern and parsimony means combining results from a variety of research questions: many studies are done to address the same question, and the simplest explanation that accounts for the whole pattern of data is preferred.

Consider why journalists might prefer to report single studies rather than parsimonious patterns of data.

-It's easier to understand a single study and easier to write about one study instead of many studies

Identify a mediation hypothesis and sketch a diagram of the hypothesized relationship. Describe the steps for testing a mediation hypothesis.

We know there's an association between having deep conversations and feelings of well-being (see Chapter 8). Researchers might next propose a reason, a mediator of this relationship. One likely mediator could be social ties: deeper conversations might help build social connections, which in turn can lead to increased well-being. They would propose an overall relationship, c, between deep talk and well-being. However, this overall relationship exists only because there are two other relationships: a (between deep talk and social ties) and b (between social ties and well-being).
1. Test for relationship c. Is deep talk associated with well-being? (If it is not, there is no relationship to mediate.)
2. Test for relationship a. Is deep talk associated with the proposed mediator, strength of social ties? Do people who have deeper conversations actually have stronger ties than people who have more shallow conversations? (If social tie strength is the aspect of deep talk that explains why deep talk leads to well-being, then, logically, people who have more meaningful conversations must also have stronger social ties.)
3. Test for relationship b. Do people who have stronger social ties have higher levels of well-being? (Again, if social tie strength explains well-being, then, logically, people with stronger social connections must also have higher well-being.)
4. Run a regression test, using both strength of social ties and deep talk as predictor variables to predict well-being, to see whether relationship c goes away. (If social tie strength is the mediator of relationship c, the relationship between deep talk and well-being should drop when social tie strength is controlled for. Here we would be using regression as a tool to show that deep talk was associated with well-being in the first place because social tie strength was responsible.)
Because mediation hypotheses are causal claims, a fifth important step establishes temporal precedence:
5. Mediation is definitively established only when the proposed causal variable is measured or manipulated first in the study, followed some time later by measurement of the mediator and the outcome.
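A rough sketch of steps 1-4 as regressions; the variable names deep_talk, social_ties, and well_being and the simulated data are invented for illustration, following the regression logic described above rather than any specific course code:

```python
# Hypothetical mediation check: deep_talk -> social_ties -> well_being.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
deep_talk = rng.normal(size=n)
social_ties = 0.6 * deep_talk + rng.normal(size=n)   # path a (simulated)
well_being = 0.5 * social_ties + rng.normal(size=n)  # path b (simulated)
df = pd.DataFrame({"deep_talk": deep_talk,
                   "social_ties": social_ties,
                   "well_being": well_being})

c = smf.ols("well_being ~ deep_talk", df).fit()                       # step 1: path c
a = smf.ols("social_ties ~ deep_talk", df).fit()                      # step 2: path a
b = smf.ols("well_being ~ social_ties", df).fit()                     # step 3: path b
c_prime = smf.ols("well_being ~ deep_talk + social_ties", df).fit()   # step 4

print(c.params["deep_talk"], c_prime.params["deep_talk"])
# If deep_talk's coefficient shrinks toward zero once social_ties is
# controlled for, the pattern is consistent with mediation.
```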

Define dependent variables and predictor variables in the context of multiple-regression data.

The first step is to choose the variable the researchers are most interested in understanding or predicting; this is known as the criterion variable, or dependent variable.
The rest of the variables measured in a regression analysis are called predictor variables, or independent variables.

Interrogate the construct validity of a manipulated variable in an experiment and explain the role of manipulation checks and theory testing in establishing construct validity.

Manipulation check: an extra dependent variable that researchers can insert into an experiment to help them quantify how well an experimental manipulation worked.
Pilot study: a simple study, using a separate group of participants, completed before (or after) the primary study to confirm the effectiveness of the study's manipulations.
Theory testing: researchers can collect additional data to show that the results support a theory, not just that a result was found.

Empirical inquiry

1. Act as empiricists in their investigations: systematically investigate the world.
2. Test theories through research and, in turn, revise theories based on the resulting data.
3. Take an empirical approach: test why, when, and for whom an effect works, then make the work public.

Good theories

1. Supported by data.
2. Falsifiable.
3. Parsimonious: theories are supposed to be simple; if two theories explain the data equally well, most scientists will opt for the simpler, more parsimonious theory.
Theories don't PROVE anything; data support a theory or are consistent/inconsistent with it, and a theory can't be proven or disproven by one study alone. Consider the weight of the evidence for or against it.

Interrogate the statistical validity of an association claim, asking about features of the data that might distort the meaning of the correlation coefficient, such as outliers in the scatterplot, effect size, and the possibility of restricted range (for a lower-than-expected correlation). When the correlation coefficient is zero, inspect the scatterplot to see if the relationship is curvilinear.

1. What is the effect size?
2. Is there restriction of range?
3. Are there outliers? (See the sketch below.)
4. Could the relationship be curvilinear?
5. Is the correlation statistically significant?
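A quick, hypothetical demonstration of why the scatterplot should be inspected: a single extreme outlier can change r dramatically (the data are simulated, not from any study):

```python
# Simulated illustration: one outlier inflates the correlation coefficient.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
x = rng.normal(size=30)
y = rng.normal(size=30)           # essentially unrelated variables

print(pearsonr(x, y)[0])          # r near zero

x_out = np.append(x, 10)          # add one extreme case at (10, 10)
y_out = np.append(y, 10)
print(pearsonr(x_out, y_out)[0])  # r jumps dramatically because of one point
```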

Recognize the points in the APA's Ethical Standard 8 (the standard that most closely applies to research in psychology).

1. Institutional review boards
2. Informed consent
3. Deception
4. Debriefing
5. Research misconduct
6. Animal research

Define three forms of research misconduct, explaining why each is considered a breach of professional ethics and a violation of the empirical method.

Research misconduct occurs when a researcher fabricates or falsifies data, or plagiarizes information or ideas within a research report. Fabrication invents data that were never collected; falsification manipulates or selectively omits real data; plagiarism presents others' words or ideas as one's own. Each is a breach of professional ethics and a violation of the empirical method, because conclusions are supposed to rest on honest, complete observations. The misconduct must be committed intentionally, and the allegation must be proven by sufficient evidence. The definition of misconduct can also extend to breaches of confidentiality and authorship/publication violations.
Whistleblowers, those reporting the misconduct, are obligated to act, yet may face serious consequences such as reduction in research support, ostracism, lawsuits, or termination. Institutions should have a procedure in place to investigate and report findings of misconduct to the Office of Research Integrity (ORI) and to protect both whistleblowers and the accused until a determination is made.
Researchers found guilty of misconduct can lose federal funding, be restricted to supervised research, or lose their jobs, so thorough investigation of an allegation is vital. Despite numerous allegations of misconduct, true misconduct is confirmed only about one time in ten thousand allegations.

Explain why many psychologists use animals in research, and describe the role of an institutional animal care and use committee (IACUC) and the Animal Welfare Act in protecting the welfare of animals in research.

Strict federal guidelines (including the Animal Welfare Act) govern animal research. Each institution has an IACUC (Institutional Animal Care and Use Committee) that reviews animal research, following the Guide for the Care and Use of Laboratory Animals. The three Rs:
Replacement: find alternatives to animals where possible.
Refinement: modify procedures to reduce animal distress.
Reduction: use as few animals as possible.

Identify verbs that signal causal claims versus association claims.

Association: Uses non-causal language (smoking is related to cancer, intelligence is associated with baldness) Causal: Uses verbs that imply causation (smoking leads to cancer; intelligence increases your odds of being bald)

- Describe at least five ways intuition is biased. (pg. 32)

Being swayed by a good story: accepting a conclusion just because it makes sense.
Being persuaded by what comes easily to mind (the AVAILABILITY HEURISTIC): things that pop up easily in our minds, especially memorable things, tend to guide our thinking.
Failing to think about what we cannot see (the PRESENT/PRESENT BIAS): failing to consider appropriate comparison groups (or what is absent); you notice the times when an event and its outcome coincide but not the times when they don't.
Focusing on the evidence we like best (the CONFIRMATION BIAS): the tendency to look only at information that agrees with what we already believe.
Being biased about being biased (the BIAS BLIND SPOT): the belief that biases do not apply to us and that we are unlikely to fall prey to them.

Describe the difference between biased and unbiased sampling.

Biased sample: A sample in which some members of the population of interest are systematically left out, and therefore the results cannot generalize to the population of interest. Also called an unrepresentative sample. Unbiased sample: A sample in which all members of the population of interest are equally likely to be included (usually through some random method), and therefore the results can generalize to the population of interest. Also called a representative sample.

Explain how multiple-regression designs are conducted.

Multiple regression can help rule out some third variables, thereby addressing some internal validity concerns. It tests many variables instead of just two. Using it allows researchers to evaluate whether a relationship between two key variables still holds when they control for another variable (signaled by phrases such as "controlling for" or "taking into account").

Classify measurement scales as categorical or quantitative; further classify quantitative variables as ratio, interval, and ordinal.

Categorical: a nominal scale; the levels are categories.
Quantitative: the levels are meaningful numbers.
Ratio: equal intervals between the units and a meaningful zero that indicates "none" of the quantity (e.g., height: zero inches means no height at all).
Interval: equal intervals between the units but no meaningful zero (e.g., temperature in degrees Fahrenheit or Celsius).
Ordinal: ranked order; the intervals between ranks are not necessarily equal.

- Describe a variable both as a conceptual variable and as an operational definition.

Conceptual variable: A variable of interest, stated at an abstract, or conversational, level; the construct the researcher is trying to study.
Operational definition: The specific way in which a concept of interest is measured or manipulated as a variable in a study. Also called operationalization.

Explain the difference between concurrent-measures and repeated-measures designs.

Concurrent-measures design: An experiment using a within-groups design in which participants are exposed to all the levels of an independent variable at roughly the same time, and a single attitudinal or behavioral preference is the dependent variable. Repeated-measures design: An experiment using a within-groups design in which participants respond to a dependent variable more than once, after exposure to each level of the independent variable.

Describe counterbalancing and explain its role in the internal validity of a within-groups design.

Counterbalancing: In a repeated-measures experiment, presenting the levels of the independent variable to participants in different sequences to control for order effects. See also full counterbalancing, partial counterbalancing. Full counterbalancing: A method of counterbalancing in which all possible condition orders are represented. Partial counterbalancing: A method of counterbalancing in which some, but not all, of the possible condition orders are represented.
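A small illustrative sketch (the condition labels A, B, C are hypothetical) of generating condition orders for full versus partial counterbalancing:

```python
# Generating condition orders for counterbalancing.
import itertools
import random

conditions = ["A", "B", "C"]

# Full counterbalancing: every possible order is represented (3! = 6 orders)
full_orders = list(itertools.permutations(conditions))
print(full_orders)

# Partial counterbalancing: only some orders are used, e.g. a random subset
random.seed(0)
partial_orders = random.sample(full_orders, k=3)
print(partial_orders)
```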

Describe the differences between empirical journals and popular journalism; describe the goals of each format and give examples of ways that journalists can write better stories about scientific news.

Empirical journals: written by scientists, submitted to a scientific journal, and PEER-REVIEWED; many experts review the submission, so the information is reliable.
Popular journalism: written by journalists or laypeople, meant to reach the general public; easy to access, and understanding the content doesn't require specialized education.
Journalists can write better stories by striving for transparency and asking: Is the story important? Is the story accurate? The best thing to do is find the original source and go from there.

Consider times when an unrepresentative sample may be appropriate for a frequency claim.

Ex 1: Zappos.com headline, "61% said this shoe felt true to size." You can be pretty sure the people who rated the fit of these shoes are self-selected and therefore don't represent all the people who own that model, BUT their opinions about the fit of the shoes might generalize.
Ex 2: Say a driver uses the Waze navigation app to report heavy traffic on a specific highway. This driver is not a randomly selected sample of drivers on that stretch of road; however, traffic is the same for everybody, so even though this driver is a nonrandom sample, her traffic report can probably generalize to the other drivers.

Indicate how many variables frequency, association, and causal claims typically involve.

Frequency: one variable Association: at least two variables Causal: at least two variables (at least one variable is manipulated)

Describe why experience usually has no comparison group and usually has confounds.

Experience has no comparison group because it is one person's experience; there is nothing to compare it to. A comparison group enables us to compare what would happen both with and without the thing we are interested in, and to reach a correct conclusion we need to know all of those values. Experience is also heavily confounded: too much is going on at once in real life, so it's impossible to isolate two variables, and many things could have been the cause. Research, in contrast, is probabilistic; it is not meant to explain all of the cases all of the time.

Describe what institutional review boards do and who serves on them.

IRB stands for Institutional Review Board. Institutional: each institution where research is conducted has its own. Review: the board evaluates potential research projects for ethical issues. An IRB is a committee responsible for interpreting ethical principles and ensuring that research using human participants is ethical. It includes at least five people from specific backgrounds: scientists, at least one member whose interests are primarily academic but outside science, and at least one community member.

Interpret different possible outcomes in cross-lag correlations, and make a causal inference suggested by each pattern.

In this example, parental overpraise at earlier time periods was significantly correlated with later child narcissism, suggesting that overpraise leads to narcissism. The opposite pattern is also possible: narcissism at earlier time periods significantly correlated with later overpraise, suggesting that narcissism leads to overpraise. Another option is that both cross-lag correlations are significant: overpraise at Time 1 predicted narcissism at Time 2, and narcissism at Time 1 predicted overpraise at Time 2. If that had been the result, it would mean excessive praise and narcissistic tendencies are mutually reinforcing; in other words, there is a cycle in which overpraise leads to narcissism, which leads parents to overpraise, and so on.

Identify an experiment's independent, dependent, and control variables.

Independent variable: In an experiment, a variable that is manipulated. In a multiple-regression analysis, a predictor variable used to explain variance in the criterion variable. See also dependent variable. Dependent variable: In an experiment, the variable that is measured. In a multiple-regression analysis, the single outcome, or criterion variable, the researchers are most interested in understanding or predicting. Also called outcome variable Control variable: In an experiment, a variable that a researcher holds constant on purpose.

Explain how longitudinal designs are conducted.

Longitudinal designs measure the same variables in the same people at different points in time. Using a longitudinal design allows us to establish temporal precedence, because the variables are measured at different time points. These are multivariate designs because more than one variable is measured.

Explain why experiments are superior to multiple-regression designs for controlling for third variables.

Longitudinal designs can establish temporal precedence, and multiple regression tries to control for third variables, but it can only control for the variables the researchers actually measured. An experiment's manipulation, combined with random assignment, controls for everything, measured or not.

Distinguish measured from manipulated variables in a study.

Manipulated variable: A variable in an experiment that a researcher controls, such as by assigning participants to its different levels (values). Measured variable: A variable in a study whose levels (values) are observed and recorded. See also manipulated variable.

Describe matching, explain its role in establishing internal validity, and explain situations in which matching may be preferred to random assignment.

Matched groups (matching): an experimental design technique in which participants who are similar on some measured variable are grouped into sets; the members of each matched set are then randomly assigned to different experimental conditions.
A matched-groups design works best for small samples, or when there is a serious potential confound that is easy to measure (such as IQ, age, or gender).
In a matched-groups design: group participants by levels of some variable (IQ, age, gender), then randomly assign participants from each matched set to the different conditions.
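An illustrative sketch of that procedure with invented participants and IQ scores: sort on the matching variable, form matched pairs, then randomly assign within each pair:

```python
# Matched-groups assignment with hypothetical (ID, IQ) data.
import random

random.seed(0)
participants = [("P1", 120), ("P2", 95), ("P3", 118), ("P4", 99),
                ("P5", 110), ("P6", 108)]        # (ID, IQ score)

participants.sort(key=lambda p: p[1])             # order by the matching variable
for i in range(0, len(participants), 2):           # form matched pairs
    pair = participants[i:i + 2]
    random.shuffle(pair)                            # random assignment within the pair
    print("treatment:", pair[0][0], "| control:", pair[1][0])
```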

Discriminate between measured and manipulated variables.

Measured variable: A variable in a study whose levels (values) are observed and recorded. Manipulated variable: A variable in an experiment that a researcher controls, such as by assigning participants to its different levels (values).

Identify examples in which writers' and researchers' claims are not justified by the studies they are describing.

The popular press may publish stories that make claims not based on research, such as headlines like "I feel I've overcome ADHD" or "Stress ball factory worker attacks boss." Such headlines do not report the results of research; they may report one person's solution to a problem or an expert's advice, but say nothing about the frequency of the problem or what research has shown to work. (p. 66)

Describe positive, negative, and zero associations.

Positive association: An association in which high levels of one variable go with high levels of the other variable, and low levels of one variable go with low levels of the other variable. Also called positive correlation Negative association: An association in which high levels of one variable go with low levels of the other variable, and vice versa. Also called inverse association, negative correlation. Zero association: A lack of systematic association between two variables. Also called zero correlation.

Identify posttest-only and pretest/posttest designs and explain when researchers might use each one.

Posttest-only design: an experiment using an independent-groups design in which participants are tested on the dependent variable only once (also called equivalent groups, posttest-only design). Researchers might use it when a pretest is impractical or could tip participants off to the purpose of the study.
Pretest/posttest design: an experiment using an independent-groups design in which participants are tested on the key dependent variable twice, once before and once after exposure to the independent variable. Researchers might use it to check that groups were equivalent at the start or to track how much each group changed.

Describe four techniques of nonrandom sampling: purposive, convenience, quota, and snowball sampling.

Purposive sampling: deliberately picking specific kinds of participants.
Convenience sampling: recruiting whoever is easiest to reach or replies (e.g., a SONA credit pool).
Quota sampling: filling set numbers of participants from particular demographic categories.
Snowball sampling: participants are asked to recruit additional participants (e.g., each participant recruits two more people, who each recruit two more).

Describe the different ways questions can be worded: open-ended, forced choice, and using rating scales.

Open-ended questions. Advantage: respondents can say whatever they want. Disadvantage: they might not provide what you need, and someone has to code the answers.
Forced choice. Advantage: easy to code, easy to respond to, faster responses. Disadvantage: maybe someone's response doesn't fit the options.
Rating scales: Likert scales (degree of agreement, e.g., strongly disagree to strongly agree) and semantic differential scales (anchor points on each end, two words that are opposites).

Explain how question order can change the meaning (and validity) of a question.

Early questions can affect later responses; to check for this, try different versions of the survey with different question orders.

Describe random assignment and explain its role in establishing internal validity.

Random assignment: the use of a random method (e.g., flipping a coin) to assign participants to different experimental groups. Randomly assigning participants to the different conditions turns systematic variability into unsystematic variability, thereby eliminating confounds.
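A minimal sketch of random assignment with made-up participant IDs; shuffling the list and splitting it in half assigns each person to a condition by chance:

```python
# Random assignment of 20 hypothetical participants to two conditions.
import random

random.seed(42)
participants = [f"P{i}" for i in range(1, 21)]
random.shuffle(participants)          # randomizes who ends up in which group

treatment = participants[:10]
control = participants[10:]
print("treatment:", treatment)
print("control:  ", control)
```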

Explain the advantages of research over intuition and experience.

Researchers—scientific reasoners—create comparison groups and look at all the data. Rather than base their theories on hunches, researchers dig deeper and generate data through rigorous studies.

Describe how scatterplots, r, and known groups can be used to evaluate predictive, concurrent, convergent, and discriminant validity.

Scatterplots are useful for assessing the agreement between two administrations of the same measurement; they are good for test-retest and interrater reliability and are a first step in evaluating reliability. The correlation coefficient (r) is a single number that says how closely the dots cluster around a line drawn through them. The same tools, along with known-groups evidence, can be used to check whether a measure correlates with relevant criteria (predictive/concurrent validity), correlates strongly with similar measures (convergent validity), and correlates only weakly with dissimilar measures (discriminant validity).

Explain five techniques for random sampling: simple random, cluster, systematic, stratified random sampling, and oversampling.

Simple random sampling: every member of the population has an equal probability of being selected for the sample (e.g., drawing names out of a hat).
Cluster sampling: selection is at the group level rather than at the individual level. Ex: to sample high school students in Pennsylvania, a researcher could randomly select 100 high schools (clusters) and then include every student from each of those 100 schools.
Multistage sampling: two random samples are selected, a random sample of clusters and then a random sample of people within those clusters. Ex: the researcher starts with a list of high schools (clusters) in the state, selects a random 100 of those schools, and then selects a random sample of students from each of the 100 selected schools.
Systematic sampling: every nth member of the target population is selected. Ex: using a computer or a random number table, the researcher selects two random numbers, say 4 and 7; starting with the fourth person in the room, the researcher chooses every seventh person until the sample reaches the desired size.
Stratified random sampling: the population is divided into demographic strata (categories), and participants are randomly selected from within each stratum so that every stratum is represented.
Oversampling: a variation of stratified random sampling in which the researcher intentionally overrepresents one or more groups, deliberately sampling more people from one group than its share of the population.
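For illustration only, a hedged sketch of three of these techniques with an invented population of 100 students:

```python
# Simple random, systematic, and stratified random sampling on made-up data.
import random

random.seed(1)
population = [f"student_{i}" for i in range(1, 101)]

# Simple random sampling: every member equally likely to be chosen
simple = random.sample(population, k=10)

# Systematic sampling: random start, then every nth member
start, n = random.randrange(10), 10
systematic = population[start::n]

# Stratified random sampling: random draws from within each stratum
strata = {"first_years": population[:50], "second_years": population[50:]}
stratified = [person for group in strata.values()
              for person in random.sample(group, k=5)]

print(simple, systematic, stratified, sep="\n")
```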

Operational definition

Taking something abstract and making it measurable; a description of how we make a variable measurable.

Name and define the three ethical principles of the Belmont Report. Pg. 99

The Belmont Report (1979):
Respect for persons: research participants must be treated as autonomous agents, meaning their right to choose what they will do should be respected. Participants must usually give informed consent: they must be given enough information about the research and its risks and benefits to make their decision. Participants must not be coerced (no implication of negative consequences for not participating), and researchers must not exert undue influence (offering an incentive too big to refuse). People with reduced autonomy (limited freedom or ability to choose), such as children, prisoners, and people with mental illness, must be protected. (Informed consent; protection of vulnerable populations.)
Beneficence: protecting people from harm and undue risk. Research must have potential for benefit, to the individuals participating and/or to society, in order to justify any risks; psychologists must weigh the risks and benefits of a study (consider the Milgram experiment or a breach of confidentiality). (Cost-benefit analysis for participants and for society.)
Justice: risks and benefits should be balanced across groups and individuals. The sample should be representative, not biased; if the participants are recruited from one particular group (ethnicity, gender, socioeconomic status) but another group stands to benefit, the study violates the principle of justice. (How are participants selected? Do they represent the people who will benefit from the study?)

Explain informed consent and the protection of vulnerable groups (applying the principle of respect for persons).

Informed consent: each person learns about the research project, knows the risks and benefits, and decides whether to participate. Protection of vulnerable groups applies to children, people with developmental disabilities, and prisoners. (p. 96)

Give examples of ways that researchers dig deeper by doing more than just one study on a research question.

They ask more questions (pp. 16-17): about the size of effects, how frequently something occurs, comparisons and contrasts between conditions, and so on.

Explain why experimenters usually prioritize internal validity over external validity when it is difficult to achieve both.

Experimenters want a clean manipulation, but being in a lab may make participants and settings unrepresentative. They sacrifice real-world representativeness for internal validity in order to find results without confounds.

Use the three causal criteria to analyze an experiment's ability to support a causal claim.

Three criteria for a causal claim:
Covariance
Temporal precedence: the independent variable is manipulated before the dependent variable is measured
Internal validity

Criteria for Causal Claims

Covariance: there is an association between A and B. If there is no associative relationship between the two variables, obviously there is no causal relationship.
Temporal precedence: A comes before B in time (beware "post hoc, ergo propter hoc": after, therefore because of).
Internal validity: there are no other possible causes for B except A; in other words, there are no confounds.
Each ingredient by itself is necessary but not sufficient; a causal claim requires all three together. The best way to establish causation is by conducting an experiment, in which one of the variables is manipulated.

Identify three types of reliability (test-retest, interrater, and internal), and explain when each type is relevant.

Test-retest reliability: people get consistent scores each time they take the test (consistency over time). Relevant to self-report, observational, and physiological measures.
Internal reliability (internal consistency): people get consistent scores across every item in a questionnaire; the parts are similar to the whole.
Interrater reliability: different observers' ratings are consistent with each other. Relevant mainly to observational measures that involve human raters of behavior.
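As a hypothetical illustration (simulated scores, not course data), test-retest reliability can be checked with a correlation and internal consistency with a hand-rolled Cronbach's alpha:

```python
# Test-retest and internal-consistency checks on simulated scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)

# Test-retest: the same people take the same measure on two occasions
time1 = rng.normal(100, 15, size=50)
time2 = time1 + rng.normal(0, 5, size=50)   # similar scores the second time
print("test-retest r:", round(pearsonr(time1, time2)[0], 2))

# Internal consistency: Cronbach's alpha for a people-by-items score matrix
items = rng.normal(3, 1, size=(50, 1)) + rng.normal(0, 0.5, size=(50, 5))
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print("Cronbach's alpha:", round(alpha, 2))
```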

Explain how researchers might evaluate the risks and benefits of a study (applying the principle of beneficence).

Researchers weigh the risks and benefits: they must protect participants from harm, and there has to be some potential benefit (to the participants and/or to society) that outweighs the risks.

Describe the difference between the validity and the reliability of a measure.

Validity of a measure: whether it measures what it claims to measure. Reliability of a measure: CONSISTENCY; the measurement can be repeated with similar results. A measure must be reliable in order to be valid, but reliability alone does not guarantee validity.

Explain why it is more important, when assessing external validity, to ask how a sample was collected rather than how large the sample is.

After a random sample reaches about 1,000 people, it takes many more people to gain just a little more accuracy in the margin of error; that's why many researchers consider 1,000 an optimal balance between statistical accuracy and polling effort. A sample of 1,000 people, as long as it is random, allows them to generalize to the population quite accurately. In effect, sample size is not an external validity issue; it is a statistical validity issue. External validity depends on how the sample was collected, not on how large it is.

Confound

An alternative explanation for an outcome. Our personal experiences are heavily confounded because so many things are going on at once.

Interrogate the construct validity of an association claim, asking whether the measurement of each variable was reliable and valid.

Ask questions about how each of the measured variables was operationalized: is each measure reliable, and is it valid?

Causal

Attempts to establish that one variable causes the other. At least one variable is manipulated, the claim uses causal language, and it must satisfy the three criteria for causal claims.

- Recognize the difference between a conceptual variable and its operationalization.

Conceptual variable: the researcher's definition of the variable in question at a theoretical level.
Operationalization: the researcher's specific decision about how to measure or manipulate the conceptual variable.

Explain why control variables can help an experimenter eliminate design confounds.

Potential confounds can be turned into control variables, held constant across conditions, so a design confound (one created by the experimenter) can be eliminated.

Identify face and content validity.

Face validity: the extent to which a measure appears, on its face, to measure what it claims to measure.
Content validity: the extent to which a measure covers all parts of the construct being measured (e.g., does this test cover everything that intelligence involves?).

Construct Validity

A construct is an abstract idea. Construct validity asks how well we have taken a variable and operationalized it to make it measurable: does the study measure (or manipulate) what it claims to measure? A construct validity critique sounds like, "I don't think they are measuring their variable correctly."

Frequency claim

For a frequency claim, construct validity and external validity are the most important to interrogate; statistical validity (e.g., the margin of error) also matters, while internal validity is not relevant because no causal claim is being made.

Identify criterion, convergent, and discriminant validity.

Criterion validity: does the measure correlate with relevant outcomes or key behaviors?
Convergent validity: scores from the measure are strongly correlated with scores from other, already accepted measures of the same or similar constructs.
Discriminant validity: scores from the measure are weakly correlated (or uncorrelated) with measures of theoretically different constructs.

External validity

Do my results apply beyond the people in my study? Is the finding true for the greater population, and would it apply to different populations, not just other groups of people but also other geographic locales? It depends on who was in your study, how they were selected, and how the study was conducted.

Estimate results from a correlational study with two quantitative variables by looking at a scatterplot.

Look at the slope direction of the scatterplot to estimate whether the association is positive or negative, and at how tightly the points cluster around a line to estimate the strength (the effect size, r, playing a role analogous to Cohen's d).

List the forms that research-based information can take

Empirical journal articles, review journal articles, books, and chapters in edited books. A meta-analysis combines the results of many studies and gives a number that summarizes the magnitude, or effect size, of the relationship.
Parts of an empirical research article: Abstract, Introduction, Method, Results, Discussion.

Explain why a random sample is more likely to be a representative sample and why representative samples have external validity to a particular population.

In a random sample, every member of the population has an equal chance of being in the sample, so all members of the population, no matter whether they are nearby, easy to contact, or motivated, are equally likely to be represented. That is why the results can generalize (have external validity) to that particular population.

Interrogate two aspects of external validity for an experiment (generalization to other populations and to other settings).

Random sampling: generalization to other people. Multiple studies in different settings: generalization to other situations.

Be cautious about accepting the conclusions of authority figures (especially conclusions not based on research).

Researchers get to see every side of a question, whereas with your own experience you see only one possible condition. Research is better evidence because consistent results from several studies mean scientists can be more confident in a conclusion than in experience or intuition (using hunches about what seems "natural," or attempting to look at things "logically"). The same caution applies to authority figures: their conclusions are only as trustworthy as the research behind them.

Apply the correlation coefficient, r, as a way to describe the direction and strength of a relationship shown in a scatterplot. (In this chapter, r is relevant as a common statistic to describe reliability and validity.)

The slope direction shows the direction of the relationship (negative or positive), and how close the dots are to the line shows the strength. The closer r is to 1 (or -1), the stronger the correlation; the sign only indicates direction. For reliability we want a strong positive correlation, and correlations can be used to evaluate the reliability of a measure.

Distinguish an association claim, which requires that a study meet only one of the three rules for causation (covariance), from a causal claim, which requires that the study also establish temporal precedence and internal validity.

An association claim only needs covariance (based on the study's results: the extent to which two variables are observed to go together). A causal claim (based on the study's method) needs covariance, temporal precedence (the causal variable comes first in time, before the other variable), and internal validity (the third-variable criterion: the study's ability to eliminate alternative explanations for the association).

