PS170 Cumulative Final

Human subjects

"Human subjects" means a living individual about whom an investigator (whether professional or student) conductingresearch obtains: 1. Data through intervention or interaction with the individual, or 2. identifiable private information - ex. You design and field an anonymous survey with noidentifying information. . .→Yes, because there is interaction (communication of a sort between researcher and subject, and potentially intervention) - You analyze politicians' communication styles via televised speeches. . .→No, because there is no interaction, intervention, or private identifiable information ex. You get access to a government database that containsonly dead peoples' addresses and date of birth. . .→No, because no interaction or intervention, and humansubjects research applies to living people - You analyze a government database that contains livingpeoples' addresses and date of birth. . .→Yes, because although there was no interaction or intervention, you are working with identifiable privateinformation

"Natural" Experiments

(not true experiments, but close!) - leverage natural randomization - hard to "find," but can help to answer lots of big questions using real behavior and treatments - Natural experiments are experiments in which the intervention is not under the control of the researcher (sometimes called "found" experiments or "pseudo-experiments") - the interventions are described as "quasi-experimental" because they occur "such that the analyst can separate observations into equivalent treatment and control groups" →equivalent: identical except for the treatment

Tannenwald's Theory

- "simple deterrence" cannot explain the puzzle - Why haven't countries used nukes when there was no possibility of retaliation? - Other "material" theories (e.g., long-term effects) could possibly explain it - She argues for nuclear taboo: a normative prohibition on the use of nuclear weapons - can co-exist with material causes - Goal is to test whether the taboo has a causal effect - does not require ruling out all other explanations, just showing that a taboo also operated - She points out three "normative effects" of the taboo: - Regulative/constraining: norm against using nuclear weapons first- - Constitutive effects: creating categories of weapons and shaping the idea of what is a civilized state does - Permissive effects: turns our normative attention away from other destructive weapons because we focus on "WMD" - It's very difficult to measure "norms" so Tannenwald turns to small-N process tracing - Looks at nuclear use and non-use by the U.S. - lays out the kind of evidence that would indicate a causal effect of norms vs. other types of explanations - An implication: people talk and act as if they believed a taboo exists - What kind of test would we say this is?- probably a "straw in the wind" test because it's only suggestive - only suggestive because people might talk that way for social desirability reasons, not because they are actually constrained by a taboo - though this is debatable: what if talking about it for any reason is actually constraining?

Audit study

- (sometimes called a "resume study"): A study that examines racial and other forms of discrimination by sending matched pairs of individuals to apply for jobs, purchase a car, rent an apartment, etc. - The only thing that differs between the subjects is the thing you are studying: race, gender, religion, etc.

Trachtenberg's Method: 1941 Case

- A method of learning about history by critically analyzing secondary sources (and sometimes eventually moving to primary sources). The steps:
1. Figure out what the most important secondary books are on a topic
2. Read those books. For each book: think carefully about the logic of the argument, think carefully about the evidence, and draw conclusions about who is more persuasive
3. Where appropriate, go back to the original sources and think about whether they are being interpreted appropriately
- This is a way to get information without wading through the archives yourself → In short: analyze secondary sources very critically!

between vs within subjects

- Between-subjects: individuals get assigned to groups; one group gets treatment, one gets control. We compare across groups or people, and we learn about an "on-average" causal effect but not about individuals
- Within-subjects: each person serves as both a treatment and a control. We compare outcomes for each individual, and we learn about individual-level causal effects

Straw-in-the-wind

- Can increase the plausibility of a hypothesis (or raise doubts), but is not decisive by itself - provides neither necessary nor sufficient evidence for accepting or rejecting a hypothesis - the weakest of the four tests, but still provides a valuable initial assessment - e.g., consider the bill found in Straker's pocket (plus Straker's wife's ignorance about the dress): it lends weight to suspicions about Straker, but is not persuasive on its own (there are other plausible explanations for it)

Treatment vs Control group

- Create two groups, the treatment (T) and the control (C)
- Randomly assign (coin flip!) subjects to a group (either T or C)
- Expose T to the treatment, and C to nothing (or some baseline condition)
- Measure the outcome (Y)
- Compare the outcome measure for the T group and the C group → if the experiment is well designed (e.g., "balanced"), you can rule out the influence of confounders and establish that the treatment causes the difference in outcomes between the T and C groups
- Treatment and control groups are similar (really, identical) on every dimension except for the intervention → this means similar on observed variables (e.g., age), but the power comes from being similar on unobserved variables (e.g., emotional state)
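
A minimal sketch of this logic in Python (not from the course; all numbers are invented for illustration): random assignment makes the two groups equivalent in expectation, so the difference in group means recovers the treatment effect.

```python
import random

random.seed(42)

def run_experiment(n=1000, true_effect=2.0):
    # Unobserved confounders are folded into each subject's baseline outcome
    baselines = [random.gauss(10, 3) for _ in range(n)]
    # Coin-flip assignment: which subjects land in T is purely random
    treated = set(random.sample(range(n), n // 2))
    t = [baselines[i] + true_effect for i in treated]         # treated outcomes
    c = [baselines[i] for i in range(n) if i not in treated]  # control outcomes
    return sum(t) / len(t) - sum(c) / len(c)                  # difference in means

print(round(run_experiment(), 2))  # close to the true effect of 2.0
```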

Verifiable data

- Data: information recorded from observation; in order to be verifiable, it must be observable to others - AKA "empirics" or "evidence": evidence based on and verifiable by observation or experience rather than theory or pure logic - Allows others to check your evidence (transparent in that we can reproduce the work that others do)

Survey Experiments

- Field a survey and randomly manipulate some aspect of the survey (e.g., describe some problem with either a "mortality" or a "survival" rate) - used when you are most interested in preferences or beliefs; inexpensive - Embed an experiment in a survey instrument by randomly assigning different versions of the survey to different people - In short, you administer random treatments via survey and see whether respondents respond differently - This approach is growing in popularity given the low cost of internet surveys
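
A hypothetical sketch of embedding such a treatment in a survey instrument (the question wording and setup are illustrative assumptions, not taken from an actual study):

```python
import random

random.seed(3)

FRAMES = {
    "mortality": "This treatment has a 10% mortality rate. Do you support it?",
    "survival":  "This treatment has a 90% survival rate. Do you support it?",
}

def assign_frame(respondent_id):
    # Each respondent is randomly shown exactly one version of the question
    frame = random.choice(list(FRAMES))
    return {"id": respondent_id, "frame": frame, "question": FRAMES[frame]}

for rid in range(4):
    print(assign_frame(rid))
# Comparing support rates across the two frames estimates the framing effect.
```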

Get out the vote field experiment

- First work on the effect of campaign spending on election outcomes used basic regressions (and assumed no omitted variables or confounding!) - work like this often found that campaign spending decreased vote share!
- Sample: obtained a complete list of all registered voters, and made a list of all households with one or two registered voters; attempted to find and delete student mailing addresses, and were left with 29,380
- Design: Conditions: Personal canvassing (2), Telephone call (2), Number of direct mailings (4) - a 2 x 2 x 4 experimental design (factorial!)
- If you have more than one experimental manipulation, they are (ideally) supposed to be randomized independently - i.e., whether you are in a particular personal canvassing condition shouldn't affect what telephone call condition you are in! (see the sketch below)
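
A sketch of what independent randomization across the three factors might look like (the factor names mirror the study, but the level codings, e.g., 0-3 mailings, are illustrative assumptions):

```python
import random

random.seed(7)

def assign(household_id):
    # Each factor is drawn independently of the others
    return {
        "household": household_id,
        "canvass": random.choice([True, False]),   # factor 1: 2 levels
        "phone":   random.choice([True, False]),   # factor 2: 2 levels
        "mailings": random.choice([0, 1, 2, 3]),   # factor 3: 4 levels
    }

# Independent draws mean a household's canvassing condition carries no
# information about its phone or mailing condition.
for row in [assign(i) for i in range(5)]:
    print(row)
```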

incentives of researchers and peer review

- In addition to wanting to discover scientific truth... they want a job, and they want tenure, continued employment, and recognition/raises → both require peer-reviewed publications
- What fares best in peer review (in addition to research that appears scientifically rigorous): research that has statistically significant results, and research that is surprising or counter-intuitive → so researchers have incentives to find novel, statistically significant results

Internal vs external validity

- Internal validity: Did we accurately identify the causal effect we care about? Are there factors other than the intended independent variable that could be responsible for the outcome? could the treatment have manipulated something other than the intended IV? - External validity: extent to which experimental findings (specifically: causal effects) may be generalized to other settings, measurements, populations, and time periods. Convenience sample could be an issue here of being nonrepresentative. In general, the artificiality of the setting is a weakness of lab experiments

Fundamental Problem of Causal Inference

- It's the problem caused by the fact that researchers never observe the counterfactual: a researcher can never be absolutely certain that the treatment caused the outcome without knowing the counterfactual - ex. we only observe plants that either had music or didn't (one state of the world at a time) - Experiments solve this problem in the following way: by randomly assigning units to treatment/control, we make those two groups equivalent (identical) in expectation - The only thing that differs is the treatment, so we can learn about the average treatment effect by comparing the means of the two groups
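
A toy illustration with invented numbers: each unit has two potential outcomes, but random assignment reveals only one of them, so unit-level effects are unknowable while the average effect is still estimable.

```python
import random

random.seed(1)

n, true_effect = 1000, 1.5
y_control = [random.gauss(5, 1) for _ in range(n)]  # outcome without music
y_treated = [y + true_effect for y in y_control]    # outcome with music
treated = set(random.sample(range(n), n // 2))      # random assignment

# For each unit we observe exactly ONE of its two potential outcomes
t_obs = [y_treated[i] for i in treated]
c_obs = [y_control[i] for i in range(n) if i not in treated]

# Unit-level effects can never be computed, but the group comparison works:
print(round(sum(t_obs) / len(t_obs) - sum(c_obs) / len(c_obs), 2))  # ~1.5
```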

Field Experiments

- Randomly assign a treatment "in the field" (e.g., who gets information about whether their neighbors voted) - typically measure real-world behavior as the outcome; oftentimes subjects do not know they are part of an experiment - Field experiments seek to combine the internal validity of randomized experiments with increased external validity - this is accomplished by conducting the experiment in a real-world setting - another unique feature is that subjects are (sometimes) unaware that they are taking part in an experiment - A study that has all of the elements of an experiment (like random assignment to treatment) but that is carried out in a real-world setting or naturally occurring environment, rather than in a lab → the DV is often some real-world behavior

Lab Experiments

- Recruit subjects to a lab setting and manipulate a treatment (e.g., who gets applause) - best when you need experimental control or to measure something that can only be done in a lab (e.g., heart rate) - units of observation are randomly assigned - treatment effects are measured by comparing outcomes across groups - Lab experiments are used to maximize internal validity by increasing experimental control - lots of experimental control (e.g., no phones) - subjects are usually convenience samples - often, there is an element of interaction, unusual measurement (e.g., skin conductance), or deception (mostly in psychology studies) - often concerned with behavior rather than beliefs

Research

- Research: systematic investigation designed to yield generalizable knowledge - would include anything from a pilot project to a full-scale study - Does not include, for example, classroom activities to teach methods - these are not designed to yield new generalizable knowledge, just to teach existing knowledge

Money and influence in congress field experiment

- Researchers worked with a (real) political group - The group tried to schedule meetings between local campaign contributors and Members of Congress in 191 congressional districts - Randomly assigned how it described who would attend the meetings: - Treatment: "active political donors" - Control: "concerned constituents" - When attendees were described as "donors," they often got to meet with Members of Congress, Legislative Directors, and Chiefs of Staff - Not so when they were described as "local constituents": access to high-level people less than a third as often!

Collection of written materials

- Sometimes the written materials you want simply are not available - We talked about archives already - you may need to travel to another site to access archives - Or you may need to gather pamphlets, flyers, etc.

Systematic Observation and Analysis

- Systematic: methodical, organized, orderly; follows a clear and justifiable series of steps - As opposed to unsystematic, disorderly, ad hoc - Minimizes bias and error, and makes it easier for others to replicate your work: clear steps that someone else could follow

Interview techniques

- Tape and transcribe all interviews - Build rapport by asking non-threatening questions first - Be flexible with your set of questions - some close-ended questions may not end up working very well - Pay attention to what is said in between questions

Participant observation

- The researcher becomes a "participant" in the culture or context being observed - They then try to observe the world through the eyes of a member of that culture - Very tricky: you have to learn how to enter the context without changing it with your presence - Often requires months or years of intensive work, because the researcher needs to become accepted as a natural part of the culture in order to ensure that the observations are not affected by the researcher's presence - Participant observation is often associated with anthropology - ex. Margaret Mead living among cultures she had thought of as "primitive"

Manipulation Check

- To ensure that the treatment had the intended consequence, and no unintended consequences, researchers often ask manipulation check questions: - Procedure used to provide evidence that participants interpreted the manipulation of the independent variable the way intended - So, for example, if researcher randomly assigned subjects to either a fear or control condition, they might ask them after to "rate" their current mood/emotions to make sure the manipulation worked - In other cases, might ask respondents to recall portions of the experiment to make sure that they remember what they read

Direct observation

- Try to unobtrusively observe a phenomenon - watch, rather than take part - For example: take video, observe protests, ride subways, go to public meetings

The Milgram and Zimbardo experiments

- The Tuskegee Study is obviously horrific, but psychology as a field didn't fare much better - Many psychology studies in the 1950s and '60s have grown infamous for designs that now seem morally dubious (at best) - Among the worst offenders are studies like the Milgram study of obedience and the Zimbardo study on authority

Process-Tracing

- a method for analyzing single cases - also called "within-case analysis" → different from the across-case comparison logic exemplified by Mill's methods - you are attempting to observe the causal process in action, paying particular attention to the sequence of events → this requires generating additional implications of your theory → addresses causal mechanisms → addresses concerns about reverse causation (know the 4 types)

interpreting ambiguous information

- ambiguous information is typically interpreted in a manner that fits our preconceptions - for example, we typically have associations with colors (true throughout the world) - when interpreting ambiguous data (e.g., a high-speed tackle in football), the colors of uniforms can make that play seem more or less aggressive → in these cases, referees "see" what they expect to see based on preconceptions relating to the color of the uniforms

doubly decisive test

- confirms one hypothesis while eliminating all others - single tests that do this are rare in social science - provides both a necessary and a sufficient criterion for accepting an explanation

Logical Reasoning

- describes how you come up with your theory - Inductive: takes its cue from reality rather than from how you think things ought to work - Deductive: start with theory

Audience costs

- idea that leaders suffer "domestic audience costs" if they issue threats (or promises) and then fail to follow through - logic: the public prefers not making threats/promises at all to making them and then backing down - implication: audience costs (if they exist) should discourage leaders from making empty threats/promises for fear of losing public support or being voted out of office - empirical work is indirect; scholars look for "second-order implications" of audience costs → e.g., if audience costs exist, they should be different for democratic and autocratic states... → and if that's true, then we should see differences in the foreign policies (something that's observable!) of those two kinds of regimes - but even if we observe differences between autocracies and democracies, it's not clear whether those differences are due to audience costs! - can be hard to determine: we may never actually observe them - even if we don't observe audience costs often in history, we can study them using survey experiments!

complexity because (and examples)

- in a system (units/elements are interconnected), chains of consequence extend over time and many areas - we often expect linear relationships: if giving people a $10 bonus every day makes them 20% happier, giving them double should make them 40% happier! - but there are often "diminishing returns" to scale - the effect of one variable might depend on another (an interactive effect) - democracy might lead to more peace, but only in the presence of other democracies - behavior changes the environment in which we act - leads to miscalculation by leaders when they don't take this into account

Field Research

- leaving your institution to collect data or information for a research project - the location can be anywhere - the form of research (e.g., interviews or participant observation) depends on the research project - but one constant is that you are trying not to influence the human subjects you are studying - good for when you can't get the information you need in a library - that could happen for a variety of reasons: written materials are biased or nonexistent, so you need to interact with/observe people directly; or you need written materials, but they are not available outside the country

Smoking Gun Test

- more demanding than "hoop test" - lack of a smoking gun doesn't imply innocence, but possession of one is not good - provides sufficient criterion for accepting explanation, but is not necessary

Hoop test

- more demanding than "straw-in-the-wind"I hypothesis must "jump through the hoop" in order to remain under consideration - doesn't provide sufficient criterion for accepting explanation, but is necessary - by implication, it cannot confirm a hypothesis, but can lead you to reject it

how we filter new information

- not like robots - enormous tendency for expectations, preconceptions, motivation and prior beliefs to influence our interpretation of new information - when we examine new evidence, information consistent with our beliefs is accepted at face value, while we interrogate, scrutinize and discount information that contradicts our beliefs

inductive logic

- reasoning in which the conclusion is implied by, but goes beyond, the evidence at hand and, hence, may or may not be true - a bottom-up approach: data → empirical patterns → theory - usually for qualitative research: 1. get data 2. look for patterns 3. formulate a theory that fits the data 4. test the theory on new data

deductive logic

- reasoning in which the conclusion necessarily follows if the evidence is true - theory → hypothesis → data - usually for quantitative research: 1. develop a theory starting from first principles 2. generate hypotheses from the theory 3. get data 4. test the theory on those data

unit of analysis

- the cases or entities you study; the unit of observation - What are you actually trying to measure? - common ones: individuals, schools, states, countries, etc.

comparative historical analysis or comparative case study

- the logic is similar to large-N analysis: you select cases that vary on the level of the IV and see whether the IV predicts variation in the DV - except you can't really control for confounders or establish a correlation with a high degree of confidence - remember, controlling for confounders in small-N means picking multiple cases that are similar on confounders but differ on the IV - so researchers often turn to process-tracing in single cases or across multiple studies (to play to the strengths of small-N methods)

implications of complexity

- the standard "scientific" approach is to change only one variable at a time to assess causal effects - "ceteris paribus" assumption ("all else equal") - but don't forget to take into account . . . - most behaviors or actions will have multiple effects- actions might impact environment, but actors will thenrespond to new environment

Uses of theory & definition

- theory: an interconnected set of propositions that explains how or why something occurs - theories help us understand and explain things that have already happened, predict what might happen, and help us find interesting puzzles worthy of study - theories simplify reality

Interviews/surveys (types and + and -)

- unstructured interviews - structured interviews (with or without closed-ended questions) - an in-person survey might be in the form of a structured interview - or you might just have a list of questions
Advantages: - It's a very direct method: you can ask exactly what you want to know about
Disadvantages: - You may not get honest answers - You may not get access - Many of the same problems as survey research, e.g., honesty, recall, representativeness of sample, selection bias, non-response bias - Even if the sample is representative, it may be too small to detect a meaningful "effect"

interpreting UNambiguous information

- when information is unambiguous, the way bias operates is different - for example, we rarely interpret unambiguous information in the exact opposite way it should be (after all, it's unambiguous!) - instead, bias operates by compelling us to apply more scrutiny to new information, dissect it, question it, etc. - example: pro- and anti-capital punishment individuals exposed to two studies of its deterrent efficacy (one positive, one negative) - people understand the implications of the studies perfectly well, but they change their beliefs about how convincing the research is, how well done it is, etc.

Framing effects

- when people give different answers to the same problem depending on how the problem is phrased (or framed) - termed "preference reversals" - the problems are logically (and normatively) equivalent - e.g., in the classic disease-outbreak problem, the certainty of saving 200 people is disproportionately attractive compared to a gamble with the same expected value - Doctors make the same mistake (mortality/survival rates)

Why study politics scientifically?

- when we vote, we rely on causal relationships about how the world works - many of our assumptions are wrong - science is a method (NOT A SUBJECT) that can help us evaluate our assumptions

Scope Conditions

- where does your theory apply? - temporal: does the theory hold only in a particular time period? - spatial: does the theory hold only in a particular place?

Mill's method of agreement

1. "agreement" on dependent variable (same value) 2. IVs are dissimilar except for one, which is the same → here, we learn about whether A causes Y (as long as we have not left out an important IV or confounder!)

problems with archives

1. Access - ex. most democracies make a lot of information available (within limitations, e.g., exceptions for national security), but non-democracies do not
2. Incomplete records - individuals/governments may choose not to (or may be pressured not to) record sensitive information - or, actors can later claim that their records are incomplete
3. Redaction of records - to redact: to select or adapt (as by obscuring or removing sensitive information) for publication or release - individuals and governments do this all the time
4. Interpretation of information - scholars rarely keep track of the entire "universe of documents" - how do you know they didn't cherry-pick? - you need to know a lot of background to interpret a document correctly - scholars can also have motivated biases - one possible (partial) solution: active citation → rigorous, annotated primary-source citations hyperlinked to the sources themselves

Challenges to internal validity

1. Double-barreled treatments - if some respondents see "prosperous democracy" and others see "poor authoritarian country"... - ideally, the wording would make it such that ONLY ONE THING is changing between experimental conditions
2. Information leakage - you are manipulating some feature of the world in a survey experiment - but what if manipulating that one feature (e.g., regime type) changes other beliefs? - for example, does saying a country is an autocracy make you think... it's in a particular part of the world?

Factorial Design

1. Factorial design: an experiment in which two or more variables (factors) are manipulated - Factor 1: Race (Black or White) - Factor 2: Object (tool or gun)
2. Because there are 2 variables/factors each with 2 levels, this is a 2x2 factorial design - one "place" for each variable: (# levels Factor 1) x (# levels Factor 2) x (# levels Factor 3), etc.
- If you had 3 variables with 2 levels, it would be... 2 x 2 x 2 - If you had 2 variables each with 3 levels, it would be... 3 x 3 - If you had 1 variable with 2 levels and 1 variable with 3 levels, it would be... 2 x 3
- You multiply the numbers to figure out how many total experimental conditions or "arms" there are - e.g., 2 x 2 = 4 possible conditions: Black person with tool, Black person with gun, White person with tool, White person with gun
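
A quick sketch of enumerating the arms of the 2x2 design above with Python's itertools (the factor lists come from the example; the helper itself is hypothetical):

```python
from itertools import product

race = ["Black", "White"]   # factor 1: 2 levels
obj = ["tool", "gun"]       # factor 2: 2 levels

# The number of arms is the product of the number of levels per factor
conditions = list(product(race, obj))
print(len(conditions))      # 2 x 2 = 4 experimental arms
for cond in conditions:
    print(cond)             # ('Black', 'tool'), ('Black', 'gun'), ...
```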

advantages to qualitative research

1. For inductive theory building - identify a research question, then identify cases that would help you develop some hypotheses - but remember: then you need new data to test the resulting hypotheses - don't evaluate your theory based on the case you used to develop it - ex. should teachers have guns: you might start via induction and choose some classrooms/schools to observe closely
2. When the research question involves a very rare event - if your question applies to a limited universe of cases, a small-N analysis may be your only option - ex. nuclear wars
3. When measuring the IV or DV is difficult - ex. do the beliefs of individual leaders matter for foreign policy in democracies?
4. For investigating causal mechanisms - sometimes you can measure the IV and DV, but you want to verify that the causal mechanism is what is explaining the correlation

Checklist for establishing causation

1. Is there a correlation between X and Y? - correlation: an association between 2 variables 2. Can we rule out reverse causation? - the possibility that Y could cause X 3. Is there a credible causal mechanism? 4. Have we controlled for all confounding variables? - confounding variable: a variable (Z) that is correlated with both the IV (X) and the DV (Y) and that somehow alters the relationship between the two

4 hurdles of causality

1. Is there a correlation between X and Y? 2. Is there a credible causal mechanism? (small-N good here) 3. Can we rule out reverse causation? (small-N good here) 4. Have we controlled for all confounding variables? → When N is small, you may draw the wrong conclusion → You can't fix this with "controls" in small-N research

Mill's Joint Method

1. Method of agreement... to provide evidence that C is a sufficient cause of E - "sufficient": C is enough to get E on its own - answers the question: are land shortages enough to cause peasant revolts?
2. Method of difference... to provide evidence that C is a necessary cause of E - "necessary": you can't get E without C - answers the question: do all peasant revolts require land shortages to precede them?

Scientific Method

1. Observe some aspect of the universe (in this case, something related to politics) 2. Generate a hypothesis about some causal relationship: a tentative explanation that accounts for what you observed 3. Use the hypothesis to make predictions 4. Test those predictions through experiments or further observation or data collection 5. Repeat: replicate, question, and redesign

Tips for getting honest answers in interviews

1. Promise to keep responses confidential 2. Emphasize the importance of honesty for academic research ("priming" honesty) 3. Signal that people have a wide range of opinions about these topics, thereby reducing social pressure to give a "correct" answer 4. In addition to asking what respondents themselves think, we might ask what other people would think/do 5. In a semi-structured interview, allow the subject to bring the sensitive topic up first 6. If you have a large enough sample, use something like a "list experiment" (sketched below)
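
A sketch of the list-experiment logic, with made-up items and a made-up prevalence: respondents report only how many items apply to them, and the difference in mean counts between groups estimates how many hold the sensitive view without anyone revealing it individually.

```python
import random

random.seed(11)

N_BASELINE = 3          # non-sensitive items shown to everyone
TRUE_PREVALENCE = 0.2   # assumed share holding the sensitive view

def respond(sees_sensitive_item):
    # Respondents report only HOW MANY items apply, never which ones
    count = sum(random.random() < 0.5 for _ in range(N_BASELINE))
    if sees_sensitive_item and random.random() < TRUE_PREVALENCE:
        count += 1
    return count

treatment = [respond(True) for _ in range(5000)]   # baseline + sensitive item
control = [respond(False) for _ in range(5000)]    # baseline items only

# The difference in mean counts estimates the prevalence (~0.2)
print(round(sum(treatment) / len(treatment) - sum(control) / len(control), 3))
```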

Types of natural experiments

1. Randomizing device (with a known probability) divides a population - e.g., a lottery! - political scientists have examined the effect of money on political attitudes (lottery-induced affluence increases hostility toward estate taxes, but has little effect on broader political attitudes)
2. Jurisdictional studies - make use of geographic divisions to study similar populations that find themselves by chance on opposite sides of some divide
3. Omnibus ("other") category - e.g., the effect of bad weather on economic outcomes

Pitfalls of case selection

1. Selecting on the DV - also known as "sampling on the DV" - this is when you choose cases to study based on the value of the dependent variable - ex. suppose you want to understand why people become domestic terrorists, so you go to a prison and interview several individuals convicted of terrorism; they tell you stories about poverty and family dysfunction → you conclude that poverty and family dysfunction cause terrorism - in order to determine whether there is really a relationship (and how strong it is), you should also interview people who did not become terrorists, and see how they differ
2. Selection bias/selection effects → note: these issues could apply to large-N analysis too - (natural) selection processes screen out cases whose values on key variables are above or below some implicit threshold - the result is a pool of observed cases whose values are abnormal when compared with the actual underlying population - when we analyze data that (knowingly or unknowingly) are subject to selection effects, we end up with selection bias: our conclusions are biased because the sample is biased

4 types of process-tracing tests

1. Straw-in-the-Wind 2. Hoop tests 3. Smoking Gun test 4. Doubly-decisive

2 ways of generating observable implications

1. About the IV/DV: "the IV increases the DV" 2. About the causal mechanism: you can also form a hypothesis about something you should observe about the causal mechanism (or process)

4 key components of theories

1. expectation (or prediction or hypothesis): relates your explanatory factor to your outcome causally; includes the IV and DV 2. causal mechanism: tells you why that causal relationship exists 3. assumptions: make explicit what things must be true for your theory to make sense 4. scope conditions: tell you when and where your theory applies

4 techniques for field research

1. interviews and surveys 2. participant observation 3. direct observation 4. collection of written materials - NOTE: all of the same concerns about selection bias, spurious correlation, reverse causation, etc., apply here (but only when you are testing causal theories, not when you are generating them)

Mill's method of difference

1. outcome (Y) must be different across the two cases 2. IVs are similar except for one, which is different → here, we learn that A is associated with Y and conversely that −A is associated with −Y

Natural experiments of history

1. some "perturbation" (treatment) is applied- initial conditions don't matter as much here- might compare treatment/no treatment (e.g., areas of Africa subjected to slave trading)- or compare different types of treatments (e.g., two halves ofHispaniola colonized by France/Spain) 2. no exogenous treatment, but different initial conditions- Pacific Islands differing in geography settled by single colonizing group →focus is almost always still on explaining differences in outcomes(though sometimes it can be of interest when outcomes are similar!)

how research can end up flawed and the solution

3 systematic ways that research can end up flawed:
- Fraud: falsifying data
- Intentional p-hacking: broadly, the practice of reanalyzing data in many different ways and only presenting the preferred results - AKA "fishing expeditions" - the results can be spurious correlations: correlations that are not what they appear to be
- Unintentional bias: motivated bias and the "garden of forking paths" - the unconscious tendency of individuals to fit their processing of information to conclusions that suit some end or goal → researchers often don't have a clear plan going in, and could justify lots of different choices; they often muddle their way through, trying lots of options → but without a clear research plan, you could end up with cherry-picked results
Solution: transparency
- Preregistration: publicly posting (in some agreed-upon database) a set of plans for research AND analysis - prevents p-hacking and garden-of-forking-paths problems (intentionally or unintentionally putting your finger on the scale) - increases our confidence in the findings that we do see get published
- Replication: new studies designed to see if they can get the same results as the original studies - more replications are always better - even better with different samples, or types of studies, or measures, or even designs

+ and - of participant observation

Advantages: - By becoming "part of" the culture, you gain access you otherwise would not have - You may see things that you didn't even know you were looking for (great opportunities for induction) Disadvantages: - Your presence may alter the phenomenon you are studying - This is an inherently subjective method, and it can be hard to be objective when you come to know people intimately →one partial solution is to be very self-reflective and to take copious and honest notes

+ and - of Direct Observation

Advantages: - Easier to do than participant observation - Your presence is less likely to affect the outcome because you are not actively participating Disadvantages: - Depending on your research question, you may not get useful or unbiased information - Like participant observation, there is a subjective element to a researcher observing things

Assumptions of Mill's methods

All research designs have assumptions (and tradeoffs) built into them: 1. assumes we have a full list of candidate causes to begin with 2. assumes multiple causation is not a problem (one cause for each effect) → if these conditions do not hold (and they often do not!), we are learning about associations, not causes

Direct or nonparticipant observation

Can be overt... - e.g., record families in their home (with consent), listen to them talk about politics - observe behavior at local political meetings to see how people act
...or "covert" - when a researcher conceals their identity (e.g., Laud Humphreys's (1970) research on sexual behavior in bathrooms) - the big drawback here is ethical concerns!

Dependent variable

DVs are the outcome; the thing you're trying to explain (Y)

bivariate data

Data with two variables (X causes Y)

Counterfactual logic

If it had been the case that C (or not C), it would have been the case that E (or not E). - examining one case - Suppose you believe C was a cause of E... 1. you can search for actual cases that resemble your case, except in these new cases C is absent (or different) - then, you can check to see the correspondence between C and E in all cases 2. you can imagine that C was absent, and ask whether E would have occurred in that counterfactual case - Counterfactuals are necessary for all types of research, including cases, large-N, etc.

University IRB

Institutional Review Board - university body that reviews all research that involves human subjects to make sure it conforms to federal and state guidelines - Most universities have their own IRBs - If you collect data without IRB approval, you are violating federal law - and jeopardizing your and your university's funding
BUT - the IRB process is designed to protect subjects, so it can neglect risks posed to researchers → e.g., fieldwork that could be dangerous - it is designed to protect individuals, so it does not address more diffuse harms → e.g., large-scale field experiments in small populations may increase polarization (or even alter outcomes in elections)

Observational Research vs Experimental

Observational - a non-experimental, correlational investigation where the value of the IV arises naturally - "nature" (not the researcher) assigns the value of the IV - even if we have done our best to measure and control for all confounders, we can't rule out those we didn't think of or couldn't measure
Experimental - in experiments, you randomly assign the value of the IV - random assignment: assignment of subjects to experimental conditions by means of a random device like a coin toss - advantage: then you know that the level of the IV is independent of any other factors, allowing you to rule out confounding - helpful for the last 3 hurdles to causation, but not the 1st

Probabilistic theories

Probabilistic theories of causality usually try to characterize or analyze causality in terms of probabilistic dependencies: they try to provide probabilistic criteria for deciding whether A causes B, and often maintain that causality just is the corresponding pattern of probabilistic relationships. - social science theories are more like the theory of "natural selection" - if our theory is that wealth causes political ideology, finding a case where it does not hold does not invalidate the theory - instead, we are interested in "on average" effects

Causal Mechanism

Provides a specific chain of steps, series of links, or other specific accounting of how or why changes in the causal variable (IV) affect the outcome variable (DV) - ex. "strict voter ID laws reduce voter turnout among low-income citizens" is missing the explanation of why this occurs. A possible causal mechanism: low-income citizens are less likely to have valid IDs because they are costly to obtain, so they vote at lower rates

Tuskegee Syphilis Study

Research study conducted by a branch of the U.S. government, lasting for roughly 50 years (ending in the 1970s), in which a sample of African American men diagnosed with syphilis were deliberately left untreated, without their knowledge, to learn about the lifetime course of the disease.

omitted variable bias

The correlation we see between X and Y is biased because we didn't control for Z

limits of Trachtenberg's Method

You are limited by what secondary sources have been written: - authors may not have covered your specific sub-topic extensively - authors may not have used all of the available archival evidence - authors may have had an ideological/political agenda or other blinders - therefore, this approach is not ideal, but it can be useful when using archives is not possible

R

a measure of the strength and direction of a linear relationship between 2 variables - r varies between -1 and +1: +1 = strong positive correlation, 0 = no correlation, -1 = strong negative correlation (the scatterplot slopes downward from left to right) - R^2: a measure of fit that indicates (approximately) the proportion of the variation in the DV explained by the IVs - ranges from 0 (poor fit) to 1 (close fit)
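
A minimal example of computing r (and, in the bivariate case, R^2) on made-up data with numpy:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # IV (made-up data)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # DV (made-up data)

r = np.corrcoef(x, y)[0, 1]   # Pearson r, always between -1 and +1
print(round(r, 3))            # near +1: strong positive linear relationship
print(round(r ** 2, 3))       # with one IV, R^2 is simply r squared
```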

hypothesis

a statement of the relationship between the dependent variable (DV or Y) and independent variable (IV or X) - observable implications

Counterfactuals

alternatives to what happened

Belmont Report

basic ethical principles for human subjects research: respect for persons, beneficence, and justice
1. Respect for persons incorporates: 1. individuals should be treated as autonomous agents, and 2. persons with diminished autonomy are entitled to protection
2. Beneficence: do not harm; maximize possible benefits and minimize possible harms
3. Justice: e.g., the selection of research subjects needs to be scrutinized in order to determine whether some classes are being systematically selected simply because of their easy availability, their compromised position, or their manipulability, rather than for reasons directly related to the problem being studied - whenever research supported by public funds leads to the development of therapeutic devices and procedures, justice demands both that these not provide advantages only to those who can afford them and that such research should not unduly involve persons from groups unlikely to be among the beneficiaries of subsequent applications of the research

Deterministic Laws

cause and effect are securely linked: "if X, then Y" - typical of the physical sciences

Assumptions

claims or beliefs (often implicit) about how the world operates; things we take for granted - things you assume in order to generate your prediction or for your causal mechanism to operate - you should identify controversial/important ones - ex. for voter ID: the theory assumes that IDs are costly to obtain; if IDs are free and can be ordered from home/work, the causal mechanism would not apply - often theories make assumptions about who the actors/decision-makers in your theory are, and what motivates the actors and their decisions

Archives

collections of original, unpublished material or primary sources

we determine effects by...

comparing reality to a counterfactual world in which one thing has changed... but it's nearly impossible to change just one thing! - there is no way to know if the "obvious and immediate" effect is going to be the dominant one - learning about the world is difficult!

Factual/Procedural question

describes the facts of the world

Degrees of freedom

df = # observations − (number of IVs + 1) - basic idea: the amount of independent information you have limits the number of parameters you can estimate - you want "positive" degrees of freedom for better estimates - example: 100 observations, 1 IV and 3 "controls" → df = 100 − (4 + 1) = 95 degrees of freedom - example: 1 case, 1 IV and 1 "control" → df = 1 − (2 + 1) = −2 degrees of freedom
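
A tiny (hypothetical) helper that applies the formula to the two examples above:

```python
def degrees_of_freedom(n_obs, n_ivs):
    # df = observations - (IVs + 1); the +1 accounts for the intercept
    return n_obs - (n_ivs + 1)

print(degrees_of_freedom(100, 4))  # 1 IV + 3 controls -> 95
print(degrees_of_freedom(1, 2))    # 1 case, 1 IV + 1 control -> -2 (can't estimate)
```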

Independent variable

the factor that affects or causes your DV - aka predictor variable (X)

empirical questions

how the world is; how the world works - good empirical research questions: - ask why things occur, rather than just asking about basic facts that you could look up somewhere - avoid too many proper nouns (too specific, not generalizable) - begin with a puzzle or intriguing outcome - have interesting implications for policy or history, etc. - shouldn't have a super simple answer or be too narrow

normative questions

how the world should be

Multivariate

involving more than two variables (X and Z cause Y)

4 attributes of science

logical reasoning, theory, data, systematic analysis

for small-N what type of case selection is best

non-probability sampling is often better - but it's tough to know when a case will be representative; best to evaluate on a case-by-case basis - small-N methods are generally used to examine the causal process (mechanisms and timing) - they can increase your confidence in the causal effect (IV → DV), but in a different way than other methods - whether you use a single or comparative case "design," you are probably not going to use random sampling, but will instead try to choose with care - no matter what you do, you are probably going to need to generate additional observable implications about the causal process, and pay attention as well to the timing

Empirical data

observable, objective data gained from the natural world

Code

to convert raw information into data using a very specific set of rules and categories to establish variable values

motivational biases and positive illusions

- we not only see what we expect to see, but particularly what we want to see - ex. watching a political debate, you're more likely to believe that your favored candidate won - positive illusions: people believe flattering things about themselves - the "Lake Wobegon" effect - one theory suggests we hold these beliefs because they satisfy important psychological needs (e.g., positive self-esteem) - other theories suggest a purely cognitive basis for self-serving beliefs - maybe we believe positive things are more likely to happen to us because we are more aware of our own efforts to bring about such experiences - it is very unlikely that one of these is totally correct or incorrect: they probably work in tandem

hypothetical questions

what might be in the future

The Hawthorne effect and field experiments

→ Hawthorne effect: being studied changes subjects' behavior - if true, a huge threat to the internal validity of some types of experiments - as it turns out, there was no evidence for the "Hawthorne effect" in the original Hawthorne study! → poor research design prevents us from being able to learn from that study - recent evidence suggests that there may be some minor effect of being observed, but not as large an effect as previously believed - still, field experiments are very useful in contexts where you believe that being observed would change subjects' behavior

