CHS 741


Why does having an instrument address the reverse causality problem?

-The instrument approximates random assignment: it predicts exposure to the policy but is independent of the outcome, so the outcome cannot be driving the exposure

What are the three categories of study design?

-Experiment: randomized and as close to the counterfactual as we can get (addresses lots of threats to internal validity) -Quasi-experimental: observational, but closer to observing the counterfactual -Pre-experiment: observational and lacks an approximation of the counterfactual

Name threats to Internal Validity?

-History -Maturation (or secular trends) -Testing -Instrumentation -Statistical regression to the mean -Selection -Experimental mortality

Donabedian Model to evaluate the policy?

-How do policies impact structures? -How do structures impact processes? -How do processes impact health outcomes?

How is selection different between internal and external validity?

-Internal selection: how good is the counterfactual? If we are seeking certain characteristics based on our outcome, then we are selecting -External selection: study samples (both groups) are very different from other circumstances in which your policy/program might be implemented

Policy motivation for the NIKPAY Study? -What was the research question? -How did they test the parallel trends assumption? -What else did Nikpay do to establish causality?

-Looking at payer mix rather than just Medicaid ED visits -Looking at Medicare as a falsification test -Stratify on expansion tercile (Medicaid enrollment growth)

Examples of a natural experiment?

-Medicaid expansion -State Covid-19 stay-at-home orders -State Covid-19 business closure policies

Quasi-Experimental Study Designs

-On again, off again: implement the policy and measure, take away the policy and measure again. Addresses many threats to validity (+), however it may be impractical to take the policy away (-) -Pre-post with a comparison (DD): the difference between the groups in the difference over time is the treatment effect -Time series: the difference in the trend before and after the treatment is your treatment effect; the pre-policy trend serves as the counterfactual. Addresses many threats to internal validity (+) but can be affected by history and instrumentation as well as selection (-) -Multiple time series: the difference in the difference of the outcome over time between the two groups is the treatment effect. Can control for history (+); by introducing a comparison you are adding a selection threat (-)

What data do we need to run a DD regression?

-Outcome and predictors (Group and Time)
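The coefficient cards in this set (β0 through β3) describe the standard DD model Y = β0 + β1·Post + β2·Treat + β3·(Treat×Post). A minimal sketch with simulated data (all numbers hypothetical) showing how the coefficients recover the card definitions:

```python
import numpy as np

# Simulated DD data: Treat (0 = comparison, 1 = treatment) and
# Post (0 = pre-policy, 1 = post-policy). All numbers are hypothetical.
rng = np.random.default_rng(0)
n = 4000
treat = rng.integers(0, 2, n)   # group indicator
post = rng.integers(0, 2, n)    # time indicator

# True parameters: baseline 10, secular trend +2, group gap +1, treatment effect +3
y = 10 + 2 * post + 1 * treat + 3 * treat * post + rng.normal(0, 1, n)

# Design matrix: intercept, Post, Treat, Treat x Post
X = np.column_stack([np.ones(n), post, treat, treat * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

b0, b1, b2, b3 = beta
print(f"b0 (baseline avg, comparison pre-policy) = {b0:.2f}")
print(f"b1 (pre/post change, comparison group)   = {b1:.2f}")
print(f"b2 (group gap, pre-policy)               = {b2:.2f}")
print(f"b3 (DD treatment effect)                 = {b3:.2f}")
```

With the group and time indicators as the only predictors, β3 on the interaction term is the difference-in-differences treatment effect.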

What is the difference between process and impact evaluation?

-Process evaluation: answering the question of how the policy/program was implemented -Impact evaluation: answering the question of whether the policy/program had the anticipated impact/effect

Interpreting Qualitative data

-Process: content analysis, or collecting overarching themes that help summarize the content -Impact: qualitative data cannot establish causality, but can be useful in "telling your story" in a way that may be more compelling (advocate-evaluator role)

Define and provide an example of a counterfactual?

-A philosophical construct: we cannot do both things (implement and not implement the policy) at the same time -The only way to definitively know if your policy caused the outcome is to observe the absence of the policy and see if you also observe the absence of the outcome. Ex: randomization or observational data

Two fundamental questions we must ask regarding how to judge whether a policy or program worked?

1) Is the policy/program working as intended? 2) How is the policy/program working?

Three stages of Evaluation?

Act 1: Developing/Creating a policy question Act 2: Conduct the evaluation to answer the question Act 3: Use the answers in decision making/politics

What do the coefficients represent on the difference in difference model?

Coefficients represent the slope in a linear relationship between outcome (Y) and independent variables (X)

Describe Act 2

Conducting the evaluation -Process evaluation -Impact evaluation

How can one conduct evaluation in a cultural context?

Encourage community members to participate in all acts

When should you use qualitative analysis for evaluation?

During process or impact evaluation

How to find a good comparison group?

Look for natural experiments: -When policy gets implemented in a way that is unrelated to the outcome -If policy implementation is truly independent of the outcome, the parallel trends assumption is more likely to be met. To use a natural experiment for a difference in difference design you need the policy to both be implemented (treatment) and NOT implemented (comparison)

When conducting a pre/post with a comparison group, how can researchers establish an internally valid estimate of the true effect (big assumption)?

Parallel trends assumption: the outcomes in the treatment group would have looked exactly like the outcomes in the comparison group in the absence of the policy. Researchers try to convince themselves (or question) that there isn't a meaningful selection bias. -Cannot be tested directly; it requires a counterfactual
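Although the assumption itself is untestable, researchers often compare pre-period trends between the two groups as indirect evidence, as in the Nikpay study's parallel trends check. A minimal sketch with hypothetical pre-policy outcome series:

```python
import numpy as np

# Hypothetical pre-policy outcome series, one value per quarter, for each group
quarters = np.arange(8)
treatment_pre = np.array([20.1, 20.9, 22.2, 22.8, 24.1, 24.9, 26.2, 26.8])
comparison_pre = np.array([15.0, 16.1, 16.9, 18.2, 18.8, 20.1, 20.9, 22.2])

# Fit a linear trend to each group's pre-period outcomes; similar slopes are
# indirect evidence consistent with the parallel trends assumption.
treat_slope = np.polyfit(quarters, treatment_pre, 1)[0]
comp_slope = np.polyfit(quarters, comparison_pre, 1)[0]
print(f"treatment pre-trend slope:  {treat_slope:.2f}")
print(f"comparison pre-trend slope: {comp_slope:.2f}")
```

Similar pre-period slopes do not prove the assumption; they only make a meaningful selection bias less plausible.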

How do we know that the treatment group is "exactly" like the control group?

Randomization

β3

Treatment effect

T or F: Quasi-experimental designs require a counterfactual?

True

β2

The impact of being in the treatment group vs. being in the comparison group pre-policy

β1

The impact of being post-policy versus being pre-policy in the comparison group

What did RAND HIE experiment find?

Higher cost-sharing was associated with reduced use of both appropriate and inappropriate services; free care (no cost-sharing) was associated with better health status among low-income participants

Define Internal Validity

The degree of confidence that the policy caused the observed outcome; threats to internal validity reduce researchers' confidence in causal conclusions

Define External Validity?

The degree of confidence that the findings apply to other contexts (generalizability/representativeness); threats to external validity reduce that confidence

How do researchers observe "the treatment effect"?

the difference between the pre-post difference in the treatment group (b-a) and the pre-post difference in the comparison group (d-c)
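The (b-a) minus (d-c) calculation can be checked with hypothetical numbers, where a/b are the treatment group's pre/post means and c/d the comparison group's:

```python
# Hypothetical group means (a, b = treatment pre/post; c, d = comparison pre/post)
a, b = 50.0, 62.0   # treatment group: pre-policy, post-policy
c, d = 48.0, 53.0   # comparison group: pre-policy, post-policy

treatment_change = b - a           # 12.0: change in the treatment group
comparison_change = d - c          # 5.0: change in the comparison group (secular trend)
dd_estimate = treatment_change - comparison_change
print(dd_estimate)                 # 7.0
```

Subtracting the comparison group's change removes the part of the treatment group's change that would have happened anyway.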

Define causation?

the relationship between two events, where one event (policy) results in the other event (outcome)

Why does adding a comparison group introduce selection?

Adding a comparison can introduce differences between the two groups that skew the estimate. The comparison group may have other differences from the treatment group that make it a less convincing counterfactual

β2+β3

the marginal impact of being in the treatment group versus the comparison group post-policy

β1+β3

the marginal impact of being post-policy versus pre-policy in the treatment group

Why can randomization be impractical?

Public policies are for everyone who is eligible (not a random sample of those who are eligible). There are also ethical reasons why we don't leave possible benefits to chance. Exception: the Oregon Health Insurance Experiment, in which expanded coverage was offered by lottery, thus randomizing who got new coverage and who did not

Can we randomize organizational policy?

Yes, depending on the nature of the policy (practicality). However, administering a large randomized controlled study can be more costly and resource-intensive than an observational study

Reasons for doing policy evaluation?

-Accountability: health care involves high levels of spending -Scarce resources: don't want to use precious resources on policies that don't work -Different aspects of public health can be evaluated

What could complicate coming up with a good evaluation problem?

-Different political motivation -Ideological goals

What are three scenarios where ethical issues may arise?

-Decision makers do not want results published that reflect negatively on them and on how they want to conduct business. -Negative results suppressed because it may impact future funding -Randomized control trial rejected because it denies people who need access to whatever the policy/program provides

What about when you don't have a natural experiment?

-Demonstrate that your two groups have similar demographics for things you have data for. -Selection becomes a threat

Describe Act 1

-Develop a policy -Figure out the right question -Partner with stakeholders

What are the two types of validity to be worried about?

-Internal -External

What are potential roles of the Evaluator?

-Participatory: work closely with the decision maker -Objective: removed from policy/program enterprise except to apply scientific research methods to answer questions about policy/program

What did the RAND HIE do?

-Randomized people under 61 -3-5 year study -Administered surveys, comprehensive clinical exams, administrative utilization data

Give examples of external validity?

-Reactive arrangements or Hawthorne effects: results are somehow dependent on the experimental circumstances and would not have been observed in the wild -Selection: study samples (both groups) are very different from other circumstances in which your policy/program might be implemented -Multiple treatments: the treatment group was subject to multiple interventions, not all present in the comparison group; if researchers applied just the intervention of interest, they wouldn't necessarily get the same result

How do we deal with reverse causality?

-Reverse causality: the outcome drives the policy rather than the other way around -By finding an instrument that strongly predicts the policy but is independent of the outcome, researchers can deal with reverse causality -Use the instrument to predict the policy and look at the impact of the predicted values of exposure on the outcome

What is the Goal of Study design?

-Study design is used to establish causation

Name ethical principles

-Systematic inquiry -Competence -Integrity/honesty -Respect for people -Responsibilities for general and public welfare

Experimental Study Designs

-The most important feature of this study design is randomization -The control group is used as a counterfactual -Threats to randomized study designs: instrumentation, testing, experimental mortality, cross-contamination

Give an example of how to control for history threat?

Add a comparison group (not a control group); it should isolate the "effect" of the policy, which should be the only difference between the two groups

Describe Act 3

Dissemination of results

RAND Health Insurance Experiment (HIE): what evaluation question did RAND HIE ask?

Does cost-sharing have an effect on health and other intermediate health outcomes?

Describe the relationship between qualitative and quantitative approaches?

Qualitative analysis can be hypothesis generating. In evaluation, this can be thought of as helping to figure out what outcomes we should measure using impact evaluation

β0

The average value in the pre-period in the comparison group (baseline average)

What is the alternative to randomization?

Observational study designs

