RMDA Final


Magnitude

The size of something; refers to the value of the coefficient. How big is it? Is it positive or negative? How would you interpret it in the context of the problem? For example, if you have b1 = 3, you would interpret the magnitude of the coefficient by saying "a 1 unit change in x is associated with a 3 unit change in y." (When the coefficient is on a dummy variable, magnitude is interpreted in terms of percentage points.)

Null result

- a finding in which the null hypothesis is not rejected

Statistically significant

- a coefficient is this when we reject the null hypothesis that it is zero. With larger data sets, substantively small effects can sometimes be this. If a regression line is flat, that tells us there is no relationship between y and x; if you pulled numbers out of a random jar, beta would be zero because it's all just chance. But we are using real numbers from the real world. With the social capital study, we want to rule out that the estimate is just chance noise around zero before arguing for a causal relationship; a coefficient is this if you can rule out that your estimate is zero.

Rule of thumb: if your estimate is at least two standard errors away from zero, you can probably reject the null; if it is closer than that, you probably cannot. An analogy: for a medication to be put on the market, it has to meet a much higher standard of evidence than nutritional claims do.

Three ways to determine whether something is statistically significant: 1. the p-value is less than .05, 2. the absolute value of the t-statistic is greater than the critical value (roughly 2 in large samples), and 3. the confidence interval does not contain zero. You never know the true value of beta; you only know whether your estimate is more than two standard errors away from zero. You will get this 5% of the time even if the true beta is 0. It has to do with being outside two standard errors from zero, not with some number that proves your study is right.

Type I error

- a hypothesis testing error that occurs when we reject a null hypothesis that is in fact true. A false positive. At the 95% confidence level there is a .05 chance of committing this error. Even though repeated samples cluster around the true mean, a small share of them land far enough out that the p-value leads us to reject anyway; the small shaded area in the tails of the null-hypothesis distribution is the chance of making this error. It occurs when the true beta is zero and you reject the null hypothesis (you mistakenly think you have enough evidence that it isn't zero). The problem is that we rarely know what type of error we are committing, because we never really know whether the true beta is zero or not. When we know the beta is not zero, there is a 0% chance of making this error, because the true state of the world is that there is an association, so you can't wrongly conclude there is one when there isn't. Shading for this error is at the tails of the distribution because estimates out there can deceive us into thinking the true beta is not zero.
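A minimal simulation sketch of the 5% claim above (data and seed are invented): when the true beta really is zero, a test at the .05 level still rejects about 5% of the time, and each of those rejections is a Type I error.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, reps = 100, 2000
false_positives = 0
for _ in range(reps):
    x = rng.normal(size=n)
    y = 2.0 + 0.0 * x + rng.normal(size=n)      # true beta1 is exactly zero
    res = sm.OLS(y, sm.add_constant(x)).fit()
    if res.pvalues[1] < 0.05:                   # we reject anyway -> Type I error
        false_positives += 1
print(false_positives / reps)                   # close to 0.05
```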

Standard error of the regression

- a measure of how well the model fits the data; it is the square root of the variance of the regression. More generally, a standard error tells you how far a sample estimate (such as a sample mean) will likely be from the true value, i.e., how spread out estimates from repeated samples would be around the truth.

External validity

- a research finding is this when it applies beyond the context in which the analysis was conducted. A random sample helps us with this by making the results more generalizable to the whole population.

Internal validity

- a research finding is this when it is based on a process that is free from systematic error. Random assignment helps with this. If an experiment has strong internal validity, then we can be convinced that we have found a causal relationship in the sample we're looking at, but that does not mean it has strong external validity, i.e., that it generalizes to the population that was not part of the experiment.

Epsilon

- the error term of the whole population

Error Term

- the term associated with unmeasured factors in a regression model, typically denoted as epsilon (E). It comes to the rescue by giving us some wiggle room: it is what is left over after the variables have done their work in explaining variation in the dependent variable, and it includes everything we haven't measured in our model. It is not observable, which makes it a challenge to know whether an independent variable is endogenous or exogenous. It holds the things that contribute to the outcome we care about without us taking them into account. For example, if you were trying to determine whether the U.S. had a worse COVID response than Canada because of leadership, underlying health conditions would be in the error term.

T-statistic

- the test statistic used in a t-test. A high one of these indicates that the b1 we observe would be quite unlikely if the true b1 = 0. Calculate it by dividing the coefficient by its standard error: t = beta1-hat / standard error of beta1-hat, i.e., the number of standard errors your estimate is from zero. Estimates that fall outside roughly two standard errors of zero lead us to reject the null.
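A tiny sketch of the formula on this card (the numbers are invented): t equals beta1-hat divided by its standard error.

```python
# Hypothetical values for an estimated coefficient and its standard error.
beta1_hat = 0.45
se_beta1 = 0.20

t_stat = beta1_hat / se_beta1   # number of standard errors the estimate is from zero
print(t_stat)                   # 2.25 -> more than ~2, so we would reject beta1 = 0
```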

Counterfactual

- to determine causality, contrast what you did with a world where you didn't do it. Comes up in politics all the time: if Hillary had won, this is how things would be different. Dems say look how bad things are when they could have been better; Repubs say this is really good, imagine how bad things would be if she had been elected president. Over COVID, Dems say 170,000 deaths could have been prevented by better actions by Trump; Repubs say look at all the things Trump did to keep the deaths from being higher. But the truth is we don't really know. Dems can base their arguments on results from other countries or how fast Trump reacted. We can never observe this, but we can approximate it: imagine a world where everything was the same except for one thing. It is important to social science because we need to understand that there are many causal factors; when comparing our COVID response to Canada's, maybe it's not just leadership, because the US could have an older population with more obesity and heart disease.

Categorical variable region

A region variable coded 1, 2, 3, 4 - how would we handle that in a regression? We would make a dummy variable for each region to get the effect of each region. One of the groups always has to be omitted, but it doesn't matter which one; you will get the exact same results (see the sketch below).
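A minimal sketch of the dummy-for-each-region idea (the data frame and column names are hypothetical); one region is automatically omitted as the reference group.

```python
import pandas as pd

df = pd.DataFrame({"region": [1, 2, 3, 4, 2, 3, 1, 4]})
# One dummy per region, dropping the first so region 1 is the omitted reference group.
dummies = pd.get_dummies(df["region"], prefix="region", drop_first=True)
print(dummies.head())   # columns region_2, region_3, region_4
```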

Instrumental variable

Part of 2SLS. It explains the endogenous independent variable of interest but does not directly explain the dependent variable. When we use these, we focus on changes in y due to the changes in x that are attributable to changes in the instrument. Major challenges: it is often hard to find an appropriate one that is exogenous, and estimates based on these are often imprecise. It is difficult to find an instrument that definitely satisfies the exclusion condition. The idea is to strip our x variable down to just the variation that is exogenous.

Example: asthma = b0 + b1*csection + e. Lots of things in the error term affect asthma (health, prenatal care, genetics, pregnancy factors, income, race, etc.). Can we predict whether a mother gives birth via C-section? Look for something that predicts C-sections but does not work through the error term. One example is whether the mother knows her doctor. Look at children who had a C-section due to what is effectively random chance - the variation in x predicted by the instrument - and call this the exogenous variation in x. The instrument is "mom knows doctor," and it is used to shave off the variation in C-sections that we then use to predict asthma. In the first stage, everyone who didn't know the doctor gets a fitted value of a0 and everyone who did know the doctor gets a0 + a1. Equation 2: asthma = b0 + b1*csection-hat + e. b1 gives the causal estimate of getting asthma due to a C-section - an average effect. It is not as precisely estimated, but it gives us a causal estimate.

Another example: earnings = b0 + b1*education + e, with quarter of birth as the instrument. Stage 1: predict education based solely on quarter of birth: education = q0 + q1*quarterofbirth. Stage 2: earnings = b0 + b1*education-hat + e, using education-hat as the new x variable - the education we think they have based solely on the quarter they were born in. We are shaving off some of the exogenous variation in x.

An instrument must be correlated with x and correlated with y only through x. It produces imprecise estimates; if we think our OLS estimate is biased, we can trade off some precision for less bias. We use these because the x variable has a ton of variation - the idea is shaving off the bit of x that is not tangled up with the error term and seeing how it affects things. In the earnings example, the instrument must be correlated with earnings only through education; dad's education is not a great instrument because it is in the error term and it is not your own education. Why they work: it is all about finding variation in x that we are confident is not related to anything in the error term. It's a great strategy, but it is very hard to find a good instrument because it is hard to make the case that something is exogenous. Less precise than OLS, all else equal. Weak ones (i.e., instruments that are only weakly correlated with x) might actually make our estimates worse than not using one at all.

There isn't a great way to test for bias, usually. All along we have been talking about endogeneity; the problem with endogeneity is that it produces a biased estimate of the causal effect we want to measure, and there is no real way to test whether our estimates are biased or not - we have to rely on what we know about what could be in the error term. The one possible exception discussed in class is the case of instrumental variables. If you are confident you have a good instrument, you can test whether the ordinary least squares estimate (the normal multivariate regression we do without instrumental variables) is biased by comparing the coefficient you get from OLS with the coefficient you get from instrumental variables. If they are quite different, you will know that the OLS estimate was biased. Again, this works only if you are confident that you have a good instrument.

Uses multivariate regression but is able to isolate just the exogenous variation in our independent variable to identify causal impacts. Relies on strong assumptions about the validity of the instrument: it must be meaningfully correlated with our endogenous independent variable and otherwise uncorrelated with our outcome. When we find a good instrument, endogeneity is dead; the problem is we never know whether the instrument is really good, because we can't test the last assumption.

Why they create less precise estimates: they use only some of the variation in our x variable, and with less variation we have less ability to make precise estimates of the causal effect. Example: does time spent on social media predict whether you vote? vote = b0 + b1*socialmedia + e. We could run that regression and there would be a lot of variation in our social media variable - some people spend 10 hours per day, others 10 minutes per day, and everywhere in between. That would allow us to produce a precise estimate of b1, but it would be biased because social media is endogenous. If we found a great instrument that predicted social media use but not whether you vote, we could use it. Suppose owning an iPhone is an instrument that works. In the first stage, we predict socialmedia = d0 + d1*iphone + u. When we predict social media-hat, it is only going to take on two possible values: people who don't own an iPhone get d0 and people who do get d0 + d1. Now we have very little variation in our social media-hat variable, since it only takes two values in the second-stage regression, voting = b0 + b1*socialmedia-hat + e. In this regression we'll get a less precise estimate of b1, but at least it won't be biased, assuming our instrument is actually valid.
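A hand-rolled two-stage least squares sketch of the iPhone / social media example above, with simulated data and the assumption that owning an iPhone is a valid instrument (a dedicated IV routine would also correct the second-stage standard errors).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
iphone = rng.integers(0, 2, n)                    # the instrument
social = 1 + 2 * iphone + rng.normal(size=n)      # endogenous x, partly driven by the instrument
vote = 0.2 + 0.1 * social + rng.normal(size=n)    # outcome

# Stage 1: predict social media use from the instrument alone.
stage1 = sm.OLS(social, sm.add_constant(iphone)).fit()
social_hat = stage1.fittedvalues                  # takes only two values: d0 and d0 + d1

# Stage 2: regress the outcome on the predicted (exogenous) part of x.
stage2 = sm.OLS(vote, sm.add_constant(social_hat)).fit()
print(stage2.params[1], stage2.bse[1])            # estimate of b1, with a larger SE than plain OLS
```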

Random selection

Also called a random sample; used to create a sample that resembles the population. Example: I am interested in the effect of taking AP classes on SAT scores. Take one of these of all high schoolers in the US and survey them, asking how many AP classes they took and what their score was; the central limit theorem says the sample average will be close to the true mean, so repeated samples would show roughly the same relationship between AP classes and scores. The sample you have for your study is randomly chosen from the population. It's the way we get external validity: it doesn't necessarily mean your estimates are going to be causal (internally valid), but they will be representative of the larger population thanks to the central limit theorem. Helps us with generalization.

Random sample

Also called random selection. Example: I am interested in the effect of taking AP classes on SAT scores. Take one of these of all high schoolers in the US and survey them, asking how many AP classes they took and what their score was; the central limit theorem says the sample average will be close to the true mean. The sample you have for your study is randomly chosen from the population. It's the way we get external validity: it doesn't necessarily mean your estimates are going to be causal (internally valid), but they will be representative of the larger population thanks to the central limit theorem. Helps us with generalization.

One Sided Alternative Hypothesis

An alternative to the null hypothesis that has a direction. We choose this hypothesis if theory suggests either b1 > 0 or b1 < 0.

Two Sided Alternative Hypothesis

An alternative to the null hypothesis that indicates that the coefficient is not equal to 0 or some other specified value. We reject the null hypothesis if the test statistic is greater in absolute value than the critical value; otherwise, we fail to reject the null hypothesis. We don't always know which way an effect goes, which often makes a test this type. We tend to use this in case there is a negative association.

Endogenous

An independent variable is this if changes in it are related to other factors that influence the dependent variable. Our central challenge is to avoid this and achieve exogeneity. We have this when the unmeasured stuff that makes up the error term is correlated with our independent variable, which makes it difficult to tell whether changes in the dependent variable are caused by our variable or by the stuff in the error term. It matters for the counterfactual because other factors can also differ in the counterfactual. The problem with endogeneity is that it produces a biased estimate of the causal effect we want to measure. We worry about it whenever we are trying to find the causal relationship between two things. We never know for sure that something has solved this; we can know that something has removed some sources of it, but not all.

Exogenous

An independent variable is this if changes in it are unrelated to the factors in the error term. Our central challenge is to avoid endogeneity and achieve this. If we succeed, we can be more confident that we have moved beyond correlation and closer to understanding whether x causes y - our fundamental goal. If we can find this kind of variation in x, we can use the data to make a reasonable inference about what will happen to y if we change x. The best way to fight endogeneity is to have this, and the best way to have this is to create it.

Dummy Variable

Equals either 0 or 1 for all observations; sometimes referred to as a dichotomous variable. The standard error on its coefficient tells us how precisely we have measured the difference between groups and allows us to conduct a hypothesis test and construct a confidence interval. To use these to control for categorical variables, we include one for every category except one. Their advantage is that they allow us to test whether one group is statistically different from another: the constant is the mean for the group where the dummy = 0, and the coefficient on the dummy indicates how much more or less the group for whom the dummy = 1 averages. Running a regression with these variables is the same as conducting a t-test, but it allows you to control for other things too. When the outcome is a dummy, the coefficients are interpreted as percentage-point increases or decreases in the likelihood of observing the outcome, and the regression is called a linear probability model; it's sort of okay (there are better models for binary dependent variables, but it's usually enough). Dummies allow you to categorize your observations into mutually exclusive categories, which makes a bunch of things possible, including testing whether there are meaningful differences between groups.
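A small simulated sketch of the point that a regression on a single dummy reproduces a difference of means: the constant is the dummy = 0 group's mean and the coefficient is the gap between groups (all values invented).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
group = np.repeat([0, 1], 50)                 # dummy: 0 for one group, 1 for the other
y = 10 + 3 * group + rng.normal(size=100)     # group 1 averages about 3 units higher

res = sm.OLS(y, sm.add_constant(group)).fit()
print(res.params[0])   # ~10: mean of the dummy = 0 group (the constant)
print(res.params[1])   # ~3: how much more the dummy = 1 group averages
```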

Intent to treat estimate

ITT. This is the effect of being offered the treatment, not the effect of the treatment itself - e.g., what is the effect of having WIA made available to you (instead of what is the effect of participating in WIA's programs)? It addresses the potential endogeneity that arises in experiments with non-compliance: compare the means of those assigned the treatment and those not assigned, irrespective of whether each subject actually received the treatment. When there is non-compliance, it will underestimate the treatment effect - the estimates will be smaller in magnitude than the treatment effect, and the more numerous the non-compliers, the closer to zero the ITT estimates will be.
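A simulated sketch of the ITT logic (all numbers invented): compare mean outcomes by assignment, ignoring compliance; with only 60% take-up the ITT estimate sits well below the true treatment effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
assigned = rng.integers(0, 2, n)                      # random assignment
complied = assigned * (rng.random(n) < 0.6)           # only 60% of the assigned actually comply
outcome = 5 + 2 * complied + rng.normal(size=n)       # true treatment effect = 2

itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
print(itt)                                            # roughly 2 * 0.6 = 1.2, smaller than 2
```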

Substantive significance

If a reasonable change in the independent variable is associated with a meaningful change in the dependent variable, the effect is substantively significant. Some statistically significant effects are not substantively significant, especially in large data sets; a coefficient can be statistically significant without being substantively significant.

Control group

In an experiment, the group that does not receive the treatment of interest

Critical value

In hypothesis testing, a value above which a beta hat would be so unlikely that we reject the null. For t tests, depends on the t-distribution and identifies the point at which we decide the observed beta is unlikely enough under the null hypothesis to justify rejecting the null hypothesis

LSDV Approach

Least squares dummy variable. An approach to estimating fixed effects models in the analysis of panel data. There are two ways to produce identical fixed effects coefficient estimates for the model: In this approach, include dummy variables for each unit except an excluded reference category.

Compliance

One of the problems with RCTs: what if people who are assigned to treatment don't really want to do what's part of the treatment? Some people who are assigned to treatment might choose not to participate, and we can't just compare those who participated with the comparison group. Not everyone is going to do it, and we can't force people, though. The same mix of more and less motivated people exists in both the control and treatment groups, and we can use the intent-to-treat estimate to help with this. It is the condition of subjects actually receiving the experimental treatment to which they were assigned. 2SLS is useful for analyzing experiments when there is imperfect compliance with the experimental assignment. If you are conducting an experiment, would you rather have an attrition problem or a compliance problem? Rather have this one, because you can use the ITT and say the offer is part of the treatment; attrition is harder because you never really know whether you have a biased answer.

Balance

One of the problems with RCTs: how do we make sure people in the treatment group look just like people in the control group? Chance differences aren't too big a concern as long as we are convinced they were truly random. Test for balance by examining all available pre-intervention characteristics; a balance table before the experiment can be used to check for this.

One solution is blocking: we sort people into groups beforehand and then randomly assign within groups, as a way of making sure our treatment and control groups look the same. For example, to make sure we get good balance on education, we first sort people into education categories (college educated and less than college educated) and then randomly assign within these groups. Let's say we have 100 people we are randomizing, and 42 of them are college educated and 58 are not. We would randomly assign half of the college-educated people to treatment and half to control, and then randomly assign the less-than-college-educated people in the same way. We will have exactly the same proportion of college-educated people in the treatment group as in the control group. If we didn't make any attempt at balancing, we would run the risk of having a greater share of educated people in the treatment group or in the control group (see the sketch below).

Treatment and control groups are balanced if the distributions of control variables are the same across the treatment and control groups. A way to check for this is to conduct a difference-of-means test for all available independent variables. When we assess the effect of a treatment, it's a good idea to control for imbalanced variables.
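A minimal sketch of the blocking procedure described above, using the card's 42 college-educated / 58 non-college split (simulated assignment only).

```python
import numpy as np

rng = np.random.default_rng(12)
college = np.array([1] * 42 + [0] * 58)      # education blocks
treatment = np.zeros(100, dtype=int)

for block in (0, 1):
    idx = np.where(college == block)[0]
    treated_idx = rng.permutation(idx)[: len(idx) // 2]   # randomly treat half of each block
    treatment[treated_idx] = 1

# Same share treated in each education block, so the groups are balanced on education.
print(treatment[college == 1].mean(), treatment[college == 0].mean())
```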

Attrition

One of the problems with RCTs: what do we do if people drop out? It occurs when people drop out of an experiment so that we don't observe the dependent variable for them. A trimmed data set can be used to address it. Non-random attrition can cause endogeneity even when the treatment was randomly assigned. We can detect problematic attrition by looking for differences in attrition rates across treated and control groups. It can be addressed by using multivariate OLS, trimmed data sets, or selection models. Sometimes people drop out of the study and you can't observe their outcomes; this is usually not random, so you could be missing observations that are related to outcomes in important ways. Solution: no perfect solutions exist. There are methods for dealing with missing data, but all of them rely on assumptions that could lead to biased estimates. Incentives and aggressively getting participants to commit early can help offset this.

Randomized Control Trials

The gold standard, because using random assignment for the treatment gets around the fundamental problem of endogeneity of treatment. It breaks the link between the treatment and the error term because you are deciding who gets treated and making it random. But sometimes the treatment cannot be randomized (e.g., physical traits), sometimes the treatment should not be randomized (e.g., education levels, parents' love), and sometimes the treatment is otherwise difficult to pull off because of political or practical constraints (it might not be feasible to withhold a desirable treatment from some people). Do them if you can, because they give you greater power than other designs such as RD; they are very powerful and a little more flexible than an RD. If we are able to randomly assign people to the treatment or the control condition, we can be confident we have broken the relationship between the treatment and the error term - for example, people randomly assigned to a neighborhood rather than sorting into it for socioeconomic reasons. This means the stuff in the error term is no longer related to treatment. Helps to eliminate endogeneity and make causal claims, but doesn't help with external validity, because the result applies only to the specific situation where people were randomly assigned.

Bias

There isn't a great way to test for this, usually. All along we have been talking about endogeneity; the problem with endogeneity is that it produces a biased estimate of the causal effect we want to measure. There's no real way to test whether our estimates are biased or not - we have to rely on what we know about what could be in the error term. The one possible exception discussed in class is the case of instrumental variables. If you are confident you have a good instrument, you can test whether the ordinary least squares estimate (the normal multivariate regression we do without instrumental variables) is biased by comparing the coefficient you get from OLS with the coefficient you get from instrumental variables. If they are quite different, you will know that the OLS estimate was biased. Again, this is only if you are confident that you have a good instrument.

Pooled Model

Treats all observations as independent observations. These estimates are biased only when fixed effects are correlated with the independent variable. It simply means we're not trying to do anything tricky: we're treating every observation as a separate observation unrelated to any other and throwing them all together to see whether the observations with oncycle are different from the observations without oncycle.

Modeled Randomness

Variation attributable to inherent variation in the data generation process. This source of randomness exists even when we observe data for an entire population.

Sampling Randomness

Variation in estimates that arises because we observe only a subset of the entire population. If a given sample had a different selection of people, we would have a different estimated coefficient.

Power calculations

We never know ahead of time whether we will get statistically significant findings; unfortunately, that's part of the adventure - like treasure hunting. We can increase our chances of detecting a true beta that is different from zero by increasing our sample size. The exact amount we need to increase the sample size depends on a lot of factors, including how big we think the true beta is. Other things being equal, if we think the true beta is quite large, we don't have to increase the sample size as much as we would if we thought the true beta were pretty small (see the simulation sketch below).
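A rough simulation sketch of a power calculation (the guessed true beta, noise level, and sample sizes are all assumptions): for each sample size, estimate how often a study of that size would reject the null.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

def power(n, true_beta, reps=500):
    rejections = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = 1.0 + true_beta * x + rng.normal(size=n)
        res = sm.OLS(y, sm.add_constant(x)).fit()
        rejections += res.pvalues[1] < 0.05
    return rejections / reps

for n in (50, 200, 800):
    print(n, power(n, true_beta=0.1))   # the rejection rate (power) rises with sample size
```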

Normal distribution

a bell-shaped probability density that characterizes the probability of observing outcomes for a normally distributed random variable

T distribution

a distribution that looks like a normal distribution, but with fatter tails. The exact shape of the distribution depends on the degrees of freedom. This distribution converges to a normal distribution for large sample sizes.

Probability density

a graph or formula that describes the relative probability that a random variable is near a specified value

Probability Distribution

a graph or formula that gives the probability for each possible value of a random variable.

Null hypothesis

a hypothesis of no effect. We reject this when the statistical evidence is inconsistent with it; a coefficient estimate is statistically significant if we reject this hypothesis that the coefficient is zero. We fail to reject this when the statistical evidence is consistent with it. Example: height has no effect on wage. We mistakenly reject this 5% of the time because we use the 95% confidence level. We are going to reject it if our estimate is two standard errors away from 0. We test against this, not against the true beta; rejecting it just gives you a lot of evidence that the true beta is not 0. It means no association - we are just dealing with whether the coefficient is zero or not.

T-test

a hypothesis test for hypotheses about a normal random variable with an estimated standard error; the most common test we use for hypothesis testing. The quick rule: if the absolute value of the t-statistic is bigger than 2, reject the null hypothesis; if not, don't. The steps for using it: 1. Choose a one-sided or two-sided alternative hypothesis. 2. Set the significance level at .05. 3. Find a critical value based on the t-distribution; this value depends on the significance level, whether the alternative hypothesis is one-sided or two-sided, and the degrees of freedom (sample size minus the number of parameters estimated). 4. Use OLS to estimate the parameters, then compare the t-statistic to the critical value (see the sketch below).
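A short sketch of the steps above with invented numbers, using the t-distribution to get a two-sided critical value.

```python
from scipy import stats

beta1_hat, se_beta1 = 0.45, 0.20      # hypothetical OLS output
n, k = 30, 2                          # sample size and number of parameters estimated
dof = n - k                           # degrees of freedom

t_stat = beta1_hat / se_beta1
crit = stats.t.ppf(1 - 0.05 / 2, df=dof)      # two-sided critical value at the .05 level
print(t_stat, crit, abs(t_stat) > crit)       # reject the null if |t| exceeds the critical value
```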

Type II error

a hypothesis testing error that occurs when we fail to reject a null hypothesis that is in fact false. It happens because it is possible to observe values of beta1-hat that are less than the critical value even when beta1 (the true value of the parameter) is greater than zero; this is more likely when the standard error of b1 is high. A false negative. These are often more troublesome, and in practice we tolerate them far more often than the 5% rate we allow for false positives - sometimes something like 50% of the time. An example would be an estimate that sits only around one standard error from zero: the large shaded area of the distribution inside the critical value. Type II error rates are often bigger than Type I error rates. It occurs when the true beta is not zero and you fail to reject the null hypothesis (you mistakenly think you don't have enough evidence to rule out that it is zero). The problem is that we rarely know what type of error we are committing, because we never really know whether the true beta is zero or not; if the true beta is not zero, we don't know how often we're going to get it wrong. The probability of this error depends on the size of the true beta and the standard error. Draw the distribution centered on the true state of the world; everything to the left of the critical value - the area closer to zero, inside the critical value - is the chance of making this type of error, so we can make a reasonable guess at that percentage. The true beta is not zero when you make this type of error.

Variance

a measure of how much a random variable varies. The factors that influence the variance of beta1-hat: 1. model fit - the variance of the regression, sigma-squared, is a measure of how well the model explains variation in y; 2. the sample size; 3. the variance in x - the more x varies, the lower the variance of beta1-hat will be.

Dummy variable approach

a method of doing fixed effects: put a dummy variable in for each group in our data. One group must be omitted as the comparison group, and it makes up the intercept.

Fixed Effect

a model that controls for unit-specific effects, captured by differences in the dependent variable associated with each unit. It allows us to control for any factor that is fixed within a unit for the entire panel, regardless of whether we observe that factor. There are two ways to produce identical fixed-effects coefficient estimates: in the LSDV approach, include dummy variables for each unit except an excluded reference category; in the demeaned approach, transform the data so that the dependent and independent variables indicate deviations from the unit mean. We cannot estimate coefficients on variables that do not vary within at least some units; those effects are not estimated separately, as they are subsumed within the unit-specific fixed effect. Uses dummy variables.

It is really just a special way of accounting for things in the error term that we can't easily measure but that are shared in common by members of a group. One of the ways we can fight endogeneity in our x variable is to control for things that are in the error term; and even if it doesn't solve endogeneity, the more stuff we can pull out of the error term, the more we improve the precision of the beta we care about, because the model does a better job of predicting our outcome - so in both cases we are better off. Suppose we're interested in whether taking AP courses helps you do better in college: the outcome variable would be something like college GPA, and we would see whether the number of AP courses taken is statistically related to it. However, there could be endogenous stuff related to the number of AP classes taken; pulling that stuff out improves the causal estimate of b1. That stuff is hard to measure, which is why this approach comes in.

Limitations: 1. the data must include multiple observations from the same group - if we are comparing kids from the same high school, we can't have just one kid from a given school; 2. the x variable we care about must vary within groups - if everyone took 4 AP courses, we can't make the comparison. When we use them: 1. we have multiple observations per group; 2. there is variation within the group on the variable we care about; 3. all members of the group share the same factor in the error term. The approach focuses on differences within a group. It solves endogeneity if everything in the error term that is correlated with, say, police is captured by the group and constant over time - nothing has changed that would affect both the number of police and crime.

Advantages and disadvantages: if you try to predict a math score based on observable stuff, you might explain about 5% of the variance; with these you might get 7.5%, because they do such a great job of soaking up other stuff. They get rid of differences between groups and focus only on within-group differences. They are easy to do because you don't have to construct a bunch of separate control variables; it is just one type of control variable - a variable that captures everything a group shares in common and lumps it all together in a black-box estimate. For example, city fixed effects lump together all of the stuff people in each city share in common (the political atmosphere, city leadership, laws and regulations, historical context, etc.) and control for it all at once. It is a type of multivariate regression. Examples of groups that could be used: a school or a city - multiple observations per group and lots of stuff in common.

We don't want to include a group that will hurt our estimate of beta: if a group doesn't have a lot in common, it doesn't help us. What does it mean that you can't estimate the effect of things that don't vary within groups - isn't that the point of these? Including these takes care of what people have in common, but we don't want them to share the x variable we care about: if we are measuring education, we want the siblings in a family to have different levels of education. It is a technique for pulling common factors out of the error term to reduce endogeneity of the variable we care about (i.e., reduce bias in the estimated beta) and to improve the precision of the estimated beta we care about. These use the fact that people within a group share many factors in the error term in common; when we include them, we are pulling those common factors out of the error term. The estimated betas for them (i.e., the coefficients on the dummy variables we include in our regression to indicate group membership) lump together the total effect of everything that group shares in common. For example, suppose you had data from people living in VA, Maryland, and WV, and you ran the regression earnings = b0 + b1*education + b2*virginia + b3*WV + e. The coefficient b2 would represent how much more people from VA earn compared with people in Maryland (our omitted group). Why do they earn that much more? Everything Virginians share in common - labor markets, cost of living, tax rates, etc.

How we use dummy variables for these: the basic idea of the dummy-variable method is that we include a dummy variable for every group in our sample. For example, if our sample includes people from VA, Maryland, and WV, we would have three dummy variables: VA equals 1 for everyone from VA and 0 for everyone else, and so on for the other two states. Our regression, however, would only include two of these, because one group has to be omitted. If we include those dummies in our regression, the coefficients we get are the estimated fixed effects. The approach focuses on controlling for things within groups that don't change over time (like state, family, etc.). It is just a special application of multivariate regression where we account for all the stuff shared within the group without actually having to measure that stuff. It can be used when we have repeated observations within the same group - panel data with one observation per person per year for several years, or grouped data where we have multiple observations per group (e.g., lots of UVA students). The estimate relies on within-group variation; e.g., to ask whether participation in sports affects achievement, find a group that shares the same stuff but still has variation within it.

True/false: this model always increases precision and decreases external validity. It usually increases precision by taking things out of the error term, but it might not if there is collinearity - if we have two variables that are very similar. It does not decrease external validity: when looking at crime and police, taking out city-specific stuff can make the estimate more generalizable to the entire population (see the sketch below).
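A simulated LSDV-style sketch of the VA / Maryland / WV example above: state dummies (with Maryland omitted) pull everything each state shares in common out of the error term. All data and effect sizes are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
state = rng.choice(["VA", "MD", "WV"], size=n)
education = rng.normal(14, 2, size=n)
# Everything a state's residents share in common, lumped into one number per state.
state_effect = np.where(state == "VA", 5.0, np.where(state == "WV", -3.0, 0.0))
earnings = 20 + 2 * education + state_effect + rng.normal(size=n)

X = pd.concat(
    [pd.DataFrame({"education": education}),
     pd.get_dummies(state, drop_first=True)],   # dummies for VA and WV; MD is the omitted group
    axis=1,
).astype(float)
res = sm.OLS(earnings, sm.add_constant(X)).fit()
print(res.params)   # education effect plus the VA and WV fixed effects relative to Maryland
```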

Difference in Differences

a model that looks at changes in treated units compared to changes in untreated units. A basic estimator for this is the change in the dependent variable for the treated unit minus the change in the dependent variable for the control unit. The unit fixed effects in this model capture differences between units that existed before treatment, and the time effects capture differences common to all units in each time period. It is just a specific context in which interaction variables are used; it does not use the demeaned approach. It controls for both time (your pre/post variable) and group (treated vs. untreated). How you analyze the coefficients: approach the interpretation the same way we interpreted interaction terms - figure out what each variable in the equation represents by imagining the other variables were 0 (i.e., if the interaction term and the treatment variable were 0, under what conditions would that occur, and what would the pre/post variable be indicating?). It can be used when we have pre and post data on a group of observations that received a treatment and a group that did not. It is just a special application of fixed effects (which is a special application of multivariate regression) that exploits the difference in changes over time. For example, use it to determine the difference in gratefulness between Batten students who did and didn't attend Franksgiving, in the pre and post periods. Control for time so that you can measure the treatment: use people who did and did not receive a treatment and look at the pre and post periods for both groups to see the effect of the treatment (see the sketch below).
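A simulated difference-in-differences sketch (groups, periods, and the true effect are all invented): the coefficient on the treated x post interaction is the diff-in-diff estimate.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
effect = 2.0                                            # assumed true treatment effect
y = 10 + 1.5 * treated + 0.5 * post + effect * treated * post + rng.normal(size=n)

X = sm.add_constant(np.column_stack([treated, post, treated * post]))
res = sm.OLS(y, X).fit()
print(res.params[3])   # the interaction coefficient: the diff-in-diff estimate (~2)
```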

Jitter

a process used in scatterplotting data: a small random number is added to each observation for purposes of plotting only. This produces cloudlike images that overlap less than the unjittered data, providing a better sense of the data. You use it to make scatter plots a little more readable. The problem arises when you have a lot of data points that all fall on exactly the same point on a scatter plot; it's hard to see how many are actually at that point, so you "jitter" the observations a little bit. It basically draws them in a little cluster around the point rather than putting them all on top of each other - think of it like spreading out your cards instead of keeping them in a stacked deck.
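A tiny sketch of jittering (the amount of noise is arbitrary): add a small random number to each x value for plotting only.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.repeat([1, 2, 3], 50)                          # many points stack on the same values
x_jittered = x + rng.uniform(-0.1, 0.1, size=x.size)  # small random offset, for plotting only
# Plot x_jittered (not x) against y; the data used in the analysis stay unchanged.
```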

Generalizable

a statistical result is this if it applies to populations beyond the sample in the analysis.

Difference of means test

a test that involves comparing the mean of y for one group (e.g., the treatment group) against the mean of y for another group (e.g., the control group); it asks whether the average value of the dependent variable differs between the two groups. It can be done by regressing y on a dummy independent variable.

Ordinal variable

a variable that expresses rank but not necessarily relative size

Selection model

a way to address attrition. Simultaneously accounts for whether we observe the dependent variable and what the dependent variable is

Multivariate OLS

Allows us to control for multiple independent variables at once. The way we interpret a regression like this is slightly different from how we interpret bivariate OLS regressions: we still say that a one-unit increase in x is associated with an increase in y, but now we add "holding constant the other factors in the model." For example, "controlling for the December shopping boost, increases in temperature are associated with more shopping." It fights endogeneity by pulling variables from the error term into the estimated equation. As with bivariate OLS, the estimation process selects the betas in a way that minimizes the sum of squared residuals. Including a dummy in this kind of model allows us to conduct a difference-of-means test while controlling for other factors, and with an interaction the fitted values will be two lines.

It is all about taking things out of the error term and accounting for them: if we take something out and control for it, we can reduce endogeneity. Why we run this: 1. to potentially reduce endogeneity - if there are other factors in the error term that influence our outcome and we can account for them, we can make our x variable of interest exogenous; 2. to improve the precision of our estimates - if we include more stuff that accounts for variation in the outcome variable, we can be more precise in our estimate of the beta we care about (think of the standard-error formula). There is usually one coefficient we care about, and we want it to be as precise as possible. Think of the formula for the variance of beta-hat: as we include more stuff, we predict y better and the numerator - the variance of the regression, a measure of how much the error term varies - shrinks. A lot of things can cause you to get asthma, for example; including them helps predict the outcome better, so you get a more precise estimate. There is still value in including stuff that is not correlated with x for this reason: it doesn't change the bias, but it makes the estimate more precise. Other variables allow us to improve the precision of b1 and can help us get unbiased estimates. There really isn't much downside to including more stuff: even if something isn't in the error term and you include it, it won't mess up your estimates, though with a very small sample size it can hurt your precision. The bigger reason for care is that you have to think carefully about exactly what you are going to estimate.

Interpret it in the same way as univariate regression; the only difference is that when interpreting a coefficient, we are looking at the effect of that variable holding constant the other things included in the regression. Some cautions: these regressions are mostly about one independent variable - the others are included to improve our estimate of that one - and always keep the interpretation clear: the beta we care about is the effect holding constant the other things in the model. Fixed effects is a type of this. It is used to try to account for stuff in the error term that could be correlated with our variable of interest. We can interpret the estimates as causal only if we have included everything from the error term that is correlated with the variable of interest, and it's often difficult to convince ourselves we have accounted for everything, because some of that everything is probably unmeasurable.
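A simulated sketch of the shopping example above: regress shopping on temperature and a December dummy, and read each coefficient as the effect holding the other variable constant (all numbers invented).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 500
temperature = rng.normal(15, 10, n)
december = rng.integers(0, 2, n)
shopping = 100 + 1.2 * temperature + 30 * december + rng.normal(scale=5, size=n)

X = sm.add_constant(np.column_stack([temperature, december]))
res = sm.OLS(shopping, X).fit()
print(res.params)   # [intercept, temperature effect, December boost], each holding the other constant
```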

Alternative hypothesis

an alternative to the null hypothesis that indicates the coefficient is not equal to zero (or some other specified value). It is what we accept if we reject the null. Example: height does have an effect on wage. We are just dealing with whether the coefficient is zero or not.

Control Variable

an independent variable included in a statistical model to control for some factor that is not the primary factor of interest. Fixed effects are just one type of these: a variable that captures everything a group shares in common and lumps it all together in a black-box estimate. For example, city fixed effects lump together all of the stuff people in each city share in common (the political atmosphere, city leadership, laws and regulations, historical context, etc.) and control for it all at once.

Outlier

an observation that is extremely different from the rest of the sample. When sample sizes are small, a single one of these can exert considerable influence on coefficient estimates. When a single observation substantially influences coefficient estimates, we should inform readers of the issue, report results with and without the influential observation, and justify including or excluding it.

Demeaned Approach

An approach to estimating fixed effects that involves subtracting average values within units for all variables. Of the two ways to produce identical fixed-effects coefficient estimates, in this approach we transform the data so that the dependent and independent variables indicate deviations from the unit mean. For example, create a new independent variable (police officers) equal to the current year's police force minus that city's average police force, and a new dependent variable (crime) equal to the current year's crime minus that city's average crime. Demeaning for fixed effects measures, within the group, how different each observation is from the group's usual values - how far it is from the average. This approach allows each city to have its own intercept, kind of like running a regression for each city. The key is that you convert all your variables so they reflect the amount above or below the mean for that group.

Suppose you have 5 months of data on every person's produce and prepared-food expenditures. Look at one person: June - $50 produce, $75 prepared; July - $75, $75; August - $100, $30; September - $90, $80; October - $50, $50. Demeaning takes the values for every person and transforms them into how different they are from that person's average expenditure in each category. The first task is to find the mean of each variable: this person's means are $73 (produce) and $62 (prepared). Then each monthly expenditure is translated into its difference from the mean, so the demeaned values are: June - -23, 13; July - 2, 13; August - 27, -32; September - 17, 18; October - -23, -12. The interpretation of the first month is that in June, person 1 spent $23 below their average produce expenditure but $13 above their average prepared-food expenditure. Each of these new demeaned variables averages to zero. The advantage of this approach is that person 1 and person 2 might have very different incomes, so person 2's expenditures might always be higher than person 1's; we want to get rid of that overall difference in income and just focus on the relative expenditures on produce and prepared food.

We take the average for each group and then subtract each individual's outcomes and x variables from their group average. This results in variables that measure how far away each person's values are from the average of their group. For example, if we ran finalexamscore = b0 + b1*hoursstudying + e and wanted to include fixed effects for which section you are in, we would change each person's final exam score to the difference between their score and the average final exam score in their section, and we would change hours studying to the difference between how much they studied and how much the average person in their section studied. Say someone scored an 80 on the final exam and spent 100 hours studying, while their section had an average exam score of 85 and an average of 200 hours of studying. The demeaned final exam score would be -5 (their score was 5 points below their section's average) and demeaned hours studying would be -100 (they studied 100 hours less than their section's average) (see the sketch below).
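A minimal sketch of the demeaning arithmetic for the five months above (September's prepared-food value is taken as $80 so the averages match the $73 and $62 in the card).

```python
import numpy as np

produce = np.array([50, 75, 100, 90, 50])    # June ... October
prepared = np.array([75, 75, 30, 80, 50])

produce_demeaned = produce - produce.mean()      # mean 73 -> [-23, 2, 27, 17, -23]
prepared_demeaned = prepared - prepared.mean()   # mean 62 -> [ 13, 13, -32, 18, -12]
print(produce_demeaned, prepared_demeaned)       # each demeaned column averages to zero
```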

Replication Standard

at the heart of scientific knowledge. Research that meets this can be duplicated based on the information provided at the time of publication

Omitted Variable Bias

bias that results from leaving out a variable that affects the dependent variable and is correlated with the independent variable

Mean centered

Many graphs are centered on zero; this is done to show how far away from the mean you are - zero is the mean.

Power curve

characterizes the probability of rejecting the null for each possible value of the parameter

Confidence interval

Defines the range of values that are consistent with the observed coefficient estimate; it depends on the point estimate and a measure of uncertainty. These are closely related to hypothesis tests: because they tell us the range of possible values consistent with what we've seen, we simply need to note whether the interval around our estimate includes zero. If it does not, zero is not a value that would likely produce the data and estimate we observe, and we can therefore reject the null. They do more than hypothesis tests because they provide information on the likely location of the true value. If the interval is mostly positive but just barely covers zero, we would fail to reject the null hypothesis but would also recognize that the evidence suggests the true value is likely positive. If the interval does not cover zero but is restricted to a region of substantively unimpressive values of b1, we can conclude that while the coefficient is statistically different from zero, the true value is unlikely to be substantively important. The lower bound of a 95% interval is a value of beta such that there is less than a 2.5% probability of observing a beta-hat as high as the one actually observed; the upper bound is a value of beta such that there is less than a 2.5% probability of observing a beta-hat as low as the one actually observed. It tells us the range of values of b1 that are most likely given your estimate of b1 - the two-standard-errors rule. For example, if the true beta is zero and the standard error is .1, then 95% of estimates will fall between -.2 and .2. We mistakenly reject the null hypothesis 5% of the time using a 95% one of these. Find the 95% interval by going about two standard errors to each side of the estimate (see the sketch below).
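A two-line sketch of the two-standard-errors rule with invented numbers: the 95% interval is roughly the estimate plus or minus 1.96 standard errors.

```python
beta1_hat, se_beta1 = 0.45, 0.20    # hypothetical estimate and standard error
lower = beta1_hat - 1.96 * se_beta1
upper = beta1_hat + 1.96 * se_beta1
print(lower, upper)   # (0.058, 0.842): zero is outside the interval, so reject the null at the 5% level
```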

Standard deviation

describes the spread of the data: how spread out individual data points are from the mean. It describes the distribution within one sample.

Assignment variable

Determines whether someone receives the treatment: people with values of this above some cutoff receive the treatment; people with values below the cutoff do not. The less people know about the cutoff, the better, because otherwise they can manipulate where they fall around the line. Also called the forcing variable; it lets us see how close each observation was to the cutoff. The basic idea of a bad one: with something like hours of sleep, there is no point where you can clearly say "this is exactly when you start to receive the treatment," and people can easily manipulate how many hours of sleep they get - some will choose to sleep slightly less. If there were a treatment available at exactly 8 hours, we would expect people interested in getting the treatment to get exactly 8 hours of sleep.

Random assignment

Doing this for the treatment gets around the fundamental problem of endogeneity of treatment; it breaks the link between the treatment and the error term because you are deciding who gets treated and making it random - which is why randomized control trials are the gold standard. Example: take 1,000 9th graders from Fairfax and randomly assign 500 to take AP courses and 500 not to. If the kids who took AP courses did better, you can be confident you have established a causal relationship, but it won't have external validity, because Fairfax County does not represent the whole population. It helps with internal validity and establishing a causal relationship. You take your sample (however you got it) and randomly decide who gets the treatment and who does not; it's one way to get internal validity. It doesn't necessarily mean your estimates are going to be representative (externally valid), but they are going to be causal. It is used to create groups that are similar to each other: it makes sure the groups are the same except for the one thing you choose, by randomly putting people into the control and treatment groups (see the sketch below).
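A minimal sketch of randomly assigning half of a sample to treatment (the sample size is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1000
treatment = np.zeros(n, dtype=int)
treatment[rng.permutation(n)[: n // 2]] = 1   # a randomly chosen half gets the treatment
print(treatment.mean())                       # 0.5: treatment status is unrelated to anything else
```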

Uniform distribution

equal opportunity to get any of the numbers in a sample. If we roll a die, we assume the probability of getting a certain number is ⅙. This is most useful for discrete random variables. Gateway to the central limit theorem.

Epsilon Hat

The residual: the error term messes up the dots on the scatter plot, and this is the difference between the actual value of the thing you are trying to measure (such as height) and what you predict it should be based on the variable you are using to predict it. It is the error term of the sample you take. The sum of all of these hats squared is the sum of squared residuals.

Exclusion condition

for 2SLS, the condition that the instrument affects the dependent variable only through the endogenous independent variable - the instrument must not belong in the main equation or be correlated with its error term. Unlike the inclusion condition, this condition cannot be directly tested; we have to argue for it.

Inclusion Condition

for 2SLS, the condition that the instrument exert a meaningful effect in the first-stage equation, in which the endogenous variable is the dependent variable: the instrumental variable must be related to the variable we are interested in. This condition can be tested.

Significance level

for each hypothesis test, we set this, which determines how unlikely a result has to be under the null hypothesis for us to reject the null hypothesis. It is the probability of committing a Type I error for a hypothesis test. Our level for this is .05 - the threshold we compare the p-value against.

Cutoff

For regression discontinuity, a threshold above or below which an intervention is assigned. There must be a strict one in terms of whether you get the treatment or not; that is one of the limitations of this approach. There can be imperfect compliance around it (e.g., some people below it don't get the treatment), but the treatment itself can't be a phase-out. We use it to predict who gets the treatment. Fuzzy designs: if this doesn't perfectly predict who gets the treatment, we call it a fuzzy discontinuity, and we can use the cutoff score as an instrumental variable that predicts the likelihood, albeit imperfectly, of receiving the treatment. There can be multiple cutoffs - treat them as separate. The treatment has to be all or nothing at this point. Any evidence of manipulation of scores around it can violate our assumptions about exogeneity of treatment. We can allow the slopes to be different on either side of it, and we look at both sides to see whether there is a big jump.

Forcing variable

For regression discontinuity, we use this to predict the relationship between it and the outcome and see whether there is a discontinuity at the point where people start getting the treatment. It is the variable that determines how the treatment is allocated. Is it always a dummy variable? No - it can't be a dummy variable; it must be continuous, because we need to be able to observe how close people were to the cutoff. For example, if the dean's list includes everyone above a 3.4 GPA, then this is the GPA: it determines whether people get the treatment (which in this case is being on the dean's list). It isn't a good one if people can easily manipulate where they fall on that continuum. For example, people can choose to sleep 7 hours and 59 minutes; if there were a treatment you received at exactly 8 hours of sleep, we would worry that the people who choose to get exactly 8 hours are systematically different from people who do not. We don't have that same worry about GPA, because it is really hard to land on exactly a 3.39 versus a 3.4.

Categorical variables

has two or more categories but does not have an intrinsic ordering. Also known as a nominal variable

Positive correlation

high values of one variable are associated with high values of the other

Goodness of fit

how well a model fits the data. If a model fits well, knowing x gives us a pretty good idea of what y will be. If the model fits poorly, knowing x doesn't give us a good idea of what y will be.

Fuzzy designs

if the cutoff doesn't perfectly predict who gets the treatment, we call it a fuzzy discontinuity. In that case, we can use the cutoff score as an instrumental variable that predicts the likelihood, albeit imperfectly, of receiving the treatment

Falsification Test

a check we run if we see something weird with the data

Treatment group

in an experiment, the group that receives the treatment of interest

Negative correlation

indicates that high values of one variable are associated with low values of the other

Collinearity

means that two or more of the variables in the model are highly correlated with each other. For example, perfect multicollinearity would mean that all the information contained in one variable is already contained in another variable in the model. It can make fixed-effects estimates less precise

Variance of the regression

measures how well the model explains variation in the dependent variable. (Roughly 95% of estimates will fall within two standard errors of the true beta.)

Correlation

measures the extent to which variables are linearly related to each other.

Univariate regression

Mostly used out of desperation, but helpful for finding the difference between two groups - e.g., do low-income students participate in after-school sports at lower rates than high-income students? If you don't have a strong causal study design, you cannot interpret the estimates as causal. Participation = b0 + b1*lowincome + e

Continuous variable

the logic of distributions extends to them. A variable that can take on any possible value over some range

Discontinuity

occurs when the graph of a line makes a sudden jump up or down

Interaction Variables

often dummy variables. We use one of these because we want to measure outcomes based on two variables: not only how they affected A, but what the difference is, adding up the effect that the subgroup is having. They show how different folks in a population experience the effects of a treatment. Do folks who are STEM majors have different outcomes from those who aren't STEM majors? They allow us to estimate effects that depend on more than one variable. B0 is always "turned on" in the table of fitted values for each group. Including a dummy interaction in multivariate OLS allows us to conduct a difference-of-means test while controlling for other factors in the model. The fitted values will be two lines. The coefficient indicates the difference in slopes between the two groups. Use these to answer important questions about the differential effects of programs or policies (see the sketch below).
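A hedged Python sketch (the STEM example, sizes, and effect numbers are all invented) of an interaction between a treatment dummy and a STEM-major dummy:

```python
# Hedged sketch: the treatment effect is allowed to differ for STEM and non-STEM students.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
treat = rng.integers(0, 2, n)
stem = rng.integers(0, 2, n)
# True effect: +2 for non-STEM students, +5 (= 2 + 3) for STEM students.
outcome = 10 + 2 * treat + 1 * stem + 3 * treat * stem + rng.normal(0, 2, n)

df = pd.DataFrame({"outcome": outcome, "treat": treat, "stem": stem})
fit = smf.ols("outcome ~ treat * stem", data=df).fit()   # expands to treat + stem + treat:stem
print(fit.params)
# The coefficient on treat:stem is the ADDITIONAL treatment effect for STEM majors,
# i.e., the difference in slopes (treatment effects) between the two groups.
```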

Trimmed data set

one for which observations are removed in a way that offsets the potential bias due to attrition. For example, paying kids who did not take the tests to come back and take the tests. If you are conducting an experiment, would you rather have an attrition problem or a compliance problem? Rather have compliance, because you can use the ITT and say that's part of the treatment. Attrition is harder because you never really know if you have a biased answer.

Randomness

one of the two fundamental challenges in statistical analysis; it should make us cautious. Any time we observe a relationship in data, we need to keep in mind that some coincidence could explain it. It can produce data that suggests the existence of a relationship between x and y even when there is none, or suggest that there is no relationship when there is one.

Point estimate

our best guess as to what the true value is

Regression discontinuity

RD techniques use regression analysis to identify possible discontinuities at the point at which some treatment applies. They use an assignment variable, which determines whether someone receives the treatment: people with values of it above some cutoff receive the treatment, and people with values below the cutoff do not. RD models require that the error term be continuous at the cutoff; the value of the error term must not jump up or down there. It identifies a causal effect of treatment because the assignment variable soaks up the correlation of error and treatment. When we conduct RD analysis, it is useful to allow for a more flexible relationship between the assignment variable and the outcome. The conditions that have to be met: whether or not you receive a treatment is determined by some score or ranking; the score has to be continuous around the cutoff and the cutoff has to be arbitrarily determined (i.e., not correlated with anything else); and we have outcomes for people above and below the cutoff. Does the cutoff for RD have to be strict? E.g., many welfare programs don't cut off sharply at the threshold but phase benefits out gradually; would this still work? The cutoff must be strict in terms of whether you get the treatment or not, which is one of the limitations of this approach. There can be imperfect compliance around the cutoff (e.g., some below the cutoff don't get it), but the treatment itself can't be a phase-out. Treatment must be assigned by some continuous forcing variable. Use the forcing variable to predict the relationship between the forcing variable and the outcome and see if there's a discontinuity where people started getting the treatment. Fuzzy designs: if the cutoff doesn't perfectly predict who gets the treatment, we call it a fuzzy discontinuity, and we can use the cutoff score as an instrumental variable that predicts the likelihood, albeit imperfectly, of receiving the treatment. Treatment has to be all or nothing at the cutoff. Limited external validity: the estimated effect on y applies to those around the cutoff. Any evidence of manipulation of scores around the cutoff can violate our assumptions about the exogeneity of treatment. It has less statistical power than an RCT (the effect will be less precisely estimated). It is often very sensitive to assumptions about the shape of the lines of best fit and about how far from the cutoff observations can be and still be included in our estimates. We can allow the slopes to be different on either side of the cutoff. Look at both sides to see if there is a big drop-off. The less people know about the cutoff the better, because otherwise they can manipulate where they fall around the line. It is powerful, but it can only be used under very specific circumstances: the treatment must have been allocated by some continuous measure, and you have that measure and the outcome for people on both sides of the cutoff. E.g., people who scored above a certain level on the SAT received a scholarship and people just below the cutoff did not; you have the SAT scores for both groups and you know whether they finished college within 5 years.
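A hedged sharp-RD sketch in Python, under made-up assumptions: a scholarship goes to everyone with an SAT score at or above a hypothetical cutoff of 1200, slopes are allowed to differ on either side, and the coefficient on the treatment dummy is the estimated jump at the cutoff:

```python
# Hedged sketch of a sharp regression discontinuity (all numbers invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
sat = rng.uniform(1000, 1400, n)                  # forcing / assignment variable
cutoff = 1200
treat = (sat >= cutoff).astype(int)               # sharp, all-or-nothing treatment
centered = sat - cutoff                           # center the forcing variable at the cutoff
outcome = 40 + 0.02 * centered + 5 * treat + rng.normal(0, 3, n)   # true jump of 5

df = pd.DataFrame({"outcome": outcome, "treat": treat, "centered": centered})
# treat * centered lets the slope differ on either side of the cutoff;
# the coefficient on treat is the estimated discontinuity at the cutoff.
rd = smf.ols("outcome ~ treat * centered", data=df).fit()
print(rd.params["treat"])
```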

Standard Error

refers to the accuracy of a parameter estimate, which is determined by the width of the distribution of the parameter estimate. The estimate of how spread out our estimates are. The smaller it is, the closer we are likely to be to the actual beta. How spread out our estimates would be if we repeated the sample over and over again. Tells us how wide the estimation is, i.e., how accurately we are estimating our beta based on the samples we are collecting. The central limit theorem tells us that the value we get from our sample belongs to the distribution of possible estimates of the true value, that the distribution of estimates will be about three standard errors wide on either side of the true beta, and that the standard error we get from our sample is a pretty solid approximation of the standard error we would get from any sample. If you double the sample size, it decreases by a factor of one over the square root of two. How a narrower distribution of observations leads to a smaller one: this is a little tricky, depending on what you mean by a narrower distribution of observations. If you're talking about a smaller range in the x variable, that produces a smaller var(x) in the denominator of the standard error formula, which actually makes it larger, because the denominator is smaller. A narrower distribution of y values results in a smaller standard deviation of y (the numerator in the SE formula), and therefore a smaller one. The one we get from our sample is a good approximation of the one we would get from any sample (of the same size). We have some control over it because we know that we can increase our sample size. If it is 1, none of my estimates of the true beta are going to be within two or three of these of the right answer, but I also know that the estimates, if the true beta were zero, would be clustered around 0. It is about the distribution of samples. For a sample mean, the formula is sd divided by the square root of the sample size. We judge statistical significance by how many of these an estimate is away from zero.
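A quick Python check (numbers arbitrary) of the standard-error-of-the-mean formula sd / sqrt(n): the spread of repeatedly simulated sample means matches what the formula predicts:

```python
# Hedged illustration: empirical spread of sample means vs. sd / sqrt(n).
import numpy as np

rng = np.random.default_rng(3)
sd, n, reps = 10.0, 100, 5000
sample_means = rng.normal(50, sd, size=(reps, n)).mean(axis=1)
print(sample_means.std())      # empirical spread of the sample means
print(sd / np.sqrt(n))         # formula prediction; should be close to the line above
```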

Balance table

used before running the experiment to check for balance: it compares the treatment and control groups on observable covariables (like gender or race) to confirm that random assignment produced similar groups.

Covariable

something you want to check for balance on, like race or gender. Example: prior to random assignment, sort people by gender, then randomly assign half the men to participate and half the women to participate. The resulting treatment and control groups will have exactly the same relative shares of men and women. In blocking, treatment and control groups are picked so that they are balanced on these.

Robust

statistical results are this if they do not change when the model changes. Not very sensitive to the different assumptions we make. We always have to make assumptions, but if the assumptions don't change the answer, then the answer is this. We do not want answers that vary wildly depending on the assumptions; we want these instead.

Y

the dependent variable. In a causal sense, it depends on x: x causes y to happen.

Causality

statistics cannot determine it; it can only be determined through study design. To determine it, contrast what happened with a world where you didn't do it. Just because something is causal doesn't mean it causes all of the outcome, and it doesn't mean it is the only cause.

Random Variable

takes on values in a range, with probabilities defined by a distribution

Power

the ability of our data to reject the null. A high-powered statistical test will reject the null with a very high probability when the null is false; a low-powered statistical test will reject the null with a low probability when the null is false. It is particularly important to discuss this in the presentation of null results that fail to reject the null hypothesis. When determining it, we want as many of our estimated betas as possible to be at least two standard errors from zero; the greater the percentage of our estimates that would be at least two standard errors from zero, the greater it is. The ability to appropriately detect a true beta that is not equal to zero. Estimating it starts from a basic understanding of the distribution: 50% of the estimates of beta fall above the true beta, 50% fall below, and 2.5% of beta estimates are more than two standard errors away from the true beta in each direction. The probability of rejecting the null when beta does not equal 0. The ability we have to make the correct inference when the beta is not 0. Increases as you decrease the standard error. The chance that you'll get something statistically significant, given your standard errors. (See the simulation sketch below.)
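A hedged power simulation in Python (effect size, noise, and sample size are arbitrary choices): with a true beta that is not zero, power is the share of repeated samples whose estimate lands far enough from zero to reject the null at the 5% level:

```python
# Hedged sketch: estimate power by simulating many samples and counting rejections.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
true_beta, n, reps = 0.5, 200, 2000
rejections = 0
for _ in range(reps):
    x = rng.normal(0, 1, n)
    y = true_beta * x + rng.normal(0, 2, n)
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    if fit.pvalues[1] < 0.05:            # slope at least ~2 standard errors from zero
        rejections += 1
print(rejections / reps)                 # estimated power; rises with n or a bigger true beta
```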

Slope Coefficient

the coefficient on an independent variable. It reflects how much the dependent variable increases when the independent variable increases by one: how much y goes up as x increases by 1 unit.

Residual

the difference between the fitted value and the observed value. The estimated counterpart to the error. It is the part of yi not explained by B0 hat and B1 hat. If our coefficient estimates exactly equaled the true values, then the residuals would equal the errors; in reality, of course, our estimates B0 hat and B1 hat will not equal the true values B0 and B1, meaning our residuals will differ from the errors in the true model. It is an individual distance between an observed point and the line of best fit. The sum of squared residuals is the sum of the epsilon hats squared. The difference between the dot (the observed value) and the line (the fitted value).

Interaction term

the difference in differences: the difference over time between the group that received the treatment and the group that did not. What this means within a regression: in general terms, it allows us to see how much more one group differs from another group in some way. In the context of differences in differences, it is a measure of how much more the treatment group changed (pre to post) than the control group changed (pre to post). In the regression VoterTurnout = B0 + B1*Intervention + B2*Post + B3*Intervention*Post, the coefficient on the interaction is the additional amount the intervention counties changed in the post period compared with the amount the non-intervention counties changed in the post period. To see this, note that voter turnout is expected to go up by B2 in non-intervention counties and by B2 + B3 in the intervention counties. We are interested in whether B3 is something other than zero; if it is zero, that means the two groups changed by the same amount.
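A hedged difference-in-differences sketch in Python matching the regression above; counties, effect sizes, and noise are all invented for illustration:

```python
# Hedged sketch: turnout = b0 + b1*intervention + b2*post + b3*intervention*post
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 400
intervention = rng.integers(0, 2, n)   # 1 = county received the intervention
post = rng.integers(0, 2, n)           # 1 = observation from the post period
# Turnout rises by 2 points everywhere in the post period, plus 4 EXTRA points
# in intervention counties (the true difference-in-differences effect).
turnout = 50 + 1 * intervention + 2 * post + 4 * intervention * post + rng.normal(0, 3, n)

df = pd.DataFrame({"turnout": turnout, "intervention": intervention, "post": post})
did = smf.ols("turnout ~ intervention * post", data=df).fit()
print(did.params["intervention:post"])   # b3: how much MORE intervention counties changed
```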

Regression Line

the fitted line from a regression

B0

the intercept. Indicates where the regression line crosses the y-axis: it is the value of y when x is zero. B0 is the intercept that indicates the value of y when x = 0, and B1 is the slope that indicates how much change in y is expected if x increases by 1 unit. We almost always care a lot about B1, which characterizes the relationship between x and y; we usually don't care a whole lot about B0. It plays an important role in helping us get the line in the right place, but determining the value of y when x is zero is seldom our core research interest. We have to have it for the regression math to make sense.

Alpha

the likelihood of committing a type 1 error. It is unchanged by sample size

Central Limit Theorem

the mean of a sufficiently large number of independent draws from any distribution will be normally distributed. Implies that the B0 hat and B1 hat coefficients will be normally distributed random variables if the sample size is sufficiently large. No matter what the underlying distribution of the random data is, the sample means are going to be normally distributed, and we can predict how spread out the sample means will be based on the sample size. The bigger the sample, the closer the sample means will be to the true mean. It means we can collect just one sample and, because we know the shape of the sampling distribution, still predict how close we are likely to be to the answer; we don't have to have the whole population. Demonstrated by class activities where most of our data clustered around the center. It helps us because we can tell something is real when we know we are within a certain range; without it we are in the woods and don't know what's real. The value we get from our sample belongs to the distribution of possible estimates of the true value; the distribution of estimates will be about three standard errors wide on either side of the true beta; and the standard error we get from our sample is a pretty solid approximation of the standard error we would get from any sample (of the same size). Roughly 95% of estimates will fall within about two standard errors of the true value. It doesn't say anything about the distribution of the population. (See the simulation sketch below.)
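A small Python illustration (all numbers assumed) that means of repeated samples from a very non-normal distribution still pile up around the true mean, and more tightly as the sample size grows:

```python
# Hedged CLT sketch: sample means from an Exponential(1) distribution.
import numpy as np

rng = np.random.default_rng(6)
# The true mean of an Exponential(1) distribution is 1.0.
for n in (10, 100, 1000):
    sample_means = rng.exponential(scale=1.0, size=(5000, n)).mean(axis=1)
    print(n, sample_means.mean().round(3), sample_means.std().round(3))
    # The spread of the sample means shrinks roughly like 1 / sqrt(n).
```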

Constant

the B0 parameter in a regression model. It is the point at which the regression line crosses the y-axis. Also referred to as the intercept.

R-squared

the percentage of total variation in the dependent variable that the model explains

P-Value

the probability of observing a coefficient as large as we actually observed if the null hypothesis were true. The lower it is, the less consistent the estimated B1 is with the null hypothesis. We reject the null hypothesis if it is less than alpha. It can be useful to indicate the weight of evidence against a null hypothesis. It shows how likely we would be to get an estimate that many standard errors away from zero. If you are trying to prove a relationship exists, you want a lower one of these, which means it is less likely that the true relationship is zero. The lower it is, the more confident we can be that our estimated value does not belong to the distribution of estimates we would get if the true beta were zero, and therefore that there is a real relationship between x and y. It is 0.00 if there is essentially zero probability that we would have observed this value if the true beta were zero; if the t-stat is 20, the corresponding value will be 0.00, i.e., there is essentially zero probability we would have observed something this big if the true beta were really zero. It measures the percent chance you would estimate a beta that many standard errors away from 0 if the true beta were 0. If it is less than .05, we say our result is statistically significant, because the chance that we would find a coefficient this big if the true beta were 0 is less than 5%. It tells us where on the null-hypothesis distribution an estimate is falling: what are the chances I would get something this big if the true value is zero? It doesn't change based on the beta, because it has to do with the null hypothesis.
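A hedged Python illustration of where a two-sided p-value comes from, using a normal approximation (the t-statistics below are arbitrary examples, not from any study):

```python
# Hedged sketch: p = chance of an estimate this many standard errors from zero if beta were 0.
from scipy import stats

for t_stat in (1.0, 1.96, 3.0, 20.0):
    p = 2 * (1 - stats.norm.cdf(abs(t_stat)))   # two-sided, normal approximation
    print(t_stat, round(p, 4))
# A t-stat of about 2 gives p of about 0.05; a t-stat of 20 gives p of about 0.00.
```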

Randomization

the process of determining the experimental value of the key independent variables based on a random process

Distribution

the range of possible values for a random variable and the associated relative probabilities for each value

B1

the slope. Indicates how much change in y (the dependent variable) is expected if x (the independent variable) increases by 1 unit. We really care about this one. The coefficient. It is the m in the slope formula (y = mx + b). Represents the rate of change.

Blocking

the solution to balance. If we group people into blocks by observable characteristics before we randomly assign, then we can randomly assign within blocks to make sure we get appropriate representation. Example: prior to random assignment, sort people by gender; randomly assign half the men to participate and randomly assign half the women to participate. The resulting treatment and control groups will have exactly the same relative shares of men and women. Picking treatment and control groups so that they are balanced on covariables. Has to do with random assignment: the idea is that you want to make sure you don't get unlucky and end up with unbalanced treatment and control groups just by chance. For example, if you just randomly assign people to a treatment group, you might, just by chance, end up with more men in the treatment group than in the control group. To prevent this, you can block ahead of time by putting all the men in one list and randomly assigning half of them to the treatment group and the other half to the control group; you then do the same with women. The resulting treatment and control groups would be exactly balanced on gender: the treatment group would have exactly the same gender composition as the control group, guaranteed. (See the sketch below.)
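A hedged Python sketch of blocking on gender before random assignment (the roster is invented): within each gender block, exactly half go to treatment, so treatment and control end up with identical gender composition by construction:

```python
# Hedged sketch: block on gender, then randomize within each block.
import pandas as pd

df = pd.DataFrame({"person": range(100),
                   "gender": ["man"] * 60 + ["woman"] * 40})

def assign_within_block(block):
    # Shuffle the block, then send the first half to treatment and the rest to control.
    shuffled = block.sample(frac=1, random_state=7)
    block = block.copy()
    block["treated"] = 0
    block.loc[shuffled.index[: len(block) // 2], "treated"] = 1
    return block

df = df.groupby("gender", group_keys=False).apply(assign_within_block)
print(pd.crosstab(df["gender"], df["treated"]))   # 30/30 men and 20/20 women in each group
```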

Alpha Level

the tipping point at which we say that the probability of our estimate being part of the zero distribution is sufficiently small that we think it probably belongs to a different distribution. The point at which we think we have enough evidence to say it's not zero. The standard in research is to set it at .05, which means that when there's a 5% chance or less that the estimate belongs to the zero distribution, we assume it doesn't actually belong to the zero distribution. If we set it at .1, that would mean that a 10% chance or less of belonging to the zero distribution is enough to convince us the estimate is probably not zero.

Treatment of the treated

TOT. The effect of the treatment on those who actually took the treatment, rather than merely being offered it. Calculated by dividing the ITT by the compliance rate.
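A hedged arithmetic example of TOT = ITT / compliance rate (the numbers are made up):

```python
# Hedged example: scaling the intent-to-treat effect by the compliance rate.
itt = 2.0                # those OFFERED the program scored 2 points higher than control on average
compliance_rate = 0.5    # only half of those offered actually took the program
tot = itt / compliance_rate
print(tot)               # 4.0 points: the implied effect on those who actually took the treatment
```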

2sls

two-stage least squares. Uses exogenous variation in x to estimate the effect of x on y (instrumental variables). In the first stage, the endogenous independent variable is the dependent variable and the instrument is an independent variable. In the second stage, the fitted value from the first stage is an independent variable. A good instrument satisfies two conditions: it must be a statistically significant determinant of x (in other words, it needs to be included in the first stage of this estimation process), and it must be uncorrelated with the error in the main equation, which means it must not directly influence y (in other words, it must be excluded from the second stage of this estimation process; this condition cannot be directly assessed statistically). Useful for analyzing experiments when there is imperfect compliance with the experimental assignment. Assignment to treatment typically satisfies the inclusion and exclusion conditions needed for instruments here. In the first stage, we see how our instrument predicts the x variable we care about; in the second stage, we use predicted values of x based on the first-stage regression to see how this predicted x is related to the y variable.
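A hedged, by-hand Python sketch of the two stages described above, using invented data where random assignment (the instrument) only partially determines who takes the treatment. In practice a dedicated IV routine would be used so the standard errors are correct; this just mirrors the two-stage logic:

```python
# Hedged sketch: 2SLS done manually with two OLS regressions (made-up data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 2000
assigned = rng.integers(0, 2, n)                        # instrument: random assignment
ability = rng.normal(0, 1, n)                           # unobserved confounder
took = ((0.8 * assigned + 0.3 * ability + rng.normal(0, 1, n)) > 0.5).astype(int)
y = 3 * took + 2 * ability + rng.normal(0, 1, n)        # true effect of taking treatment is 3

df = pd.DataFrame({"y": y, "took": took, "assigned": assigned})
first = smf.ols("took ~ assigned", data=df).fit()       # stage 1: instrument predicts x
df["took_hat"] = first.fittedvalues
second = smf.ols("y ~ took_hat", data=df).fit()         # stage 2: y on predicted x
print(second.params["took_hat"])                        # close to 3
print(smf.ols("y ~ took", data=df).fit().params["took"])  # naive OLS is biased by ability
```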

Exogenous Variation

what we are looking for: variation that comes from outside the system.

Reference category

when a model includes dummy variables indicating the multiple categories of a nominal variable, we need to exclude the dummy variable for one of the groups. The coefficients on all the included dummy variables indicate how much higher or lower the dependent variable is for each group relative to this one, which is also referred to as the excluded category. It is the reference point, and all the coefficients indicate how much higher or lower each group is than the excluded category. Coefficients differ depending on which excluded category is used, but when interpreted appropriately, the fitted values for each category do not change across specifications.
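A hedged Python sketch (invented data) showing that switching the excluded category changes the coefficients but not the fitted values for each group:

```python
# Hedged sketch: same model, two different reference categories.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
groups = rng.choice(["A", "B", "C"], size=600)
means = {"A": 10.0, "B": 12.0, "C": 15.0}
y = np.array([means[g] for g in groups]) + rng.normal(0, 1, 600)
df = pd.DataFrame({"y": y, "group": groups})

fit_a = smf.ols("y ~ C(group)", data=df).fit()                               # "A" is the reference
fit_c = smf.ols("y ~ C(group, Treatment(reference='C'))", data=df).fit()     # "C" is the reference

print(fit_a.params)
print(fit_c.params)
# Coefficients differ, but the fitted mean for each group is the same either way:
print(df.assign(fa=fit_a.fittedvalues, fc=fit_c.fittedvalues)
        .groupby("group")[["fa", "fc"]].mean())
```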

Fitted value

y hat, the value of y predicted by our estimated equation. Also called the predicted value. For example, an estimate of how much time you can spend on two classes based on the slope. In Turnout = B0 + B1*Intervention + B2*Post + B3*Intervention*Post, the fitted value for treatment groups in the before period would be B0 + B1; that is, expected voter turnout for counties that received the intervention, measured before the intervention began, is B0 + B1.

