Econometrics Minitest 1

4| What does it mean if two models are nested?

One model (the restricted one) is a special case of the other, obtained by imposing restrictions on its parameters.

3| Chow Test

Type of F-test used to test whether the regression differs significantly between two groups. We estimate the model separately for each group (together these form the unrestricted model) and the pooled model (the restricted model), then run a Chow test (like an F-test, but there is no need to estimate SSR(ur) directly, since it equals SSR(1) + SSR(2)).
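
A minimal computational sketch in Python (statsmodels), using simulated data; all variable names and numbers are illustrative, not from the card:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Illustrative data: one regressor, two groups (names hypothetical)
rng = np.random.default_rng(0)
n1, n2 = 120, 150
x1, x2 = rng.normal(size=n1), rng.normal(size=n2)
y1 = 1.0 + 0.5 * x1 + rng.normal(size=n1)
y2 = 2.0 + 0.9 * x2 + rng.normal(size=n2)

# Unrestricted: fit each group separately; SSR(ur) = SSR(1) + SSR(2)
ssr1 = sm.OLS(y1, sm.add_constant(x1)).fit().ssr
ssr2 = sm.OLS(y2, sm.add_constant(x2)).fit().ssr
ssr_ur = ssr1 + ssr2

# Restricted: pool both groups into one regression
x, y = np.concatenate([x1, x2]), np.concatenate([y1, y2])
ssr_r = sm.OLS(y, sm.add_constant(x)).fit().ssr

k = 2                                    # parameters per group (intercept + slope)
n = n1 + n2
F = ((ssr_r - ssr_ur) / k) / (ssr_ur / (n - 2 * k))
print(F, stats.f.sf(F, k, n - 2 * k))    # Chow statistic and p-value
```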

4| Alternative form of the White test

The squared residuals are regressed on ŷ and ŷ², and the corresponding R-squared is used to form an F or LM statistic. ŷ is a function of all the x's; thus ŷ² will be a function of their squares and cross products, so ŷ and ŷ² can proxy for all of the xj, xj², and xj·xh.
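
A minimal sketch of this test in Python (statsmodels), assuming simulated data; names and numbers are illustrative:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Illustrative data (hypothetical names); error variance depends on x1
rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(rng.normal(size=(n, 3)))
beta = np.array([1.0, 0.5, -0.3, 0.2])
y = X @ beta + rng.normal(size=n) * (1 + 0.5 * np.abs(X[:, 1]))

ols = sm.OLS(y, X).fit()
u2 = ols.resid ** 2           # squared OLS residuals
yhat = ols.fittedvalues

# Special form of the White test: regress u^2 on yhat and yhat^2
Z = sm.add_constant(np.column_stack([yhat, yhat ** 2]))
aux = sm.OLS(u2, Z).fit()

lm = n * aux.rsquared         # LM = n * R^2, chi-squared with 2 df under H0
print(lm, stats.chi2.sf(lm, df=2))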

4| Weighted least squares (WLS)

Used to obtain more efficient estimates than OLS. Basic idea: Transform the model/data into one that has homoskedastic errors. Prerequisite: Knowledge about the specific form of heteroskedasticity.
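
A minimal WLS sketch in Python (statsmodels), assuming the variance form is known, e.g. Var(u|x) = σ²·x1; data and names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative example: suppose Var(u|x) = sigma^2 * x1, so h(x) = x1
rng = np.random.default_rng(1)
n = 400
x1 = rng.uniform(1, 10, n)
X = sm.add_constant(x1)
y = 2 + 0.5 * x1 + rng.normal(size=n) * np.sqrt(x1)   # heteroskedastic errors

# WLS weights each observation by 1/h(x); statsmodels expects
# weights proportional to the inverse of the error variance
wls = sm.WLS(y, X, weights=1.0 / x1).fit()
print(wls.params, wls.bse)
```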

2| F-test

Used to test the joint null hypothesis that the effect of a group of variables is zero ("exclusion restrictions"). It measures the relative increase of SSR when moving from the unrestricted to the restricted model. If it is big enough (above the critical value), the null hypothesis that the joint effect of the excluded variables is equal to 0 can be rejected; thus the unrestricted model is better than the restricted model. If only one exclusion is tested, F = t².
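
In symbols (standard notation, with q exclusion restrictions, n observations and k regressors in the unrestricted model):

F = [(SSR(r) − SSR(ur)) / q] / [SSR(ur) / (n − k − 1)]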

5| Degrees of freedom in a fixed effects model

df = N (T - 1) - k, because degrees of freedom were used up when calculating the means (think of the a(i) as estimated parameters)

3| If the scale of y is changed from cents to dollars, the coefficients must be...

...divided by 100.

3| If the scale of x is changed from cents to dollars, the coefficient must be...

...multiplied by 100.

2| For a single restriction the F-statistic is equal to...

...the squared t-statistic.

3| What are potential problems of program evaluation?

(1) Inability to control for all relevant observable characteristics except program participation (2) Self-selection into the program

3| What are statistical reasons to use log models?

(1) For models with y > 0, the conditional distribution is often heteroskedastic or skewed, while ln(y) is much less so. (2) The distribution of ln(y) is more narrow, limiting the effect of outliers.

1| Solutions to Omitted Variable bias

(1) Include the omitted variable (2) Assess the direction and magnitude of the bias and take it into account (3) Use a proxy variable.

3| What are practical reasons to use log models?

(1) Log models are invariant to the scale of the variables, since they measure percentage changes. (2) They give a direct estimate of elasticity (in the log-log case).

3| What are potential problems of linear probability models?

(1) The prediction can be outside of the [0,1] interval (2) we may estimate effects that imply a change in x changes the probability by more than +1 or -1 (3) they are heteroskedastic by definition.

4| How can you check if you used the right functional form?

(1) Use economic theory for guidance (2) Ramsey's RESET (REgression Specification Error Test) (3) Nonnested Alternative Tests

5| Advantages of fixed effects estimation

1) Allows for arbitrary correlation between the a(i) and the x(it) (e.g. ability and education) 2) Interpretation is intuitive: everything is interpreted as changes over time (changes relative to the individual average) 3) Very flexible: FEs can be added at different levels (individual, industry, country, time), so group-constant or time-constant fixed effects can be controlled for.

What conditions must an Instrumental Variable fulfill?

1) Exogeneity: Cov(z, u) = 0; the instrument must not be correlated with the error term. 2) Relevance: Cov(z, x) ≠ 0; the instrument must be correlated with the endogenous variable x.
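
A minimal two-stage least squares sketch in Python (statsmodels) on simulated data, to make the two conditions concrete; all names are hypothetical. Note the manual second stage gives the right point estimate but not the correct IV standard errors; a dedicated IV routine (e.g. linearmodels.IV2SLS) should be used for inference:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative: one endogenous x, one instrument z (hypothetical names)
rng = np.random.default_rng(3)
n = 1000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # x endogenous: correlated with u
y = 1.0 + 2.0 * x + u

# Stage 1: regress x on z (the t-statistic on z checks relevance)
stage1 = sm.OLS(x, sm.add_constant(z)).fit()
x_hat = stage1.fittedvalues

# Stage 2: regress y on the fitted x from stage 1
stage2 = sm.OLS(y, sm.add_constant(x_hat)).fit()
print(stage2.params)   # slope should be close to the true value 2.0
```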

6| How can you deal with OVB (omitted variable bias)

1) Ignore it: can be acceptable if we know the direction of the bias and we underestimate the effect we are interested in. 2) Find a suitable proxy: replace the unobservable variable with a related, observable variable (e.g. proxy for ability using IQ); it is not always possible to find a satisfactory proxy. 3) Assume the OVB is time-constant and use FE models: needs panel data (note: the effect will be identified only through people experiencing changes, e.g. in their education level) and controls for all unobserved, time-constant heterogeneity. 4) Use IV estimation.

5| Disadvantages of differencing

1. Reduces Var(x) and Var(y): differencing reduces the variation in the variables. 2. Exacerbates measurement error: if the error is absolute, it matters more when the variables are closer to zero. 3. T − 1 periods: decreases the number of usable observations per individual by one.

3| Adjusted R-Squared

A goodness-of-fit measure in multiple regression analysis that penalizes additional explanatory variables by using a degrees-of-freedom adjustment. In contrast to the regular R-squared, it may decrease when new variables are added. If the dependent variable is the same in two models, they can be compared using the adjusted R-squared.

0| Covariance

A measure of linear dependence between two random variables that depends on the unit of measurement. It can take any value from -∞ to +∞.

0| Correlation

A measure of linear dependence between two random variables that does not depend on units of measurement. It is bounded between -1 and 1.

0| Variance

A measure of the spread in the distribution of a random variable.

0| Statistical Significance

A result has statistical significance when it is very unlikely to have occurred given the null hypothesis.

2| Confidence Interval

A rule used to construct a random interval so that a certain percentage of all data sets, determined by the confidence level, yields an interval that contains the population value. E.g., with a 95% confidence level, 95% of intervals constructed this way contain the true value.

4| Breusch-Pagan Test

A test for heteroskedasticity where the squared OLS residuals are regressed on the explanatory variables in the model. The resulting R-squared (how much of the variation of the residuals can be explained by the explanatory variables) is used to form an F or LM statistic. If the statistic is sufficiently large, the null hypothesis (homoskedasticity) is rejected. The Breusch-Pagan test will only detect linear forms of heteroskedasticity.
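
A minimal sketch using the ready-made statsmodels helper, on simulated data with hypothetical names:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Illustrative data with error variance increasing in x1
rng = np.random.default_rng(4)
n = 500
X = sm.add_constant(rng.uniform(1, 10, size=(n, 2)))
y = X @ np.array([1.0, 0.4, -0.2]) + rng.normal(size=n) * X[:, 1]

res = sm.OLS(y, X).fit()
# Regress squared residuals on the explanatory variables (done internally)
lm, lm_pvalue, fvalue, f_pvalue = het_breuschpagan(res.resid, res.model.exog)
print(lm, lm_pvalue)   # small p-value -> reject homoskedasticity
```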

1| Classical view of Econometrics (aim & focus)

Aim: a 'true' model that describes the economic process in general; Focus: Theory, statistics, math

1| Modern view of Econometrics (aim & focus)

Aim: determine specific causal relationship or treatment effects; Focus: Empirical applications

3| Dummy Variable

Also called binary variable. Takes on the value of 0 or 1. Can be interpreted as an intercept shift.

2| Asymptotic Omitted Variable Bias

Also called inconsistency. OVB remains a problem in the asymptotic case, meaning that the problem does not go away by simply adding more data. Solutions: take account of the bias, use a proxy variable, or get data on the omitted variable and include it in the model.

1| Biased Estimator

An estimator whose expectation, or sampling mean, is different from the population value it is supposed to be estimating.

4| Proxy variable

An important but unmeasurable variable can be replaced by a proxy variable to avoid OVB (also called the plug-in solution to the omitted variables problem). A proxy variable captures the effect of the latent variable of interest, which is not available (therefore the proxy and the latent variable are correlated). For the model to be unbiased, (1) the expected value of u, given the proxy, the latent variable and the other variables, must be zero, and (2) once the proxy variable is controlled for, the expected value of the latent variable must not depend on the other variables. Even if biased, this bias may still be smaller than the omitted variable bias.

3| Interaction Term

An independent variable in a regression model that is the product of two explanatory variables.

4| What is the consequence of a measurement error of the dependent variable?

As long as the error is uncorrelated with the independent variables, the betas are unbiased, but their variances are larger. (Example of a biased model: the effect of GDP on some outcome, where the measurement error is correlated with GDP because lower-GDP countries have worse data.) β0 is biased if the expectation of the measurement error is nonzero.

1| What does BLUE stand for and what do its components mean?

Best: lowest variance; Linear: MLR.1; Unbiased: the probability distribution of β̂ is centred on the true value (Gauss-Markov theorem); Estimator.

4| Possible tests for heteroskedasticity

Breusch-Pagan test, White test, alternative form of the White test

5| How are coefficients interpreted after applying fixed effect estimation?

Changes in x affect changes in y by β. The R-squared only includes the explanatory power of the regressors (within R-squared).

5| Possible disadvantages of fixed effects estimation

Coefficients can only be estimated for variables that change over time. Thus, the effects of time-constant variables (e.g. gender, industry, country) cannot be estimated using FE estimation, but will be subsumed in the a(i) term: there is perfect collinearity between a constant dummy (e.g. gender) and the time-invariant individual term. This can be a limitation, but also an advantage: all time-constant unobserved heterogeneity is 'automatically' taken care of by FE estimation. For example, individual ability is controlled for.

0| Cross-Sectional Data Set

Data collected by observing many subjects at one point in time.

3| How can you test whether a regression function is different for two groups?

F-test: we can test the joint significance of the dummy and its interactions with all other x variables, thereby comparing the model with and without all the interactions, and form an F statistic. OR Chow test: we estimate the model separately for each group and run a Chow test (like an F-test, but there is no need to estimate SSR(ur) directly, since it equals SSR(1) + SSR(2)).

0| Statistical Insignificance

Failure to reject the null hypothesis, at the chosen significance level.

3| For a model of the form Y = β0 + β1X + β2X^2 + u , what does it mean if the first coefficient is positive and the second coefficient is negative?

At first the linear term dominates and Y is increasing in X, but as X gets bigger the quadratic term eventually dominates and Y decreases in X.

4| Generalized Least Squares (GLS)

GLS is a weighted least squares (WLS) procedure where each squared residual is weighted by the inverse of Var(ui|xi). In the case of heteroskedasticity GLS is BLUE and more efficient than OLS.

1| R-Squared

Gives some information about the goodness of fit of a model, namely the proportion of the total sample variation in the dependent variable that is explained by the independent variables. It is monotonically non-decreasing in the number of explanatory variables.

2| Testing Linear Combinations

H0: β1 = β2. An estimator for Cov (βˆ1, βˆ2) is needed.

4| Why should one worry about heteroskedasticity?

Heteroskedasticity means: (1) the estimators of the coefficient variances Var(β̂) are biased; (2) since OLS standard errors are based on these estimators, the standard errors are biased and we cannot use the usual t, F or LM statistics for drawing inferences; (3) OLS is not BLUE.

1| MLR Ass.5

Homoskedasticity: The variance of the error is the same for every x.

4| When is missing data a problem?

If data are missing non-randomly, estimates will be biased (e.g. high-income individuals refuse to provide income data).

4| Nonnested Alternative Tests

If the models have the same dependent variable but nonnested x's, one could just make a comprehensive model with both sets of x's and test the joint exclusion restrictions that lead to one model or the other. Alternatively, the Davidson-MacKinnon test: the fitted values ŷ1 of the 1st model are included as a regressor (explanatory variable) in the 2nd model. If the coefficient on ŷ1 is significant, this is evidence against the 2nd model in favour of the 1st. The same is done the other way around (ŷ2 of the 2nd model added as a regressor in the 1st model). Prerequisite: the models must have the same Y variable. Drawback: the Davidson-MacKinnon test may reject neither or both models rather than clearly preferring one specification.
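
A minimal Davidson-MacKinnon sketch in Python (statsmodels) on simulated data; the two competing specifications and all names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative: model 1 uses x in levels, model 2 uses log(x); same y
rng = np.random.default_rng(5)
n = 400
x = rng.uniform(1, 10, n)
y = 2 + 3 * np.log(x) + rng.normal(size=n)   # true model is the log one

m1 = sm.OLS(y, sm.add_constant(x)).fit()           # model 1: y on x
m2 = sm.OLS(y, sm.add_constant(np.log(x))).fit()   # model 2: y on log(x)

# Add model 1's fitted values as a regressor in model 2 and t-test them
X_aug = sm.add_constant(np.column_stack([np.log(x), m1.fittedvalues]))
aug2 = sm.OLS(y, X_aug).fit()
print(aug2.tvalues[-1], aug2.pvalues[-1])  # significant -> evidence against model 2
```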

1| When does the interpretation of the intercept make sense?

If the sample contains values of X around the origin.

4| Lagged dependent variable

If unobserved explanatory variables are time-independent (that is, they have not changed), we could use the lagged value of the outcome (y(t-1)) as a regressor to account for the unobserved influences (i.e. keeping them fixed).

5| Autocorrelation of the error term

If unobserved fixed effects are present but neglected, the errors of the model will exhibit autocorrelation, because the composite error ν(it) = a(i) + u(it) contains the constant component a(i).

1| Degrees of Freedom

In multiple regression analysis, the number of observations minus the number of estimated parameters.

3| Linear Probability model

In a linear probability model y is a binary variable and the predicted y is the predicted probability of success.

2| Restricted Model

In hypothesis testing, the model obtained after imposing all of the restrictions required under the null.

2| Unrestricted Model

In hypothesis testing, the model that has no restrictions placed on its parameters.

0| Critical Value

In hypothesis testing, the value against which a test statistic is compared to determine whether or not the null hypothesis is rejected.

5| When do we need to include fixed effects?

In most cases since individual effects are present in most panel applications.

0| Independent variable

In regression analysis, a variable that is used to explain variation in the dependent variable. Also called explanatory variable, regressor, control variable and predictor variable.

3| Residual analysis

The comparison of observed vs. predicted values (the residuals). A positive residual means the observed Y is above its predicted value.

3| Interpret β1: Y = β0 + β1X1 + u

Level-Level-Model: The unit change in Y if X1 is changed by one unit.

3| Interpret β1: Y = β0 + β1 ln(X1) + u

Level-Log-Model: The approximate absolute change in Y if X1 is changed by 100% (i.e. doubled). β1 could also be divided by 100 to get the change of Y if X is changed by 1%.

1| MLR Ass.1

Linear in parameters β, the x's can enter non-linearly.

3| Interpret β1: ln(Y) = β0 + β1X1 + u

Log-Level-Model: Semi-Elasticity. The approximate percentage change (β1×100) in Y if X1 is changed by one unit. (This approximate change lies between the exact percentage changes for an increase and a decrease)

3| Interpret β1: ln(Y) = β0 + β1 ln(X1) + u

Log-Log-Model: The percentage change in Y if X1 is changed by 1% (Elasticity of Y wrt X).

3| What functional forms exist?

Logarithmic, quadratic and interaction forms.

5| Fixed effects vs. random effects

Main difference lies in the RE assumption Cov(x(it), a(i)) = 0.
• FE allows for arbitrary correlation of the a(i) and the x's, while RE does not. Thus, in general, FE is more robust than RE, because we make no assumptions on the functional relation of individual effects and regressors. We often have reason to believe that something unobserved is correlated with the x's.
• If the RE assumption holds, RE is more efficient than OLS or FE.
• If the variable of interest is time-varying, FE is usually the way to go. In policy analysis, FE is the more convincing option unless there is an explicit policy experiment (e.g. kids being randomly assigned to classes of different size).
• If the variable of interest is time-constant, FE is infeasible.
• RE requires the probability of data missing to be uncorrelated with the error term (i.e. random exit), while for FE, sample attrition can be non-random (since it allows the a(i) to correlate with other regressors).
Use FE unless you can't!

1| What is the effect of adding an irrelevant variable to a model on unbiasedness and variance?

No effect on unbiasedness but likely to increase variance of the OLS estimators because of multicollinearity.

1| MLR Ass.3

No perfect collinearity: no independent variable is constant (there is sample variation) and there exists no exact linear relationship among the independent variables.

2| MLR Ass.6

Normal Distribution of the Error

4| Sample selection bias

Occurs if the sample is chosen on the basis of the y variable, which induces a bias. If the sample is chosen on the basis of an x variable, estimates are unbiased (but the target population has to be redefined).

4| How can you deal with unobserved variables?

Proxy variables or lagged dependent variables.

4| Ramsey's RESET

REgression Specification Error Test, used to test whether the right functional form is used. It relies on a trick similar to the alternative form of the White test (testing functions of ŷ). Rejection of H0 is evidence of functional form misspecification, but (of course) doesn't tell us what the correct form is.
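
A minimal hand-rolled RESET sketch in Python (statsmodels) on simulated data; all names are hypothetical. (Recent statsmodels versions also ship a ready-made linear_reset in statsmodels.stats.diagnostic.)

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Illustrative: true relation is quadratic, fitted model is linear only
rng = np.random.default_rng(6)
n = 300
x = rng.uniform(0, 5, n)
y = 1 + 0.8 * x + 0.3 * x**2 + rng.normal(size=n)

X = sm.add_constant(x)                       # deliberately misspecified
res = sm.OLS(y, X).fit()

# Re-estimate with powers of the fitted values added, then F-test them
yhat = res.fittedvalues
X_aug = np.column_stack([X, yhat**2, yhat**3])
res_aug = sm.OLS(y, X_aug).fit()

q, df_ur = 2, n - X_aug.shape[1]
F = ((res.ssr - res_aug.ssr) / q) / (res_aug.ssr / df_ur)
print(F, stats.f.sf(F, q, df_ur))            # small p-value -> misspecification
```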

1| MLR Ass.2

Random sampling: meaning the data represents the population of interest.

4| Outliers

Reasons: (1) data entry errors (can be fixed, if known); (2) real outliers, which can be dropped, although readers may prefer to see estimates with and without the outliers.

1| Multicollinearity

Refers to correlation among the independent variables in a multiple regression model.

1| SSR

Residual sum of squares, sample variation of u(hat), "unexplained variation"

5| First differences vs. fixed effects

Identical if T = 2. If T > 2, both are consistent and unbiased, but: 1) FE is more efficient if the u(it) are serially uncorrelated 2) FE can be easily implemented for unbalanced panels 3) FE might have problems if N is small and T is large.

4| Robust Standard Errors

Short for heteroskedasticity-robust standard errors, which are used for inference with t statistics. Can only be used in the asymptotic case and not for small samples.
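
A minimal sketch in Python (statsmodels) comparing usual and robust standard errors on simulated heteroskedastic data; names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data with error variance growing in x
rng = np.random.default_rng(7)
n = 500
x = rng.uniform(1, 10, n)
y = 1 + 0.5 * x + rng.normal(size=n) * x      # heteroskedastic errors

X = sm.add_constant(x)
usual = sm.OLS(y, X).fit()
robust = sm.OLS(y, X).fit(cov_type="HC1")     # heteroskedasticity-robust covariance

print(usual.bse)    # usual standard errors (biased under heteroskedasticity)
print(robust.bse)   # robust standard errors
```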

1| SSE

Sum of squares explained by a model, sample variation in y(hat), "explained variation"

2| Exclusion restriction

Testing "exclusion restrictions" involves the question if the inclusion of variables significantly improves a model. Can be tested using the F-test.

5| Test for unobserved fixed effects

Unobserved fixed effects can be detected with a simple test for AR(1) autocorrelation: H0: Var(a(i)) = 0. 1. Obtain the residuals from a pooled OLS regression of y(it) on x(it). 2. Regress y(it) on x(it) and û(i,t−1). 3. Do a t-test on the significance of û(i,t−1). If the t-test is significant, we have autocorrelation of the residuals and thus unobserved fixed effects.

2| Joint Hypothesis Test

Testing multiple hypotheses at a time, also called testing "exclusion restrictions". Variables can be jointly significant even if all single parameters are insignificant. An F-test is needed, comparing the unrestricted with the restricted model. (See also "exclusion restriction" and "F-test".)

2| Special case of exclusion restrictions

Testing the overall significance of parameters involves a restricted model containing only the intercept (the R-squared of which will be 0).

2| LM-test

The Lagrange-Multiplier test, also called the Score test, is an alternative to the F-test for testing exclusion restrictions. Estimate only the restricted model and regress the obtained residuals on all variables to obtain an R-squared; LM = n·R², which has a chi-squared distribution. Main advantage: it only requires estimating the restricted model. The question it asks: how much of the variation of the residuals of the restricted model can be explained by all the variables? High -> include the variables; low -> don't include them. Unlike the F-test and the squared t-test for one exclusion (which are identical), the LM and F statistics will not be identical, but with a large sample the results from an F test and an LM test should be similar.
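
A minimal LM-test sketch in Python (statsmodels) on simulated data; the null H0: β2 = β3 = 0 and all names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Illustrative data: const, x1, x2, x3 (hypothetical names)
rng = np.random.default_rng(8)
n = 400
X_full = sm.add_constant(rng.normal(size=(n, 3)))
y = X_full @ np.array([1.0, 0.5, 0.3, -0.2]) + rng.normal(size=n)

# 1. Estimate only the restricted model (drop x2, x3)
res_r = sm.OLS(y, X_full[:, :2]).fit()

# 2. Regress its residuals on ALL variables and take the R-squared
aux = sm.OLS(res_r.resid, X_full).fit()

# 3. LM = n * R^2, chi-squared with q = 2 degrees of freedom under H0
lm = n * aux.rsquared
print(lm, stats.chi2.sf(lm, df=2))
```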

4| White test

The White test allows for nonlinear forms of heteroskedasticity by using the squares and cross products of all the x's (in addition to all the x's). F or LM tests are used to test whether all the xj, xj², and xj·xh are jointly significant: the squared OLS residuals are regressed on the explanatory variables, their squares, and all their interactions. Problems if k is large: (1) running out of degrees of freedom; (2) more variables mean that a (random) significant result is more likely, since we are testing for joint significance. Thus the White test gets unwieldy pretty quickly.

1| Omitted Variable Bias

The bias that arises in the OLS estimators when a relevant variable is omitted from the regression. A relevant variable determines Y and is correlated with one or more of the included explanatory variables.

3| For a model of the form Y = β0 + β1X + β2X^2 + u , what is the change of Y wrt X?

The change in Y is equal to the partial derivative with respect to X, which is β1 + 2β2X.

1| Upward/positive Bias

The expected value of an estimator is greater than the population parameter value.

0| Residual

The difference between the actual value and the fitted (or predicted) value; there is a residual for each observation in the sample used to obtain an OLS regression line.

0| Partial Effect

The effect of an explanatory variable on the dependent variable, holding other factors in the regression model fixed.

1| Homoskedasticity

The errors in a regression model have constant variance conditional on the explanatory variables

0| Fitted Values

The estimated values ŷ of the dependent variable.

1| Mean Squared Error

The expected squared distance that an estimator is from the population value. It equals the variance plus the square of any bias.

1| Downward/negative Bias

The expected value of an estimator is below the population value of the parameter.

4| What is the consequence of a measurement error of an independent variable?

The effect of measurement error on OLS estimates depends on our assumption about the correlation between e1 (the measurement error, e1 = x1 − x1*) and x1. If Cov(x1, e1) = 0, that is, the measured values are uncorrelated with the measurement error, OLS remains unbiased, though the variances are larger. If Cov(x1, e1) ≠ 0 but Cov(x1*, e1) = 0 (classical errors-in-variables: no correlation between the true value and the measurement error, but correlation between the measured values and the measurement error), this causes an attenuation bias: the coefficients are always weaker (lower in magnitude) than they really are. [x1: measured value containing error] [x1*: true value]

3| If we add an interaction term of a dummy and a continuous variable what changes if the dummy variable changes?

The intercept and the slope.

2| Intercept Shift

The intercept in a regression model differs by group or time period.

3| For this model, Y = β0 + β1X1 + β2X2 + β3X1X2 + u , what is the change of Y wrt X1?

The marginal effect of X1 is β1 + β3X2; it depends on the level of X2 (X2 is typically evaluated at its sample average).

2| Numerator Degrees of Freedom (F-Test)

The number of restrictions being tested in an F-Test.

2| Central Limit Theorem

The standardized sample mean of ANY population is asymptotically normally distributed, N(0,1). Based on this theorem it can be shown that the OLS estimators are asymptotically normal, therefore normality (MLR.6) does not need to be assumed with a large sample.

2| Confidence Level

The percentage of samples in which we want our confidence interval to contain the population value. 95% is the most commonly used.

2| p-value

The probability of obtaining a result (t-statistic) at least as extreme as the one observed, given that the null hypothesis is true. Equivalently, the smallest significance level at which the null hypothesis can be rejected.

0| Error Term

The variable in a simple or multiple regression that contains unobserved factors that affect the dependent variable.

0| Dependent Variable

The variable to be explained in a multiple regression model.

1| Heteroskedasticity

The variance of the error term, conditional on the explanatory variables, is not constant

3| Why are functional forms used?

To implement nonlinear functions of X and Y. (Model only needs to be linear in parameters but not in the X's)

1| SST

Total sum of squares, the total sample variation of y, "total variation"

4| Feasible GLS

Typically we do not know the form of the heteroskedasticity, so we need to estimate it: (1) square and log the residuals from the original OLS; (2) regress them on all of the independent variables and get the fitted values ĝ; (3) do WLS using 1/exp(ĝ) as the weight.
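
A minimal sketch of these three steps in Python (statsmodels) on simulated data; all names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data with heteroskedasticity of unknown form
rng = np.random.default_rng(9)
n = 500
x = rng.uniform(1, 10, n)
X = sm.add_constant(x)
y = 1 + 0.5 * x + rng.normal(size=n) * x

# (1) Square and log the OLS residuals
ols = sm.OLS(y, X).fit()
log_u2 = np.log(ols.resid ** 2)

# (2) Regress log(u^2) on all regressors; keep the fitted values g_hat
g_hat = sm.OLS(log_u2, X).fit().fittedvalues

# (3) WLS using 1/exp(g_hat) as the weight
fgls = sm.WLS(y, X, weights=1.0 / np.exp(g_hat)).fit()
print(fgls.params, fgls.bse)
```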

1| Theorem 1: Unbiasedness of OLS

Under MLR.1 through MLR.4 the OLS estimators are unbiased estimators of the population parameters, meaning the probability distribution of β^ is centered on the true value.

1| Theorem 2: Variance of the OLS estimator

Under MLR.1 through MLR.5, Var(β̂j) = σ² / [SSTj (1 − Rj²)]. Thus the variance of β̂j depends positively on the variance of the error term and negatively on SSTj; SSTj increases if the sample size or the variance of the explanatory variable increases. Rj² is obtained from regressing xj on all the other explanatory variables (their linear relationship). Thus, the stronger the linear relationship between the independent variables, the higher the variance of β̂j.

1| Theorem 3: Unbiased Estimation of σ2

Under MLR.1 through MLR.5 the estimation of σ2 (variance of the error term) is unbiased.

1| Theorem 4: Gauss-Markov-Theorem

Under MLR.1 through MLR.5 the OLS estimators are BLUE (Best Linear Unbiased Estimators) and consistent.

2| Assumptions of the classic linear model

Under MLR.1 through MLR.6 OLS is not only BLUE, but also the minimum variance unbiased estimator (meaning that if the errors are normally distributed the linear OLS estimator is better than any other linear or non-linear estimator).

2| Asymptotic variance

Under the Gauss-Markov assumptions OLS has the lowest asymptotic variance, meaning that asymptotically it has the smallest variance around the true beta value.

2| Consistency of OLS

Under the Gauss-Markov-Assumptions the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated as the number of observations goes to infinity. The distribution of the estimator collapses to the true parameter value. Consistency also means that the t- and F-statistics are asymptotically t- and F-distributed.

5| Differences-in-differences

Used if assignment to treatment and control group is not random. Model: y = β0 + δ0·d2 + β1·dT + δ1·(d2 × dT) + u, where d2 indicates the post-treatment period, dT the treatment group, and δ1 is the difference-in-differences estimate of the treatment effect.

5| Within R-squared vs. overall R-squared

Using a fixed effects model only includes the explanatory power of the regressors (within R-squared), while the "normal" model also includes the explanatory power of the individual constants (overall R-squared). Because the between-individuals variation is typically larger than the within-individual variation, the overall R-squared is typically larger than the within R-squared.

3| What types of variables are often used in level form?

Variables measured in years. Variables that are a proportion or percent.

4| In the case of heteroskedasticity, what are more efficient estimates than OLS?

Weighted least squares (WLS), Generalized Least Squares (GLS), Feasible GLS.

4| If the sample size is small, what do I use to account for Heteroskedasticity?

Weighted least squares (WLS), Generalized Least Squares (GLS), Feasible GLS. If N is sufficiently large, estimating robust standard errors is sufficient and the use of WLS or (feasible) GLS is not necessary.

1| Endogenous variable

When the explanatory variable is correlated with the error term.

1| Exogeneity

When the explanatory variable is uncorrelated with the error term

5| Time-constant individual-specific component of the error

You can get rid of the time-constant individual-specific component of the error by first differences (subtracting one period from the other) or by fixed effects estimation (subtracting the individual-specific mean).

1| MLR Ass.4

Zero conditional mean: The error u has an expected value of zero given any values of the independent variables.

5| If ai is constant how can it be correlated with the x's?

a(i) is only constant for an individual over time, not across individuals. Therefore it can be correlated with the x's, which biases OLS.

3| Beta coefficient

Also called standardized coefficient. Y and all X are standardized (subtract the mean and divide by the standard deviation) to get beta coefficients that reflect the change in standard deviations of Y for a one-standard-deviation change in X. Can be useful if changes in Y or X (or both) are hard to interpret, e.g. the effect of competition control measures (standardized to be comparable) on market outcomes.
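
A minimal sketch in Python (statsmodels) computing standardized coefficients on simulated data; names are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data on arbitrary scales (hypothetical names)
rng = np.random.default_rng(10)
n = 300
x = rng.normal(50, 10, size=(n, 2))
y = 5 + 0.4 * x[:, 0] - 0.1 * x[:, 1] + rng.normal(size=n)

def standardize(a):
    # Subtract the mean and divide by the standard deviation
    return (a - a.mean(axis=0)) / a.std(axis=0)

# Regress standardized y on standardized x's (no constant needed:
# standardized variables have mean zero)
beta = sm.OLS(standardize(y), standardize(x)).fit()
print(beta.params)   # change in SDs of y per one-SD change in each x
```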

3| When can log models not be used?

Logs cannot be used for variables taking values ≤ 0 (though using log(x + 1) is usually acceptable to include zeroes).

2| t-statistic

The estimator minus its hypothesized value, divided by the standard error of the estimator.

3| For n categories how many dummy variables should be used?

n − 1. E.g. male (m) or female (f); high-school dropout (0), HS grad only (1), or college grad (2). Six categories: m0, m1, m2, f0, f1, f2. Five dummy variables are needed, e.g. m, 1, 2, m×1 and m×2. In this case the base group is female HS dropouts (f0).

3| For a model of the form Y = β0 + β1X + β2X^2 + u , what is the turning point if β1>0 and β2<0?

x* = |β1 / (2β2)|

3| What is the interpretation of β in a linear probability model?

β is the change in the probability of success when x changes.

5| Random effects model

• "pretty useless in econometrics" • Cov (x(it), a(i)) = 0: (i) and u(it) have to be uncorrelated (e.g. ability is uncorrelated with education) -> heroic assumption in economics • The composite error excibits autocorrelation since a(i) is constant. If the within-variation is high and the between-variation is low the correlation of the error term is high. • The RE estimator subtracts a fraction of the time averages, which makes the errors serially uncorrelated: To obtain correct s.e.'s and to perform inference, we need to transform the model and do Generalized Least Squares (GLS). For this we substract a portion of the data from itself. So it is a weighted average of OLS (where we assume the errors are uncorrelated) and FE (where we assume that the are correlated). • The random effects estimator is inbetween the OLS (θ = 0) and the FE estimator (θ = 1). The bigger the variance of the unobserved effect, the closer it is to FE. The smaller the variance of the unobserved effect, the closer it is to OLS. θ can be estimated using sample standard errors.

3| What types of variables are often used in log form?

• Dollar amounts that must be positive • Very large variables, such as population

