Econometrics Midterm

4) The following are all least squares assumptions with the exception of: A) The conditional distribution of ui given Xi has a mean of zero. B) The explanatory variable in the regression model is normally distributed. C) (Xi, Yi), i = 1, ..., n are independently and identically distributed. D) Large outliers are unlikely.

B) The explanatory variable in the regression model is normally distributed.

11) The OLS residuals A) can be calculated using the errors from the regression function. B) can be calculated by subtracting the fitted values from the actual values. C) are unknown since we do not know the population regression function.

B) can be calculated by subtracting the fitted values from the actual values.
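
A minimal sketch in Python (made-up data, illustrative variable names) showing that the residuals are simply the actual values minus the fitted values:

```python
# Minimal sketch (illustrative data): OLS residuals are actual Y minus fitted Y.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b1, b0 = np.polyfit(x, y, 1)   # OLS slope and intercept
y_hat = b0 + b1 * x            # fitted values
u_hat = y - y_hat              # residuals: actual minus fitted
print(u_hat)
```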

5) The correlation coefficient A) lies between zero and one. B) is a measure of linear association. C) is close to one if X causes Y. D) takes on a high value if you have a strong nonlinear relationship.

B) is a measure of linear association.

8) The normal approximation to the sampling distribution of β̂1 is powerful because A) many explanatory variables in real life are normally distributed. B) it allows econometricians to develop methods for statistical inference. C) many other distributions are not symmetric. D) it implies that OLS is the BLUE estimator for β1.

B) it allows econometricians to develop methods for statistical inference.

6) Analyzing the effect of minimum wage changes on teenage employment across the 48 contiguous U.S. states from 1990 to 2012 is an example of using A) time series data. B) panel data. C) having a treatment group vs. a control group, since only teenagers receive minimum wages. D) cross-sectional data.

B) panel data.

7) Multiplying the dependent variable by 100 and the explanatory variable by 100,000 leaves the A) heteroskedasticity-robust standard errors of the OLS estimators the same. B) regression R2 the same. C) OLS estimate of the intercept the same. D) OLS estimate of the slope the same.

B) regression R2 the same.
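
A quick numerical check (made-up data, with the scale factors taken from the question) that rescaling both variables changes the estimated coefficients but not R2:

```python
# Sketch: rescaling Y by 100 and X by 100,000 leaves R^2 unchanged.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(size=200)

def r_squared(x, y):
    b1, b0 = np.polyfit(x, y, 1)
    y_hat = b0 + b1 * x
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(r_squared(x, y))                  # original scale
print(r_squared(100_000 * x, 100 * y))  # rescaled: same R^2
```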

18) One of the following steps is not required when testing the null hypothesis: A) compute the standard error of β̂1. B) test for the errors to be normally distributed. C) compute the t-statistic. D) compute the p-value.

B) test for the errors to be normally distributed.

10) The regression R2 is a measure of A) whether or not X causes Y. B) the goodness of fit of your regression line. C) whether or not ESS > TSS. D) the square of the determinant of R.

B) the goodness of fit of your regression line.

If variables with a multivariate normal distribution have covariances that equal zero, then A) the correlation will most often be zero, but does not have to be. B) the variables are independent. C) you should use the distribution to calculate probabilities. D) the marginal distribution of each of the variables is no longer normal.

B) the variables are independent.

8) Heteroskedasticity means that A) homogeneity cannot be assumed automatically for the model. B) the variance of the error term is not constant. C) the observed units have different preferences. D) agents are not all rational.

B) the variance of the error term is not constant.

4) Studying inflation in Palestine from 1995 to 2012 is an example of using A) randomized controlled experiments. B) time series data. C) panel data. D) cross-sectional data.

B) time series data.

10) The sample average of the OLS residuals is A) some positive number since OLS uses squares. B) zero. C) unobservable since the population regression function is unknown. D) dependent on whether the explanatory variable is mostly positive or negative.

B) zero.
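
A short simulation (illustrative data) confirming that, with an intercept in the regression, the residuals average to zero up to floating-point error:

```python
# Sketch: the OLS first-order condition for the intercept forces mean(residuals) = 0.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 3.0 - 0.5 * x + rng.normal(size=100)

b1, b0 = np.polyfit(x, y, 1)
u_hat = y - (b0 + b1 * x)
print(u_hat.mean())   # ~0 (on the order of 1e-15)
```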

11) In the simple linear regression model Yi = β0 + β1Xi + ui, A) the intercept is typically small and unimportant. B) β0 + β1Xi represents the population regression function. C) the absolute value of the slope is typically between 0 and 1. D) β0 + β1Xi represents the sample regression function.

B) β0 + β1Xi represents the population regression function.

When the estimated slope coefficient in the simple regression model, β̂1, is zero, then A) R2 = Ȳ. B) 0 < R2 < 1. C) R2 = 0. D) R2 > (SSR/TSS).

C) R2 = 0.

9) The OLS residuals, ûi, are defined as follows: A) Ŷi − β̂0 − β̂1Xi. B) Yi − β0 − β1Xi. C) Yi − Ŷi. D) (Yi − Ȳ)².

C) Yi − Ŷi

15) Binary variables A) exclude certain individuals from your sample. B) are generally used to control for outliers in your sample. C) can take on only two values. D) can take on more than two values.

C) can take on only two values.

1) Analyzing the behavior of unemployment rates across Palestine in March of 2006 is an example of using A) time series data. B) panel data. C) cross-sectional data. D) experimental data.

C) cross-sectional data.

6) In the simple linear regression model, the regression slope A) indicates by how many percent Y increases, given a one percent increase in X. B) when multiplied with the explanatory variable will give you the predicted Y. C) indicates by how many units Y increases, given a one unit increase in X. D) represents the elasticity of Y with respect to X.

C) indicates by how many units Y increases, given a one unit increase in X.

17) Finding a small value of the p-value (for example, less than 5%) A) indicates evidence in favor of the null hypothesis. B) implies that the t-statistic is less than 1.96. C) indicates evidence against the null hypothesis. D) will only happen in roughly one in twenty samples.

C) indicates evidence against the null hypothesis.

11) The sample regression line estimated by OLS A) has an intercept that is equal to zero. B) cannot have negative and positive slopes. C) is the line that minimizes the sum of squared prediction mistakes. D) is the same as the population regression line.

C) is the line that minimizes the sum of squared prediction mistakes.

13) Multiplying the dependent variable by 100 and the explanatory variable by 100,000 leaves the A) OLS estimate of the slope the same. B) OLS estimate of the intercept the same. C) regression R2 the same. D) variance of the OLS estimators the same.

C) regression R2 the same.

12) To obtain the slope estimator using the least squares principle, you divide the A) sample variance of X by the sample variance of Y. B) sample covariance of X and Y by the sample variance of Y. C) sample covariance of X and Y by the sample variance of X. D) sample variance of X by the sample covariance of X and Y.

C) sample covariance of X and Y by the sample variance of X.
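
A small check (made-up data) that the least squares slope equals the sample covariance of X and Y divided by the sample variance of X:

```python
# Sketch: slope = cov(X, Y) / var(X), compared against np.polyfit.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=50)
y = 4.0 + 1.5 * x + rng.normal(size=50)

slope_formula = np.cov(x, y)[0, 1] / np.var(x, ddof=1)  # sample cov / sample var
slope_polyfit = np.polyfit(x, y, 1)[0]                  # OLS slope
print(slope_formula, slope_polyfit)                     # identical up to rounding
```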

6) The t-statistic is calculated by dividing A) the OLS estimator by its standard error. B) the slope by 1.96. C) the estimator minus its hypothesized value by the standard error of the estimator. D) the slope by the standard deviation of the explanatory variable.

C) the estimator minus its hypothesized value by the standard error of the estimator.
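
A worked example with hypothetical numbers (the estimate and standard error below are illustrative, not taken from the text):

```python
# Sketch: t = (estimate - hypothesized value) / standard error of the estimate.
beta1_hat = 1.8    # estimated slope (hypothetical)
beta1_null = 0.0   # value of the slope under the null hypothesis
se_beta1 = 0.6     # standard error of the estimate (hypothetical)

t_stat = (beta1_hat - beta1_null) / se_beta1
print(t_stat)      # 3.0; compare |t| with 1.96 for a 5% two-sided test
```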

5) The confidence interval for the sample regression function slope A) allows you to make statements about the economic importance of your estimate. B) can be used to compare the value of the slope relative to that of the intercept. C) adds and subtracts 1.96 from the slope. D) can be used to conduct a test about a hypothesized population regression function slope.

D) can be used to conduct a test about a hypothesized population regression function slope.
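
Using the same hypothetical numbers, a 95% confidence interval is the estimate plus or minus 1.96 standard errors; a hypothesized slope outside the interval is rejected at the 5% level:

```python
# Sketch: 95% confidence interval for the slope (hypothetical estimate and SE).
beta1_hat = 1.8
se_beta1 = 0.6

ci_lower = beta1_hat - 1.96 * se_beta1
ci_upper = beta1_hat + 1.96 * se_beta1
print(ci_lower, ci_upper)   # 0.624, 2.976: e.g. H0: beta1 = 0 is rejected
```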

The expected value of a discrete random variable A) is the outcome that is most likely to occur. B) can be found by determining the 50% value in the c.d.f. C) equals the population median. D) is computed as a weighted average of the possible outcomes of that random variable, where the weights are the probabilities of those outcomes.

D) is computed as a weighted average of the possible outcomes of that random variable, where the weights are the probabilities of those outcomes.
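
A tiny worked example with a made-up probability distribution:

```python
# Sketch: E[X] = sum of (outcome * probability of that outcome).
import numpy as np

outcomes = np.array([0, 1, 2, 3])
probs = np.array([0.1, 0.4, 0.3, 0.2])   # must sum to 1

expected_value = np.sum(outcomes * probs)
print(expected_value)   # 0*0.1 + 1*0.4 + 2*0.3 + 3*0.2 = 1.6
```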

5) The OLS estimator is derived by A) connecting the Yi corresponding to the lowest Xi observation with the Yi corresponding to the highest Xi observation. B) making sure that the standard error of the regression equals the standard error of the slope estimator. C) minimizing the sum of absolute residuals. D) minimizing the sum of squared residuals.

D) minimizing the sum of squared residuals.

13) E(ui | Xi) = 0 says that A) dividing the error by the explanatory variable results in a zero (on average). B) the sample regression function residuals are unrelated to the explanatory variable. C) the sample mean of the Xs is much larger than the sample mean of the errors. D) the conditional distribution of the error given the explanatory variable has a zero mean.

D) the conditional distribution of the error given the explanatory variable has a zero mean.

5) The sample regression line estimated by OLS A) will always have a slope smaller than the intercept. B) is exactly the same as the population regression line. C) cannot have a slope of zero. D) will always run through the point (X̄, Ȳ).

D) will always run through the point (X̄, Ȳ).
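
A quick numerical check (illustrative data) that the fitted line evaluated at the mean of X returns the mean of Y:

```python
# Sketch: the OLS line always passes through the point of sample means.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=80)
y = 0.5 + 2.0 * x + rng.normal(size=80)

b1, b0 = np.polyfit(x, y, 1)
print(b0 + b1 * x.mean(), y.mean())   # equal up to floating-point error
```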

3) To decide whether or not the slope coefficient is large or small, A) the slope coefficient must be statistically significant. B) the slope coefficient must be larger than one. C) you should change the scale of the X variable if the coefficient appears to be too small. D) you should analyze the economic importance of a given increase in X.

D) you should analyze the economic importance of a given increase in X.

14) Assume that you have collected a sample of observations on consumption and income from over 100 households. Using these observations, you estimate the regression Ci = β0 + β1Yi + ui, where C is consumption and Y is disposable income. The estimate of β1 will tell you A) ∆Income/∆Consumption. B) The amount you need to consume to survive. C) Income/Consumption. D) ∆Consumption/∆Income.

D) ∆Consumption/∆Income

2) The regression R2 is defined as follows:

R2 = ESS/TSS, where TSS = ESS + SSR.
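
A short sketch (made-up data) computing R2 as ESS/TSS and checking the decomposition TSS = ESS + SSR numerically:

```python
# Sketch: R^2 = ESS/TSS, with TSS = ESS + SSR.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=150)
y = 2.0 + 0.8 * x + rng.normal(size=150)

b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x

tss = np.sum((y - y.mean()) ** 2)      # total sum of squares
ess = np.sum((y_hat - y.mean()) ** 2)  # explained sum of squares
ssr = np.sum((y - y_hat) ** 2)         # sum of squared residuals

print(ess / tss)         # R^2
print(tss, ess + ssr)    # equal up to rounding
```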

8) E(ui | Xi) = 0 says that a. dividing the error by the explanatory variable results in a zero (on average). b. the sample regression function residuals are unrelated to the explanatory variable. c. the sample mean of the Xs is much larger than the sample mean of the errors. d. the conditional distribution of the error given the explanatory variable has a zero mean.

d. the conditional distribution of the error given the explanatory variable has a zero mean.

3) The reason why estimators have a sampling distribution is that a. economics is not a precise science. b. individuals respond differently to incentives. c. in real life you typically get to sample many times. d. the values of the explanatory variable and the error term differ across samples.

d. the values of the explanatory variable and the error term differ across samples.

Given the estimated regression line TestScore^ = 690.9 − 2.28 × STR with R2 = 0.051, interpret R2.

5.1% of the sample variance in test scores is explained by the sample variation in the student-teacher ratio (STR).

The main advantage of using multiple regression analysis over differences in means testing is that the regression technique A) gives you quantitative estimates of a unit change in X. B) assumes that the error terms are generated from a normal distribution. C) allows you to calculate p-values for the significance of your results. D) provides you with a measure of your goodness of fit.

A) gives you quantitative estimates of a unit change in X.

6) The slope estimator, β̂1, has a smaller standard error, other things equal, if a. there is more variation in the explanatory variable, X. b. there is a large variance of the error term, u. c. the sample size is smaller. d. the intercept, β0, is small.

a. there is more variation in the explanatory variable, X (a larger var(Xi) shrinks the standard error of the slope estimator).
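
A small simulation (illustrative parameter values) showing that slope estimates are less spread out when X has more variation:

```python
# Sketch: more variation in X gives a tighter sampling distribution for the slope.
import numpy as np

rng = np.random.default_rng(5)

def slope_spread(x_sd, n_reps=2000, n=100):
    slopes = []
    for _ in range(n_reps):
        x = rng.normal(scale=x_sd, size=n)
        y = 1.0 + 2.0 * x + rng.normal(size=n)
        slopes.append(np.polyfit(x, y, 1)[0])
    return np.std(slopes)

print(slope_spread(x_sd=0.5))   # larger spread of slope estimates
print(slope_spread(x_sd=3.0))   # smaller spread: more variation in X helps
```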

Consider the following regression model: Yi = β0 + β1Xi + ui. If the first four Gauss-Markov assumptions hold true but the error term contains heteroskedasticity, then a) Var(ui|xi) = 0. b) Var(ui|xi) = 1. c) Var(ui|xi) = σi². d) Var(ui|xi) = σ².

c) Var(ui|xi) = σi²

Which of the following is true of heteroskedasticity? a) Heteroskedasticity causes inconsistency in the OLS estimators. b) The population R2 is affected by the presence of heteroskedasticity. c) The OLS estimators are not the best linear unbiased estimators if heteroskedasticity is present. d) It is not possible to obtain F statistics that are robust to heteroskedasticity of an unknown form.

c) The OLS estimators are not the best linear unbiased estimators if heteroskedasticity is present.
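
Under heteroskedasticity OLS is still unbiased and consistent, but it is no longer BLUE and the usual standard errors are unreliable. A minimal sketch (made-up data, hand-rolled HC0 "sandwich" formula rather than any particular library routine) of heteroskedasticity-robust standard errors:

```python
# Sketch: HC0 robust covariance (X'X)^-1 X' diag(u_hat^2) X (X'X)^-1.
import numpy as np

rng = np.random.default_rng(6)
n = 500
x = rng.uniform(1, 10, size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=x, size=n)  # error variance grows with x

X = np.column_stack([np.ones(n), x])             # design matrix with intercept
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)     # OLS coefficients
u_hat = y - X @ beta_hat                         # residuals

XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * (u_hat ** 2)[:, None])         # sum of u_i^2 * x_i x_i'
robust_cov = XtX_inv @ meat @ XtX_inv            # HC0 sandwich estimator
print(np.sqrt(np.diag(robust_cov)))              # robust SEs: intercept, slope
```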

