ECO 4K Chapters 5-8: Other Multiple-Choice Questions


In practical econometric​ applications,

it is better to assume that the errors might be heteroskedastic unless you have compelling reasons to believe otherwise.

The adjusted $R^2$, or $\bar{R}^2$, is given by:

$\bar{R}^2 = 1 - \dfrac{n-1}{n-k-1}\,\dfrac{SSR}{TSS}$.
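As a quick check of this formula, here is a minimal sketch with made-up SSR, TSS, $n$, and $k$ values (the numbers are illustrative only, not from any question in this set):

```python
# Adjusted R-squared from the sum of squared residuals (SSR), the total
# sum of squares (TSS), the sample size n, and the number of regressors k.
def adjusted_r2(ssr, tss, n, k):
    return 1 - (n - 1) / (n - k - 1) * (ssr / tss)

# Illustrative values: R^2 would be 1 - 200/500 = 0.60; the adjustment
# shrinks it slightly because k > 0.
print(adjusted_r2(ssr=200.0, tss=500.0, n=100, k=3))
```

Note that the penalty factor $(n-1)/(n-k-1)$ exceeds 1 whenever $k \geq 1$, which is why $\bar{R}^2 < R^2$.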

Consider the following regression equation: $Y_i = \beta_0 + \beta_1 X_i + u_i$, where $X_i$, $Y_i$, $\beta_0$, $\beta_1$, and $u_i$ denote the regressor, the regressand, the intercept coefficient, the slope coefficient, and the error term for the $i$th observation, respectively. Which of the following is the formula for calculating the heteroskedasticity-robust estimator of the variance of $\hat{\beta}_1$?

$\hat{\sigma}^2_{\hat{\beta}_1} = \dfrac{1}{n} \cdot \dfrac{\frac{1}{n-2}\sum_{i=1}^{n}(X_i - \bar{X})^2\,\hat{u}_i^2}{\left[\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2\right]^2}$.
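A minimal sketch of this estimator for a simple regression, computing OLS coefficients and residuals in plain Python (the data below are made-up illustrative numbers):

```python
# Heteroskedasticity-robust estimator of Var(beta1_hat) for the simple
# regression Y_i = b0 + b1*X_i + u_i, built from the OLS residuals.
def robust_var_beta1(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    # Numerator: (1/(n-2)) * sum of (X_i - Xbar)^2 * uhat_i^2
    numer = sum((xi - xbar) ** 2 * ui ** 2 for xi, ui in zip(x, resid)) / (n - 2)
    # Denominator: [ (1/n) * sum of (X_i - Xbar)^2 ]^2
    denom = (sxx / n) ** 2
    return (1 / n) * numer / denom

# Made-up data for illustration.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
print(robust_var_beta1(x, y))
```

The square root of this quantity is the heteroskedasticity-robust standard error used in t-statistics and confidence intervals.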

Suppose the population regression is of the form: $\ln Y_i = \beta_0 + \beta_1 X_i + u_i$, where $\beta_0$, $\beta_1$, and $u_i$ represent the intercept, the slope coefficient on $X$, and the error term for the $i$th observation. What is the expected value of $Y_i$ given $X_i$, i.e., $E(Y_i \mid X_i)$?

$E(Y_i \mid X_i) = e^{\beta_0 + \beta_1 X_i}\, E(e^{u_i} \mid X_i)$.

Consider a regression with two variables, in which $X_{1i}$ is the variable of interest and $X_{2i}$ is the control variable. Conditional mean independence requires:

$E(u_i \mid X_{1i}, X_{2i}) = E(u_i \mid X_{2i})$.

The following OLS assumption is most likely violated by omitted variable bias:

$E(u_i \mid X_i) = 0$.

Which of the following is the mathematical statement for the conditional mean independence​ assumption?

$E(u_i \mid X_{1i}, \ldots, X_{ki}, W_{1i}, \ldots, W_{ri}) = E(u_i \mid W_{1i}, \ldots, W_{ri})$.

Consider the polynomial regression model of degree $r$: $Y_i = \beta_0 + \beta_1 X_i + \beta_2 X_i^2 + \cdots + \beta_r X_i^r + u_i$. The null hypothesis that the regression is linear, against the alternative that it is a polynomial of degree $r$, corresponds to:

$H_0: \beta_2 = 0, \beta_3 = 0, \ldots, \beta_r = 0$ vs. $H_1$: at least one $\beta_j \neq 0$, $j = 2, \ldots, r$.

Which of the following statements is​ true?

$\bar{R}^2$ can be negative when the regressors, taken together, reduce the sum of squared residuals by such a small amount that this reduction fails to offset the factor $\frac{n-1}{n-k-1}$.

A study tests the effect of earning a Master's degree on the salaries of professionals. Suppose that the salaries of the professionals ($S_i$) do not depend on any other variables. Let $D_i$ be a variable that takes the value 0 if an individual has not earned a Master's degree, and the value 1 if they have. What would be the regression model that the researcher wants to test?

$S_i = \beta_0 + \beta_1 D_i + u_i$, $i = 1, \ldots, n$.

Which of the following specifies a nonlinear regression that models this shape?

$Y_i = \beta_0 + \beta_1 X_i + \beta_2 X_i^2 + u_i$.

Which of the following is the modified regression so that hypothesis testing can be carried out using the t​-statistic?

$Y_i = \beta_0 + \gamma_1 Edu_{1i} + \beta_2 W_{1i} + u_i$, where $\gamma_1 = \beta_1 - \beta_2$ and $W_{1i} = Edu_{1i} + Exp_{2i}$.

A polynomial regression model is specified​ as:

$Y_i = \beta_0 + \beta_1 X_i + \beta_2 X_i^2 + \cdots + \beta_r X_i^r + u_i$.

In the model $\ln(Y_i) = \beta_0 + \beta_1 X_i + u_i$, the elasticity of $E(Y \mid X)$ with respect to $X$ is:

$\beta_1 X$.

Consider the regression $TestScore_i = \beta_0 + \beta_1 STRsmall_i + \beta_2 STRlarge_i + u_i$. What expected signs of $\beta_1$ and $\beta_2$ are consistent with the statement above?

$\beta_1 > 0$ and $\beta_2 < 0$.

Which of the following statements is true for the components of the homoskedasticity-only t-statistic testing the hypothesis $H_0: \beta_1 = \beta_{1,0}$ vs. $H_1: \beta_1 \neq \beta_{1,0}$?

$(\hat{\beta}_1 - \beta_{1,0})$ follows a standard normal distribution, while $\tilde{\sigma}^2_{\hat{\beta}_1}$ follows a chi-squared distribution with $n - 2$ degrees of freedom.

A "Cobb-Douglas" production function relates production ($Q$) to factors of production, capital ($K$), labor ($L$), raw materials ($M$), and an error term $u$ using the equation $Q = \lambda K^{\beta_1} L^{\beta_2} M^{\beta_3} e^{u}$, where $\lambda$, $\beta_1$, $\beta_2$, and $\beta_3$ are production parameters. Taking logarithms of both sides of the equation yields $\ln(Q) = \beta_0 + \beta_1 \ln(K) + \beta_2 \ln(L) + \beta_3 \ln(M) + u$. Suppose that you thought that the value of $\beta_2$ was not constant, but rather increased when $K$ increased. Which of the following regression functions captures this dynamic relationship?

$\ln(Q) = \beta_0 + \beta_1 \ln(K) + \beta_2 \ln(L) + \beta_3 \ln(M) + \beta_4 [\ln(L) \times \ln(K)] + u$.

Consider the following least squares specification relating test scores to income: $\widehat{TestScore} = 557.8 + 36.42\,\ln(Income)$. According to this equation, a 1% increase in income is associated with an increase in test scores of:

0.36 points.
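The 0.36-point answer follows from the log specification: a 1% income increase changes the predicted score by roughly $0.01 \times 36.42$. A quick check:

```python
import math

# Predicted change in test score from a 1% increase in income, using the
# slope 36.42 on ln(Income) from the estimated equation above.
b1 = 36.42
exact_change = b1 * math.log(1.01)   # exact change: b1 * ln(1.01)
approx_change = b1 * 0.01            # the usual 0.01 * b1 approximation
print(round(exact_change, 2), round(approx_change, 2))  # both round to 0.36
```

The approximation works because $\ln(1.01) \approx 0.01$ for small percentage changes.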

Consider the following regression output, where the dependent variable is test scores and the two explanatory variables are the student-teacher ratio and the percent of English learners: $\widehat{TestScore} = 698.9 - 1.10 \times STR - 0.650 \times PctEL$. You are told that the t-statistic on the student-teacher ratio coefficient is 2.56. The standard error therefore is approximately:

0.43 ($= 1.10 / 2.56$).
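Since the t-statistic equals the coefficient divided by its standard error, the standard error can be backed out directly:

```python
# Recovering the standard error from the coefficient and its t-statistic:
# t = |coefficient| / SE, so SE = |coefficient| / t.
coef_str = -1.10   # coefficient on STR from the regression above
t_stat = 2.56
se = abs(coef_str) / t_stat
print(round(se, 2))  # 0.43
```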

A researcher is interested in the effect of computer usage on test scores. Using school district data, she regresses district average test scores on the number of computers per student. What are possible sources of bias for $\hat{\beta}_1$, the estimated effect on test scores of increasing the number of computers per student? For each source of bias below, determine whether $\hat{\beta}_1$ will be biased up or down.

1. Average income per capita in the district. If this variable is omitted, it will likely produce an upward bias of the estimated effect on test scores of increasing the number of computers per student.
2. The availability of computerized adaptive learning tools in the district. If this variable is omitted, it will likely produce an upward bias of the estimated effect on test scores of increasing the number of computers per student.
3. The availability of computer-related leisure activities in the district. If this variable is omitted, it will likely produce a downward bias of the estimated effect on test scores of increasing the number of computers per student.

The critical value of $F_{4,\infty}$ at the 5% significance level is:

2.37

Assume that you had estimated the following quadratic regression model: $\widehat{TestScore} = 607.3 + 3.85\,Income - 0.0423\,Income^2$. If income increased from 10 to 11 (\$10,000 to \$11,000), then the predicted effect on test scores would be:

2.96.
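The 2.96 answer is simply the difference between predicted values at the two income levels:

```python
# Predicted effect on test scores of raising income from 10 to 11,
# using the fitted quadratic above.
def predicted_score(income):
    return 607.3 + 3.85 * income - 0.0423 * income ** 2

effect = predicted_score(11) - predicted_score(10)
print(round(effect, 2))  # 2.96
```

Because the model is quadratic, the effect of a one-unit change in income depends on the starting level; the same one-unit increase from a higher income would produce a smaller predicted gain.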

You have estimated the relationship between test scores and the student-teacher ratio under the assumption of homoskedasticity of the error terms. The regression output is as follows: $\widehat{TestScore} = 698.9 - 2.28 \times STR$, and the standard error on the slope is 0.48. The homoskedasticity-only "overall" regression F-statistic for the hypothesis that the regression $R^2$ is zero is approximately:

22.56
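With a single slope coefficient, the homoskedasticity-only F-statistic equals the square of the t-statistic, which a quick computation confirms:

```python
# With one restriction, the homoskedasticity-only F-statistic is the
# square of the corresponding t-statistic: t = coefficient / SE, F = t^2.
coef, se = -2.28, 0.48
t_stat = coef / se          # -4.75
f_stat = t_stat ** 2
print(round(f_stat, 2))     # 22.56
```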

Consider the estimated equation from your textbook: $\widehat{TestScore} = 698.9 - 2.28 \times STR$, $R^2 = 0.051$, $SER = 18.6$, with standard errors 10.4 (intercept) and 0.52 (slope). The t-statistic for the slope is approximately:

4.38.

In what fraction of the samples would the value $\beta_1 = 0$ be included in the 95% confidence interval for $\beta_1$?

95% of the confidence intervals would contain the value $\beta_1 = 0$.

Suppose n and k denote the sample size and number of regressors respectively. What​ degrees-of-freedom adjustment is required while calculating the standard error of the regression ​(SER​) from the sum of squared residuals ​(SSR​)?

A division by $n - k - 1$.
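A minimal sketch of the SER calculation with this degrees-of-freedom adjustment (the SSR, $n$, and $k$ values below are made-up illustrative numbers):

```python
import math

# Standard error of the regression: SER = sqrt(SSR / (n - k - 1)),
# where n is the sample size and k the number of regressors.
def ser(ssr, n, k):
    return math.sqrt(ssr / (n - k - 1))

# Illustrative values only.
print(ser(ssr=346.0, n=100, k=3))
```

The adjustment replaces $n$ with $n - k - 1$ because $k + 1$ coefficients (including the intercept) are estimated before the residuals are computed.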

A "Cobb-Douglas" production function relates production ($Q$) to factors of production, capital ($K$), labor ($L$), raw materials ($M$), and an error term $u$ using the equation $Q = \lambda K^{\beta_1} L^{\beta_2} M^{\beta_3} e^{u}$, where $\lambda$, $\beta_1$, $\beta_2$, and $\beta_3$ are production parameters. Suppose that you have data on production and the factors of production from a random sample of firms with the same Cobb-Douglas production function. Which of the following regression functions provides the most useful transformation to estimate the model?

A logarithmic regression function.

Which of the following statements describes a way of determining the degree of the polynomial in X which best models a nonlinear​ regression? Let r denote the highest power of X that is included in the regression.

One way is to test whether the coefficients associated with the largest values of $r$ are equal to zero.

Which of the following statements best describe how the $R^2$ and the adjusted $R^2$ ($\bar{R}^2$) should be interpreted in practice? (Check all that apply.)

An increase in the $R^2$ or $\bar{R}^2$ does not necessarily mean that an added variable is statistically significant. A high $R^2$ or $\bar{R}^2$ does not mean that there is no omitted variable bias.

The F-statistic for omitting BDR and Age from the regression is $F = 0.09$. Are the coefficients on BDR and Age statistically different from zero at the 1% level?

Because 0.09 is less than the critical value, the coefficients are not jointly significant at the 1% level.

Let Female be an indicator variable that is equal to 1 for females and 0 for males. A regression of the logarithm of earnings onto Female yields $\widehat{\ln(Earnings)} = 6.48 - 0.44\,Female$, $SER = 2.65$, with standard errors 0.01 (intercept) and 0.05 (slope). The estimated coefficient on Female is $-0.44$. Explain what this value means.

Both A and C are correct: $\ln(Earnings)$ for females is, on average, 0.44 lower than men's, and earnings for females are, on average, 44% lower than men's.

You have estimated a linear regression model relating Y to X. Your professor​ says, "I think that the relationship between Y and X is​ nonlinear." How would you test the adequacy of your linear​ regression? ​(Check all that apply​)

Compare the fit of the linear regression to that of the nonlinear regression model. If adding a quadratic term, you could test the hypothesis that the estimated coefficient on the quadratic term is significantly different from zero.

Which of the following describes how to test the null hypothesis that $\beta_1 = 0$ and $\beta_2 = 0$?

Compute the standard errors, the correlation between $\hat{\beta}_1$ and $\hat{\beta}_2$, the F-statistic, and the p-value associated with the F-statistic. Reject the null hypothesis if the p-value is less than some relevant significance level.

Consider the following multiple regression model: $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + u_i$. Which of the following describes how to test the null hypotheses that either $\beta_1 = 0$ or $\beta_2 = 0$?

Compute the standard errors, the t-statistics, and the p-values for both $\beta_1$ and $\beta_2$. Reject the null hypothesis if the p-value is less than some relevant significance level.

What are the limitations of the​ Gauss-Markov theorem? ​(Check all that apply.​)

Even if the conditions of the theorem hold, other estimators that are not linear or conditionally unbiased may, under some conditions, be more efficient than OLS. If the error term is heteroskedastic, then the OLS estimator is not BLUE.

The coefficient on Female is now $-0.28$. Why has it changed from the first regression?

All of the above: Female is correlated with the two newly included variables; MarketValue is important for explaining $\ln(Earnings)$; and the first regression suffered from omitted variable bias.

Consider the following multiple regression model: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + u$. Which of the following explains why two perfectly collinear regressors cannot be included in a linear multiple regression? (Check all that apply.)

For the case of two regressors and homoskedasticity, it can be shown mathematically that the variance of the estimated coefficient $\hat{\beta}_1$ goes to infinity as the correlation between $X_1$ and $X_2$ goes to one. Intuitively, if one regressor is a linear function of another, OLS cannot identify the partial effect of one while holding the other constant.

Consider the following multiple regression model: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + u$. Which of the following explains why it is difficult to estimate precisely the partial effect of $X_1$, holding $X_2$ constant, if $X_1$ and $X_2$ are highly correlated?

For the case of two regressors and homoskedasticity, it can be shown mathematically that the variance of the estimated coefficient $\hat{\beta}_1$ goes to infinity as the correlation between $X_1$ and $X_2$ goes to one. Intuitively, estimating the partial effect of $X_1$ holding $X_2$ constant becomes difficult because, after controlling for $X_2$, there is little variation left with which to estimate the partial effect of $X_1$ precisely.

What is the effect on $R^2$ and $SSR$ if the coefficient of the added regressor is exactly 0?

If the coefficient of the added regressor is exactly 0, neither the $R^2$ nor the $SSR$ changes.

Suppose you have developed and estimated a base specification and a list of alternative specifications. Which of the following statements are​ true? ​(Check all that apply.​)

If the estimates of the coefficients of interest are numerically similar across the alternative​ specifications, then this provides evidence that the estimates from your base specification are reliable. If the estimates of the coefficients of interest change substantially across​ specifications, this provides evidence that the original specification had omitted variable bias and so might your alternative specifications.

Which of the following statements describe what the​ Gauss-Markov theorem​ states?

If the three least squares assumptions hold and if the errors are homoskedastic, then the OLS estimator of a given population parameter is the most efficient linear conditionally unbiased estimator.

Suppose that the errors in a regression model are heteroskedastic.

In large samples, the probability that a confidence interval constructed as $\pm 1.96$ homoskedasticity-only standard errors contains the true value of the coefficient will not be 95%.

Which of the following statements are true regarding the specifications given in the above​ table? ​(Check all that apply​.)

In the 3rd specification, the null hypothesis that the coefficients on $F^2$ and $F^3$ are zero is rejected by the F-statistic at the 1% significance level. The value of $\bar{R}^2$ in the 1st specification suggests that the number of cows alone explains 42% of the variation in the production of milk.

Suppose the population multiple regression model is of the form: $Y_i = \beta_0 X_{0i} + \beta_1 X_{1i} + \beta_2 X_{2i} + u_i$, where $X_{0i} = 1$, $i = 1, \ldots, n$.

In the given model, $\beta_0$ is the constant term and $X_{0i}$ is the constant regressor.

How would you change the regression if you suspected that the effect of experience on earnings was different for men than for​ women?

Include the interaction terms $Female \times PotentialExperience$ and $Female \times PotentialExperience^2$.

Why is it impossible to compute OLS estimators in the presence of perfect​ multicollinearity?

It is impossible to compute OLS estimators in the presence of perfect multicollinearity because it produces division by 0.

Which of the following statements are true in describing the Bonferroni method of testing hypotheses on multiple​ coefficients? ​(Check all that apply​.)

It modifies the​ "one-at-a-time" method so that it uses different critical values that ensure that its size equals its significance level. Its advantage is that it applies very generally.

Assume a father's weight is correlated with his years of education, but is not a determinant of the child's years of formal education. Which of the following statements describes the consequences of omitting the father's weight from the above regression?

It will not result in omitted variable bias because the omitted​ variable, weight, is not a determinant of the dependent variable.

In which of the following cases would the weighted least squares estimator​ (WLS) or the least absolute deviations estimator​ (LAD) be preferred to the OLS​ estimator? ​(Check all that apply.​)

LAD is preferred to OLS if extreme outliers are not rare in the data. WLS is preferred to OLS if the errors are heteroskedastic.

Suppose you want to test the null hypothesis that $\beta_1 = 0$ and $\beta_2 = 0$. Is the result of the joint test implied by the result of the two separate tests?

No.

You are interested in $\beta_1$, the causal effect of $X_1$ on $Y$. Suppose that $X_1$ and $X_2$ are uncorrelated. You estimate $\beta_1$ by regressing $Y$ onto $X_1$ (so that $X_2$ is not included in the regression). Does this estimator suffer from omitted variable bias?

No.

Suppose you learned that $Y_i$ and $X_i$ were independent. Would you be surprised?

No, I wouldn't be surprised, because the null hypothesis that $\beta_1$ is zero was not rejected at the 5% significance level.

Which of the following statements correctly describes the omitted variable​ bias?

Omitted variable bias arises when the omitted variable is correlated with a regressor and is a determinant of the dependent variable.

Suppose the population regression function is given by: $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \cdots + \beta_k X_{ki} + u_i$, $i = 1, \ldots, n$, where $u_i$ is the error term and $\beta_1, \ldots, \beta_k$ are the coefficients on the $X$'s. Which of the following statements is not true about omitted variable bias?

Omitted variable bias violates the assumption that $X_{1i}, \ldots, X_{ki}$ are i.i.d.

In which of the following scenarios does perfect multicollinearity​ occur?

Perfect multicollinearity occurs when one of the regressors is a perfect linear function of the other regressors.

Is the coefficient on BDR statistically significantly different from​ zero?

Since the t-statistic < 1.96, the coefficient on BDR is not statistically significantly different from zero.

Imagine that you were told that the t-statistic for the slope coefficient of the regression line $\widehat{TestScore} = 698.9 - 2.28 \times STR$ was 4.38. What are the units of measurement for the t-statistic?

Standard deviations.

Are the OLS estimators likely to be biased and​ inconsistent?

The OLS estimators are likely biased and inconsistent because there are omitted variables correlated with parking lot area per pupil that also explain test​ scores, such as ability.

Which of the following statements describes the mathematical implications of​ heteroskedasticity?

The OLS estimators remain unbiased, consistent, and asymptotically normal, but do not necessarily have the least variance among all estimators that are linear in $Y_1, \ldots, Y_n$ conditional on $X_1, \ldots, X_n$.

The SER is 2.65. Explain what this value means.

The error term has a standard deviation of 2.65​ (measured in​ log-points).

Consider the following regression equation: $Y_i = \beta_0 + \beta_1 X_i + u_i$, where $X_i$, $Y_i$, $\beta_0$, $\beta_1$, and $u_i$ denote the regressor, the regressand, the intercept coefficient, the slope coefficient, and the error term for the $i$th observation, respectively. When would the error term be homoskedastic?

The error term is homoskedastic if the variance of the conditional distribution of $u_i$ given $X_i$ is constant for $i = 1, \ldots, n$, and in particular does not depend on $X_i$.

If the error term is​ homoskedastic, the F​-statistic can be written in terms of the improvement in the fit of the regression. How can we measure this improvement in​ fit? ​(Check all that apply​.)

The improvement in the fit of the regression can be measured by the decrease in the sum of squared residuals ($SSR$). It can also be measured by the increase in the regression $R^2$.

Which of the following statements is​ correct?

The larger the correlation between $X_1$ and $X_2$, the larger the variance of $\hat{\beta}_1$. Nevertheless, it is best to include $X_2$ in the regression if it is a determinant of $Y$.

Suppose that $Y_i$ and $X_i$ are independent and many samples of size $n = 495$ are drawn and regressions estimated. Suppose that you test the null hypothesis that $\beta_1$ is zero at the 5% level and construct a 95% confidence interval for $\beta_1$. In what fraction of the samples would the null hypothesis that $\beta_1$ is zero be rejected at the 5% level?

The null hypothesis would be rejected in​ 5% of the samples.

Which of the following statements correctly describe the reasons behind the differences observed in the coefficients in the given​ specifications? ​(Check all that apply​.)

The number of CCTV cameras installed in the district appears to be redundant: as reported in regression (4), adding it to regression (2) has a negligible effect on the estimated coefficients on LEOP and DTP or their standard errors. The significant rise in the coefficient on LEOP from the 1st specification to the 4th shows the presence of omitted variable bias in the 1st specification.

Which of the following statements are true about the explanatory variables used in this​ study? ​(Check all that apply.​)

The regression does not fall into the dummy variable trap due to the absence of the intercept term. The estimated coefficients will be jointly normally distributed.

Why are the answers to Scenario A and Scenario B ​different?

The regression is nonlinear in experience.

Suppose that the crime rate is positively affected by the fraction of young males in the population, and that counties with high crime rates tend to hire more police. Use the following expression for omitted variable bias to determine whether the regression will likely over- or underestimate the effect of police on the crime rate: $\hat{\beta}_1 \xrightarrow{p} \beta_1 + \rho_{Xu}\,\dfrac{\sigma_u}{\sigma_X}$.

The regression will likely overestimate $\beta_1$. That is, $\hat{\beta}_1$ is likely to be larger than $\beta_1$.

Why is the regressor West omitted from the​ regression? What would happen if it was​ included?

The regressor West is omitted to avoid perfect multicollinearity. If West is​ included, then the OLS estimator cannot be computed in this situation.

Which of the following economic relationships may exhibit a shape like​ this? ​(Check all that apply​)

The relationship between income and fertility. The relationship between wage earnings and years of experience. The relationship between time spent studying for an exam and grade for such exam.

Using 143​ observations, assume that you had estimated a simple regression function and that your estimate for the slope was​ 0.04, with a standard error of 0.01. You want to test whether or not the estimate is statistically significant. Which of the following decisions is the only correct​ one?

The slope is statistically significant since it is four standard errors away from zero.

Which of the following statements is NOT true about the $\bar{R}^2$? (Check all that apply.)

The value of $\bar{R}^2$ increases when a new regressor is added to the regression equation. The value of $\bar{R}^2$ always lies between 0 and 1.

A researcher plans to study the causal effect of police on crime using data from a random sample of U.S. counties. He plans to regress the county's crime rate on the (per capita) size of the county's police force. Why is this regression likely to suffer from omitted variable bias?

There are other important determinants of a county's crime rate, including demographic characteristics of the population, that if left out of the regression would bias the estimated partial effect of the (per capita) size of the county's police force.

A researcher tries to estimate the regression $TestScore_i = \beta_0 + \beta_1 STRsmall_i + \beta_2 STRmoderate_i + \beta_3 STRlarge_i + u_i$ and finds that her computer crashes. Why?

There is perfect multicollinearity between the regressors.

Suppose that a set of control variables does not satisfy the conditional mean independence condition, $E(u_i \mid X_i, W_i) = E(u_i \mid W_i)$, where $X_i$ denotes the variable or variables of interest and $W_i$ denotes one or more control variables. If the OLS estimators are jointly normally distributed and each $\hat{\beta}_j$ is distributed $N(\beta_j, \sigma^2_{\hat{\beta}_j})$, $j = 0, \ldots, k$, then which of the following statements holds true in this case?

There will remain omitted determinants of $Y$ that are correlated with $X$, even after holding $W$ constant, and the result is omitted variable bias.

Suppose the population regression is of the form: $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + u_i$.

The 95% confidence set for the coefficients $\beta_1$ and $\beta_2$ will be an ellipse which contains the pairs of values of $\beta_1$ and $\beta_2$ that cannot be rejected using the F-statistic at the 5% significance level.

In a multiple regression, the interaction term between the two independent variables $X_1$ and $X_2$ is their product $X_1 \times X_2$. The coefficient on $(X_1 \times X_2)$ is the effect of a one-unit increase in the product of $X_1$ and $X_2$, above and beyond the sum of the individual effects of a unit increase in $X_1$ alone and a unit increase in $X_2$ alone.

This holds true whether $X_1$ and/or $X_2$ are continuous or binary.

Consider the following regression equation: $Y_i = \beta_0 + \beta_1 X_i + \beta_2 (X_i \times D_i) + u_i$, where $\beta_0$, $\beta_1$, $\beta_2$, and $u_i$ are the intercept, the slope coefficient on $X_i$, the coefficient on the interaction term $(X_i \times D_i)$, and the error term, respectively, and $D_i$ is a binary variable.

This regression equation has a different slope and the same intercept for the two values of the binary variable.

If the errors are​ heteroskedastic, then:

WLS is BLUE if the conditional variance of the errors is known up to a constant factor of proportionality.
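The transformation behind this result can be sketched numerically. Below is a minimal numpy sketch with synthetic data (all numbers, and the variance function var(u_i | x_i) proportional to x_i, are assumptions for illustration): dividing each observation by the square root of the known variance factor makes the transformed errors homoskedastic, so OLS on the transformed data is WLS.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1.0, 5.0, n)
# Assume var(u_i | x_i) is proportional to x_i (known up to a constant factor)
u = rng.normal(0.0, np.sqrt(x))
y = 2.0 + 3.0 * x + u

# WLS: divide every term by sqrt(x_i); the transformed errors u_i/sqrt(x_i)
# are then homoskedastic, and OLS on the transformed data is BLUE.
w = 1.0 / np.sqrt(x)
Xt = np.column_stack([w, w * x])   # transformed intercept and slope columns
yt = w * y
beta_wls, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
print(beta_wls)  # close to the true values [2, 3]
```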

Suppose you are interested in investigating the wage gender gap using data on earnings of men and women. Which of the following models best serves this​ purpose?

Wage = β0 + β1 Female + u, where Female (= 1 if female) is an indicator variable and u is the error term.

Consider the regression model Wage = β0 + β1 Female + u, where Female (= 1 if female) is an indicator variable and u is the error term. Identify the dependent and independent variables in the regression model above.

Wage is the dependent variable, while Female is the independent variable.
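As a quick illustration of why this specification captures the gap, here is a minimal numpy sketch with made-up wage data: with a single dummy regressor, the OLS intercept equals the mean wage of the omitted group (males) and the slope equals the female-minus-male difference in means.

```python
import numpy as np

# Made-up wage data (dollars per hour); Female = 1 for women
wage   = np.array([20.0, 22.0, 25.0, 18.0, 19.0, 21.0])
female = np.array([0.0,  0.0,  0.0,  1.0,  1.0,  1.0])

X = np.column_stack([np.ones(len(wage)), female])
b0, b1 = np.linalg.lstsq(X, wage, rcond=None)[0]
print(b0, b1)  # b0 = male mean wage, b1 = female-minus-male gap
```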

Which of the following statements is​ true?

We would reject the null hypothesis if the sum of squared residuals ​(SSR​) from the unrestricted regression is sufficiently smaller than that from the restricted regression.
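The homoskedasticity-only F-statistic built from these two SSRs can be sketched as follows, using F = [(SSR_r − SSR_u)/q] / [SSR_u/(n − k − 1)], where q is the number of restrictions (the data-generating numbers below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.8 * x1 + 0.5 * x2 + rng.normal(size=n)

def ssr(X, y):
    """Sum of squared residuals from an OLS fit of y on X."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    return e @ e

const = np.ones(n)
ssr_u = ssr(np.column_stack([const, x1, x2]), y)  # unrestricted model
ssr_r = ssr(const[:, None], y)                    # restricted: beta1 = beta2 = 0
q, k = 2, 2                                       # restrictions, regressors
F = ((ssr_r - ssr_u) / q) / (ssr_u / (n - k - 1))
print(F)  # large F: the restricted SSR is much bigger, so reject H0
```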

Does this imply that age is an important determinant of​ earnings?

Yes, age is an important determinant of earnings because the low p​-value implies that the coefficient on age is statistically significant at the​ 1% level.

All of the following are true with the exception of one​ condition:

a high R² or R̄² always means that an added variable is statistically significant.

Consider the population regression of log earnings [Y_i, where Y_i = ln(Earnings_i)] against two binary variables: whether a worker is married (D1_i, where D1_i = 1 if the ith person is married) and the worker's gender (D2_i, where D2_i = 1 if the ith person is female), and the product of the two binary variables: Y_i = β0 + β1 D1_i + β2 D2_i + β3 (D1_i × D2_i) + u_i. The interaction term:

allows the population effect on log earnings of being married to depend on gender.

Under the least squares assumptions for the multiple regression problem (zero conditional mean for the error term, all X_i and Y_i being i.i.d., all X_i and u_i having finite fourth moments, no perfect multicollinearity), the OLS estimators for the slopes and intercept:

are unbiased and consistent.

The interpretation of the slope coefficient in the model ln(Y_i) = β0 + β1 ln(X_i) + u_i is as follows:

a 1% change in X is associated with a β1% change in Y.

In the multiple regression​ model, the t​-statistic for testing that the slope is significantly different from zero is​ calculated:

by dividing the estimate by its standard error.

If you had a​ two-regressor regression​ model, then omitting one variable that is​ relevant:

can result in a negative value for the coefficient of the included variable, even though the coefficient would have a significant positive effect on Y if the omitted variable were included.

The​ homoskedasticity-only F​-statistic and the​ heteroskedasticity-robust F​-statistic typically​ are:

different

A binary variable is often called​ a:

dummy variable.

When there are two​ coefficients, the resulting confidence sets​ are:

ellipses.

Consider the multiple regression model with two regressors X1 and X2, where both variables are determinants of the dependent variable. When omitting X2 from the regression, there will be omitted variable bias for β̂1:

if X1 and X2 are correlated.

Imperfect​ multicollinearity:

implies that it will be difficult to estimate precisely one or more of the partial effects using the data at hand.

A nonlinear​ function:

is a function with a slope that is not constant.

The 95% confidence interval for β1 is the interval:

(β̂1 − 1.96 SE(β̂1), β̂1 + 1.96 SE(β̂1))
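A minimal sketch of this calculation, using made-up numbers for the estimate and its standard error:

```python
# Hypothetical estimate and standard error (illustrative values only)
beta_hat, se = 0.51, 0.12
lo = beta_hat - 1.96 * se  # lower bound of the 95% confidence interval
hi = beta_hat + 1.96 * se  # upper bound
print(round(lo, 3), round(hi, 3))  # 0.275 0.745
```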

Imperfect​ multicollinearity:

means that two or more of the regressors are highly correlated.

Consider the multiple regression model with two regressors X1 and X2, where both variables are determinants of the dependent variable. You first regress Y on X1 only and find no relationship. However, when regressing Y on X1 and X2, the slope coefficient β̂1 changes by a large amount. This suggests that your first regression suffers from:

omitted variable bias.

The dummy variable trap is an example​ of:

perfect multicollinearity.
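The trap can be demonstrated directly: including a constant together with a full set of mutually exclusive, exhaustive dummies makes the design matrix rank-deficient. A minimal numpy sketch (the data are made up):

```python
import numpy as np

# Made-up data: Female and Male are mutually exclusive, exhaustive dummies
female = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
male = 1.0 - female
const = np.ones(len(female))

# Constant + both dummies: female + male == const, so the columns are
# linearly dependent (perfect multicollinearity) and OLS has no unique solution.
X_trap = np.column_stack([const, female, male])
print(np.linalg.matrix_rank(X_trap))  # 2 < 3 columns

# Dropping one dummy (the omitted category) fixes the rank deficiency.
X_ok = np.column_stack([const, female])
print(np.linalg.matrix_rank(X_ok))    # 2 == 2 columns: full column rank
```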

The best way to interpret polynomial regressions is​ to:

plot the estimated regression function and to calculate the estimated effect on Y associated with a change in X for one or more values of X.

If you wanted to​ test, using a​ 5% significance​ level, whether or not a specific slope coefficient is equal to​ one, then you​ should:

subtract 1 from the estimated​ coefficient, divide the difference by the standard​ error, and check if the resulting ratio is larger than 1.96.
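That procedure, with hypothetical numbers for the estimated coefficient and its standard error, looks like this:

```python
# Hypothetical estimated slope and standard error (illustrative values only)
beta_hat, se = 1.38, 0.15
t = (beta_hat - 1.0) / se        # test H0: slope = 1
reject = abs(t) > 1.96           # 5% two-sided critical value
print(round(t, 2), reject)       # 2.53 True
```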

Perfect multicollinearity can be rectified by modifying

the independent variables.

In the​ log-log model, the slope coefficient​ indicates:

the elasticity of Y with respect to X.
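A quick numeric check of this interpretation (the coefficients b0 and b1 below are made up): for Y = exp(b0) · X^b1, a 1% increase in X changes Y by approximately b1 percent.

```python
import numpy as np

# Made-up coefficients: ln(Y) = b0 + b1 ln(X), i.e. Y = exp(b0) * X**b1
b0, b1 = 0.5, 2.0
y = lambda x: np.exp(b0) * x ** b1

x0 = 10.0
pct_change_y = (y(1.01 * x0) - y(x0)) / y(x0) * 100  # response to a 1% rise in X
print(round(pct_change_y, 2))  # 2.01, close to b1 = 2 percent
```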

The ​t-statistic is calculated by​ dividing:

the estimator minus its hypothesized value by the standard error of the estimator.

If the errors are​ heteroskedastic,

then the t​-statistic computed using homoskedasticity-only standard error does not have a standard normal​ distribution, even in large samples.

When testing a joint​ hypothesis, you​ should:

use the F​-statistics and reject at least one of the hypotheses if the statistic exceeds the critical value.

Using the textbook example of 420 California school districts and the regression of test scores on the​ student-teacher ratio, you find that the standard error on the slope coefficient is 0.51 when using the​ heteroskedasticity-robust formula, while it is 0.48 when employing the​ homoskedasticity-only formula. When calculating the t​-statistic, the recommended procedure is​ to:

use the​ heteroskedasticity-robust formula.
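The heteroskedasticity-robust (HC1) variance estimator recommended here can be computed by hand as a "sandwich" formula. A minimal numpy sketch on synthetic heteroskedastic data (the data-generating process is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0 + x)  # error variance grows with x

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# HC1 sandwich: (n/(n-k)) * (X'X)^-1 (sum e_i^2 x_i x_i') (X'X)^-1, with k = 2
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc1 = n / (n - 2) * XtX_inv @ meat @ XtX_inv
se_robust = np.sqrt(np.diag(cov_hc1))
print(se_robust)  # robust standard errors for the intercept and slope
```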

This formula illustrates that

we cannot obtain an unbiased estimate of Y by taking the exponential function of β0 + β1 X_i.

In the multiple regression model, the adjusted R², R̄²:

will never be greater than the regression R².

Suppose you run a regression of test scores against parking lot area per pupil. Is the Upper R squared likely to be high or​ low?

High, because parking lot area is correlated with the student–teacher ratio, with whether the school is in a suburb or a city, and possibly with district income.

Lot size is measured in square feet. Do you think that measuring lot size in thousands of square feet might be more​ appropriate?

Yes, because small differences in square footage between two houses are not likely to have a significant effect on differences in house prices.

Lot size is measured in square feet. Do you think that another scale might be more​ appropriate?

Yes; if the lot size were measured in thousands of square feet, the estimated coefficient would be 2 instead of 0.002, thus making the regression results easier to read and interpret.

You extract approximately 5,000 observations from the Current Population Survey (CPS) and estimate the following regression function: AHE-hat = 3.32 − 0.45 × Age, R² = 0.02, SER = 8.66, with standard errors 1.00 and 0.04 in parentheses, where AHE is average hourly earnings and Age is the individual's age. Given the specification, your 95% confidence interval for the effect of changing age by 5 years is approximately:

[$1.86, $2.64]. (Multiply both the point estimate and the standard error by 5: the effect is 5 × 0.45 = 2.25 with standard error 5 × 0.04 = 0.20, and 2.25 ± 1.96 × 0.20 gives the bounds.)
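The arithmetic behind this interval, in code (using the magnitude of the reported coefficient):

```python
# Point estimate (magnitude) and standard error from the reported regression
b, se = 0.45, 0.04
effect = 5 * b            # a 5-year change scales the coefficient by 5
se_effect = 5 * se        # the standard error scales by the same factor
lo, hi = effect - 1.96 * se_effect, effect + 1.96 * se_effect
print(round(lo, 2), round(hi, 2))  # 1.86 2.64
```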

You have collected data for the 50 U.S. states and estimated the following relationship between the change in the unemployment rate from the previous year (Δur-hat) and the growth rate of the respective state's real GDP (g_y). The results are as follows: Δur-hat = 2.81 − 0.23 × g_y, R² = 0.36, SER = 0.78, with standard errors 0.12 and 0.04 in parentheses. Assuming that the estimator has a normal distribution, the 95% confidence interval for the slope is approximately the interval:

[−0.31, −0.15].

