EX 1 CH 9 QUIZ YOURSELF


d. The standard deviation of the estimation errors, also known as the standard error, Se.

A measure of the accuracy of the prediction obtained from a regression model is given by: a. R2. b. The residual values. c. The amount of scatter in the actual data around the independent variables. d. The standard deviation of the estimation errors, also known as the standard error, Se.
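A minimal sketch of the standard error Se named in the answer: it is the standard deviation of the estimation errors (residuals), computed with n − k − 1 degrees of freedom. The data values here are illustrative assumptions, not from the quiz.

```python
# Hypothetical sketch: Se = standard deviation of the estimation errors.
# The data below is made up for illustration.
from math import sqrt

y_actual = [3.0, 5.0, 7.0, 9.0]      # observed Y values (illustrative)
y_fitted = [2.8, 5.1, 7.3, 8.8]      # predictions from some fitted model

n, k = len(y_actual), 1              # n observations, k independent variables
sse = sum((y - yh) ** 2 for y, yh in zip(y_actual, y_fitted))
se = sqrt(sse / (n - k - 1))         # divide by degrees of freedom n - k - 1
print(round(se, 4))                  # prints 0.3
```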

c. It accounts for the number of independent variables included in a regression model.

The adjusted R2 statistic is recommended in multiple regression because: a. It produces values that are larger than unadjusted R2. b. It produces values that are smaller than unadjusted R2. c. It accounts for the number of independent variables included in a regression model. d. It corrects for additional model variability.

a. ε

An element of unsystematic or random variation in the dependent variable is expressed by ___________ in the equation Y = f(X1, X2, ..., Xk) + ε. a. ε b. Y c. X2 d. X1

c. The regression is significant at 𝛼=0.05 significance level.

Consider a model Yi = 𝛽0 + 𝛽1X1i + 𝜀i. You estimated the fitted line as Ŷi = b0 + b1X1i and want to test whether the regression is significant using the 𝛼=0.05 significance level, and found the appropriate critical value of the t-statistic, t*=1.645. The value of the t-statistic found in the regression output is t(b1)=4.345. You may conclude that: a. Additional estimation of regression parameters is needed. b. The determination of regression significance cannot be made based on provided information. c. The regression is significant at 𝛼=0.05 significance level. d. The regression is not significant at 𝛼=0.05 significance level.
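The decision rule behind this question can be sketched in a few lines: reject the null hypothesis that 𝛽1 = 0 when |t(b1)| exceeds the critical value t*. The two numeric values are taken directly from the quiz.

```python
# Sketch of the significance decision rule from the question:
# reject H0 (beta1 = 0) when |t(b1)| > t*.
t_stat = 4.345        # t(b1) from the regression output (given in the quiz)
t_critical = 1.645    # critical value t* at alpha = 0.05 (given in the quiz)

significant = abs(t_stat) > t_critical
print("significant" if significant else "not significant")  # prints significant
```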

c. Sample statistics.

In a simple linear regression model, the estimated values of 𝛽0 and 𝛽1 are denoted b0 and b1, respectively. The values of b0 and b1 are called: a. Population random variables. b. Y-intercept and slope of the line representing a linear relationship between Y and X in a population. c. Sample statistics. d. Population parameters.

c. Regression.

In addition to Solver, Excel provides another tool for solving regression problems that is easier to use and provides more information about a regression problem. This tool is available in Excel under: a. Chart. b. Data. c. Regression. d. PivotTable.

c. There can be more than one independent variable and all terms in the regression model must be linear.

In multiple linear regression models: a. There can be only one independent variable. b. There can be more than one dependent variable. c. There can be more than one independent variable and all terms in the regression model must be linear. d. There can be quadratic and interaction terms on the RHS of the regression equation.

a. Population parameters.

In the equation: Yi = 𝛽0 + 𝛽1X1 i + 𝜀i, 𝛽0 and 𝛽1 are known as: a. Population parameters. b. Y-intercept and slope of the fitted line. c. Sample statistics. d. Random variables.

c. A random disturbance, or error.

In the model: Y = f(X1, X2, ..., Xk) +ε, the term ɛ is called: a. Residual. b. Actual value. c. A random disturbance, or error. d. Fitted value.

a. The independent variables in a regression model are correlated among themselves.

Multicollinearity is the term used to describe the situation when: a. The independent variables in a regression model are correlated among themselves. b. There are nonlinear terms in the regression model. c. There is a positive correlation between the dependent variable, Y, and at least one of the independent variables, Xi. d. The independent variables in a regression model are normally distributed.

d. All of the above are advantages of constructing a prediction interval.

One advantage of calculating a confidence interval for a prediction, or prediction interval, of a new value of Y for a given value of X is: a. That a prediction interval provides a lower bound on the fitted point value. b. That a prediction interval provides an upper bound on the fitted point value. c. That a prediction interval provides a confidence level (e.g. 95%) associated with the lower and upper bounds on the fitted point value. d. All of the above are advantages of constructing a prediction interval.

b. The method of least squares.

Regression analysis finds the values of the parameter estimates that minimize the sum of squared estimation errors. This approach is referred to as: a. The satisficing method. b. The method of least squares. c. The MAD method. d. The "best fit" method.
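The method of least squares named in the answer has a closed-form solution for simple linear regression (the normal equations); this is a minimal sketch with illustrative data chosen to lie exactly on the line Y = 1 + 2X.

```python
# Hedged sketch of the method of least squares for simple linear
# regression: b0 and b1 minimize the sum of squared estimation errors.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]            # exactly y = 1 + 2x, so the fit is exact

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar
print(b0, b1)                        # intercept 1.0, slope 2.0
```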

c. A modeling technique for analyzing the relationship between a continuous (real-valued) dependent variable Y and one or more independent variables X1, X2, ...Xn.

Regression analysis is: a. Synonymous with forecasting. b. A tool for generating scatter plots. c. A modeling technique for analyzing the relationship between a continuous (real-valued) dependent variable Y and one or more independent variables X1, X2, ...Xn. d. A special case of LP.

d. All of the above are correct.

The "R Square" statistic in regression: a. Provides a goodness-of-fit measure. b. Is referred to as the coefficient of determination. c. Ranges in value from 0 to 1. d. All of the above are correct.

a. The strength of the linear relationship between actual and estimated values for the dependent variable.

The "multiple R" statistic of the regression output represents: a. The strength of the linear relationship between actual and estimated values for the dependent variable. b. The ratio of ESS to TSS. c. The strength of a linear association between independent variables in the regression model. d. The percentage of variability in Y explained by the regression model.

a. Calculate the estimated values for linear regression model.

The TREND( ) function in Excel can be used to: a. Calculate the estimated values for linear regression model. b. Estimate the regression equation parameters only when the inputs to the function remain constant. c. Calculate the estimated values for nonlinear regression models. d. Provide the statistical information for significance tests.

b. A regression model is significant overall.

The analysis of variance (ANOVA) provides an efficient way to statistically test whether: a. There is too much variability in the regression model. b. A regression model is significant overall. c. There is too little variability in the regression. d. A specific independent variable is statistically significant.

d. The estimation error, or residual, for observation i.

The difference (Yi − Ŷi) is referred to as: a. Fitted value. b. The error term. c. Actual value. d. The estimation error, or residual, for observation i.

d. The amount of variation in Y around its mean that the regression function can account for.

The regression sum of squares (RSS) represents: a. The amount of variation in the residuals that is explained by the regression function. b. The amount of variation in the independent variable that is not explained by the regression function. c. The amount of variation in the dependent variable that is not explained by the regression function. d. The amount of variation in Y around its mean that the regression function can account for.

d. An unconstrained nonlinear optimization problem.

When using Solver to estimate the parameters of a simple linear regression model, the objective function is to minimize ESS = Σ(Yi − Ŷi)² with no constraints. This is an example of: a. The least squares method. b. LP. c. Bounded optimization. d. An unconstrained nonlinear optimization problem.
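What Solver does in this question can be imitated numerically: minimize the error sum of squares ESS(b0, b1) with no constraints. This sketch uses simple gradient descent; the data and step size are illustrative assumptions, not the textbook's setup.

```python
# Sketch of unconstrained minimization of ESS(b0, b1), the sum of
# squared estimation errors, via gradient descent. Data is made up.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]            # exactly y = 1 + 2x

b0, b1 = 0.0, 0.0
lr = 0.02                            # step size (illustrative choice)
for _ in range(20000):
    # gradient of ESS with respect to b0 and b1
    g0 = sum(2 * (b0 + b1 * x - y) for x, y in zip(xs, ys))
    g1 = sum(2 * (b0 + b1 * x - y) * x for x, y in zip(xs, ys))
    b0 -= lr * g0
    b1 -= lr * g1
print(round(b0, 3), round(b1, 3))    # converges toward b0 = 1, b1 = 2
```

The same minimum is reached by the closed-form least squares solution; Solver simply searches for it numerically, which is why the answer classifies this as an unconstrained nonlinear optimization problem.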

