Simple Linear Regression
Confidence Interval for β₁
we can use a 95% confidence interval for β₁ to test the hypotheses just used in the t test. H₀ is rejected if the hypothesized value of β₁ is not included in the confidence interval for β₁. the form of the confidence interval for β₁ is b₁ ± t(α/2) s(b₁), where t(α/2) is the t value providing an area of α/2 in the upper tail of a t distribution with n - 2 degrees of freedom, b₁ is the point estimator, and t(α/2) s(b₁) is the margin of error. rejection rule: reject H₀ if 0 is not included in the confidence interval for β₁
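a minimal sketch of this interval in Python (NumPy/SciPy; the five data points are made up purely for illustration):

```python
import numpy as np
from scipy import stats

# illustrative data, made up for this sketch
x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])
n = len(x)

# least squares estimates
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# s = sqrt(MSE) and the estimated standard deviation of b1
sse = np.sum((y - (b0 + b1 * x)) ** 2)
s = np.sqrt(sse / (n - 2))
s_b1 = s / np.sqrt(np.sum((x - x.mean()) ** 2))

# 95% confidence interval: b1 +/- t(alpha/2) * s(b1), df = n - 2
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)
lower, upper = b1 - t_crit * s_b1, b1 + t_crit * s_b1
print(f"b1 = {b1:.3f}, 95% CI = ({lower:.3f}, {upper:.3f})")
# reject H0: beta1 = 0 only if 0 falls outside (lower, upper)
```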
Least Squares Criterion
min Σ (yᵢ - ŷᵢ)² where yᵢ = observed value of the dependent variable for the ith observation and ŷᵢ = estimated value of the dependent variable for the ith observation
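a short sketch (Python, made-up data) showing that the least squares line minimizes this criterion; any other choice of b₀ and b₁ yields a larger sum of squared errors:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])

def sse(b0, b1):
    """Sum of squared differences between observed y and estimated y-hat."""
    return np.sum((y - (b0 + b1 * x)) ** 2)

# least squares estimates (closed form)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

print(sse(b0, b1))      # SSE at the least squares line
print(sse(b0 + 1, b1))  # any other line gives a larger SSE
```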
Simple Linear Regression Model
y = β₀ + β₁ x + ε where β₀ and β₁ are called parameters of the model and ε is a random variable called the error term
Prediction Interval for yₚ
ŷₚ ± t(α/2) s(ind), where s(ind) is the estimated standard deviation of an individual value of y when x = xₚ
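a sketch of the prediction interval (Python, made-up data); the formula used for s(ind) is the standard one for simple linear regression and is assumed here, since the card does not spell it out:

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])
n = len(x)
xp = 4.0  # value of x for the new observation

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
s = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))

# s(ind): estimated std. dev. of an individual value of y at x = xp
s_ind = s * np.sqrt(1 + 1 / n + (xp - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))

y_hat_p = b0 + b1 * xp
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)
print(y_hat_p - t_crit * s_ind, y_hat_p + t_crit * s_ind)
```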
Estimated Simple Linear Regression Equation
ŷ = b₀ + b₁ x. the graph of this equation is called the estimated regression line. b₀ is the y intercept of the line, b₁ is the slope of the line, and ŷ is the estimated value of y for a given x value
Point Estimation
ŷₚ = b₀ + b₁ xₚ
Confidence Interval for E(yₚ)
ŷₚ ± t(α/2) s(ŷₚ)
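a sketch (Python, made-up data) of the point estimate ŷₚ and the confidence interval for E(yₚ); the formula used for s(ŷₚ) is the standard one for simple linear regression and is assumed here:

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])
n = len(x)
xp = 4.0  # given value of x

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
s = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))

# s(y-hat_p): estimated std. dev. of the mean value of y at x = xp
s_yhat = s * np.sqrt(1 / n + (xp - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))

y_hat_p = b0 + b1 * xp  # point estimate
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)
print(y_hat_p - t_crit * s_yhat, y_hat_p + t_crit * s_yhat)
```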
Standardized Residual Plot
the standardized residual plot can provide insight about the assumption that the error term ε has a normal distribution. if this assumption is satisfied, the distribution of the standardized residuals should appear to come from a standard normal probability distribution. if all the standardized residuals are between -1.5 and +1.5, there is no reason to question the assumption that ε has a normal distribution
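a quick way to apply this check (Python sketch with made-up data; the leverage-based formula for standardized residuals is the standard one, assumed rather than taken from the card):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])
n = len(x)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))

# leverage of each observation, then the standardized residuals
h = 1 / n + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
std_resid = resid / (s * np.sqrt(1 - h))

print(std_resid)
print(np.all(np.abs(std_resid) <= 1.5))  # True: no reason to question normality
```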
Testing for Significance
to test for a significant regression relationship, we must conduct a hypothesis test to determine whether the value of β₁ is zero: if β₁ is zero, we would conclude that the two variables are not related; if β₁ is not zero, the two variables are related. two tests are commonly used: the t test and the F test. both the t test and the F test require an estimate of σ², the variance of ε in the regression model
Testing for Significance: F Test Procedure
1. determine the hypotheses: H₀: β₁ = 0, Hₐ: β₁ ≠ 0
2. specify the level of significance: α = .05
3. select the test statistic: F = MSR / MSE
4. state the rejection rule: reject H₀ if p-value ≤ α or F ≥ F(α), where F(α) is based on an F distribution with 1 degree of freedom in the numerator and n - 2 degrees of freedom in the denominator
5. compute the value of the test statistic F = MSR / MSE
6. determine whether to reject H₀
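a sketch of the full F test procedure in Python (SciPy, made-up data):

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])
n = len(x)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

sse = np.sum((y - y_hat) ** 2)
ssr = np.sum((y_hat - y.mean()) ** 2)
msr = ssr / 1        # 1 numerator degree of freedom (one independent variable)
mse = sse / (n - 2)

f_stat = msr / mse
p_value = stats.f.sf(f_stat, 1, n - 2)    # upper-tail area
f_crit = stats.f.ppf(1 - 0.05, 1, n - 2)  # F(alpha)
print(f_stat, p_value, f_crit)
# reject H0 if p_value <= alpha, equivalently f_stat >= f_crit
```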
Testing for Significance: t Test Procedure
1. determine the hypotheses: H₀: β₁ = 0, Hₐ: β₁ ≠ 0
2. specify the level of significance: α = .05
3. select the test statistic: t = b₁ / s(b₁)
4. state the rejection rule: reject H₀ if p-value ≤ α or |t| ≥ t(α/2), where t(α/2) is based on a t distribution with n - 2 degrees of freedom
5. compute the value of the test statistic t = b₁ / s(b₁)
6. determine whether to reject H₀
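a sketch of the full t test procedure in Python (SciPy, made-up data):

```python
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])
n = len(x)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
s = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))
s_b1 = s / np.sqrt(np.sum((x - x.mean()) ** 2))

t_stat = b1 / s_b1
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)  # two-tailed p-value
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)     # t(alpha/2)
print(t_stat, p_value, t_crit)
# reject H0 if p_value <= alpha, equivalently |t_stat| >= t_crit
```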
Regression Model
the equation that describes how y is related to x and an error term
Least Squares Method
the least squares method is a procedure for using sample data to find the estimated regression equation
Simple Linear Regression Equation
E(y) = β₀ + β₁ x. the graph of the regression equation is a straight line. β₀ is the y intercept of the regression line, β₁ is the slope of the regression line, and E(y) is the expected value of y for a given x value
Multiple Regression
real-life decisions often are based on the relationship between two or more variables regression analysis involving two or more independent variables is called multiple regression
An Estimate of σ²
the mean square error (MSE) provides the estimate of σ², and the notation s² is also used: s² = MSE = SSE / (n - 2), where SSE = sum of squares due to error (see the sketch after the next card)
Relationship among SST, SSR, SSE
SST = SSR + SSE where SST = total sum of squares SSR = sum of squares due to regression SSE = sum of squares due to error
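a sketch (Python, made-up data) verifying SST = SSR + SSE and computing s² = MSE from the previous card:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])
n = len(x)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

sst = np.sum((y - y.mean()) ** 2)      # total sum of squares
ssr = np.sum((y_hat - y.mean()) ** 2)  # sum of squares due to regression
sse = np.sum((y - y_hat) ** 2)         # sum of squares due to error

print(sst, ssr + sse)   # identical: SST = SSR + SSE
print(sse / (n - 2))    # s^2 = MSE, the estimate of sigma^2
```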
Simple Linear Regression
regression analysis can be used to develop an equation showing how the variables are related. the variable being predicted is called the dependent variable and is denoted by y. the variables being used to predict the value of the dependent variable are called the independent variables and are denoted by x. variation in one variable is explained by another variable. simple linear regression involves one independent variable and one dependent variable, and the relationship between the two variables is approximated by a straight line. regression analysis involving two or more independent variables is called multiple regression
Some Cautions about the Interpretation of Significance Tests
rejecting H₀: β₁ = 0 and concluding that the relationship between x and y is significant does not enable us to conclude that a cause-and-effect relationship is present between x and y. likewise, just because we are able to reject H₀: β₁ = 0 and demonstrate statistical significance does not enable us to conclude that the relationship between x and y is linear
Correlation Coefficient
a descriptive measure of the strength of the linear association between two variables x and y. values of the correlation coefficient are always between -1 (negative or inverse relationship) and +1 (positive relationship); a value of zero, or close to zero, indicates no linear relationship. rₓᵧ = (sign of b₁) √(coefficient of determination) = (sign of b₁) √(r²), where b₁ = the slope of the estimated regression equation ŷ = b₀ + b₁ x
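a sketch (Python, made-up data) computing rₓᵧ from the sign of b₁ and r², then checking it against NumPy's built-in correlation:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

# coefficient of determination r^2 = SSR / SST
r_squared = np.sum((y_hat - y.mean()) ** 2) / np.sum((y - y.mean()) ** 2)
r = np.sign(b1) * np.sqrt(r_squared)

print(r, np.corrcoef(x, y)[0, 1])  # the two values agree
```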
Outliers and Influential Observations - Detecting Outliers
an outlier is an observation that is unusual in comparison with the other data. Minitab classifies an observation as an outlier if its standardized residual value is < -2 or > +2. this standardized residual rule sometimes fails to identify an unusually large observation as being an outlier; this shortcoming can be circumvented by using studentized deleted residuals. the |ith studentized deleted residual| will be larger than the |ith standardized residual|
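a sketch (Python, made-up data) applying the ±2 standardized residual rule; the conversion to studentized deleted residuals uses tᵢ = rᵢ √((n - 3) / (n - 2 - rᵢ²)), the standard identity for simple linear regression, which is an assumption rather than something given on this card:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])
n = len(x)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))
h = 1 / n + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)  # leverages

std_resid = resid / (s * np.sqrt(1 - h))
print(np.where(np.abs(std_resid) > 2)[0])  # indices flagged as outliers

# studentized deleted residuals, used when the +/-2 rule misses a large outlier
stud_del = std_resid * np.sqrt((n - 3) / (n - 2 - std_resid ** 2))
print(stud_del)
```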
Slope for the Estimated Regression Equation
calculated with the aid of differential calculus: b₁ = Σ(xᵢ - x̅)(yᵢ - ȳ) / Σ(xᵢ - x̅)², where xᵢ = value of independent variable for ith observation, yᵢ = value of dependent variable for ith observation, x̅ = mean value for independent variable, and ȳ = mean value for dependent variable
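a sketch (Python, made-up data) computing b₁ from this formula and the intercept b₀ = ȳ - b₁ x̅ (the standard companion formula, not shown on this card), checked against NumPy's own fit:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])  # made-up data for illustration
y = np.array([2, 4, 5, 4, 5])

# b1 = sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()  # the line passes through (xbar, ybar)

print(b1, b0)               # 0.6, 2.2
print(np.polyfit(x, y, 1))  # NumPy's least squares fit agrees: [0.6, 2.2]
```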
Coefficient of Determination
how well does the estimated regression equation fit the data? the coefficient of determination provides a measure of goodness of fit for the estimated regression equation. SSE, the sum of squares due to error, sums the squared residuals (errors). r² = SSR / SST
Testing for Significance: F Test
hypotheses: H₀: β₁ = 0, Hₐ: β₁ ≠ 0. test statistic: F = MSR / MSE. rejection rule: reject H₀ if p-value ≤ α or F ≥ F(α), where F(α) is based on an F distribution with 1 degree of freedom in the numerator and n - 2 degrees of freedom in the denominator
Testing for Significance: t Test
hypotheses: H₀: β₁ = 0, Hₐ: β₁ ≠ 0. rejection rule: reject H₀ if p-value ≤ α or t ≤ -t(α/2) or t ≥ t(α/2), where t(α/2) is based on a t distribution with n - 2 degrees of freedom
Residual Plot against x
if the assumption that the variance of ε is the same for all values of x is valid, and the assumed regression model is an adequate representation of the relationship between the variables, then the residual plot should give an overall impression of a horizontal band of points
Residual Analysis
if the assumptions about the error term ε appear questionable, the hypothesis tests about the significance of the regression relationship and the interval estimation results may not be valid. the residuals provide the best information about ε; the residual for observation i is yᵢ - ŷᵢ. much of residual analysis is based on an examination of graphical plots
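a sketch (Python with NumPy and matplotlib, made-up data) computing the residuals and plotting them against x, as described in the Residual Plot against x card:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 5, 4, 5])

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)  # residual for observation i: yi - yhat_i

# residual plot against x: look for a horizontal band of points around zero
plt.scatter(x, resid)
plt.axhline(0, linestyle="--")
plt.xlabel("x")
plt.ylabel("residual")
plt.show()
```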
