Regression Exam 3


77. Explain how to find an appropriate "standardized solution" in regression with interactions. Why is the standardized solution provided by SAS or SPSS that accompanies the solution with the centered predictors an inappropriate standardized solution?

1. Compute z-score transformations of (standardize) X and Z.
2. Form the interaction term as the cross-product of the two z-scores (multiply them together).
3. Compute the z-score of Y.
4. Run the regression analysis with standardized Y predicted from standardized X, standardized Z, and the cross-product.
To report the correct standardized solution, use the unstandardized b coefficients from this analysis as the standardized solution. The correct approach is to form z-scores for each predictor and take the product of those z-scores to obtain the interaction predictor; the incorrect approach is to form the raw cross-product first and then standardize the cross-product. The solution given by SPSS/SAS and the correct standardized solution (above) are the same only when X and Z are uncorrelated, which is rare, so it's best to calculate it yourself.
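
A minimal sketch of the procedure, assuming Python with numpy/statsmodels and made-up data (x, z, y are hypothetical names); it contrasts the correct product-of-z-scores approach with the incorrect standardize-the-raw-product approach:

```python
# Sketch of the "standardize first, then form the product" approach (assumed data/names).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
z = 0.5 * x + rng.normal(size=n)          # correlated predictors
y = 1 + 0.4 * x + 0.3 * z + 0.2 * x * z + rng.normal(size=n)

def zscore(v):
    return (v - v.mean()) / v.std(ddof=1)

zx, zz, zy = zscore(x), zscore(z), zscore(y)
xz = zx * zz                               # product of the z-scores (correct)

X = sm.add_constant(np.column_stack([zx, zz, xz]))
correct = sm.OLS(zy, X).fit()
print(correct.params)      # report these unstandardized b's as the standardized solution

# Incorrect: form the raw product first, then standardize it (what the default
# SPSS/SAS "Beta" column effectively does); differs unless corr(x, z) = 0.
wrong_xz = zscore(x * z)
X_wrong = sm.add_constant(np.column_stack([zx, zz, wrong_xz]))
print(sm.OLS(zy, X_wrong).fit().params)
```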

58. What are possible solutions when you detect an outlier?

1. Drop the subject(s) from the analysis. Note in your write-up that subject(s) were dropped and justify the reasons they were dropped. (More in handout.)
2. Recontact the subject (or find another data source) and reassess the problematic variables if there is little reason to believe that the value of the data may have changed. (More in handout.)
3. Drop the questionable measures for the subject; retain the other variables. Use a modern missing data approach (e.g., multiple imputation; full information maximum likelihood) to correct the data set for the values of the questionable variables.
4. Use a method of estimating the regression equation that gives less weight to extreme scores than OLS regression. These methods use loss functions other than OLS; for example, least absolute deviations minimizes Σ|Y − Ŷ|, the sum of the absolute differences (see the sketch below).
5. a. Respecify the regression equation by adding another predictor (e.g., the BAC level in the depression and reaction time example). b. Respecify the regression equation by adding a curvilinear effect or interaction that accounts for the outliers.
6. Transform the data. Points that are outliers in the original metric may not be outliers in another metric; e.g., log transformation, power transformation.
7. Analyze the data (a) with the outliers in and (b) with them deleted from the analyses. Report both sets of results to bracket the effects. (More in handout.)
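
For option 4, a hedged sketch of one such alternative loss function, least absolute deviations (median) regression, using statsmodels' QuantReg on invented data:

```python
# Least absolute deviations (median) regression: minimizes sum |Y - Yhat|,
# so extreme cases get less pull than under OLS. Illustrative data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2 + 1.5 * x + rng.normal(size=100)
y[:3] += 15                                # a few gross outliers

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
lad = sm.QuantReg(y, X).fit(q=0.5)         # q=0.5 gives the LAD / median fit
print("OLS slope:", ols.params[1], "LAD slope:", lad.params[1])
```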

20. What are the three rules for choosing contrast codes to maximize interpretability?

1. The sum of the weights for each code variable must equal zero (required).
2. The sum of the products of the weights for each pair of code variables must equal zero (required). If these two rules are satisfied and ng is the same in each group, then the contrast codes are orthogonal, a desirable outcome.
3. If there are only positive weights with the same positive value and negative weights with the same negative value, the difference between the value of the positive weights and the negative weights should equal one (this is optional, but it maximizes interpretability; it can be achieved by rescaling).
The specific contrasts chosen should reflect the researcher's hypotheses of interest. (A check of these rules is sketched below.)
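
A small check of the three rules for an assumed 3-group contrast set (the specific codes are illustrative, not from the handout):

```python
# Checking the contrast-code rules for a hypothetical 3-group example.
import numpy as np

# C1: group 1 vs. groups 2 and 3; C2: group 2 vs. group 3
c1 = np.array([ 2/3, -1/3, -1/3])
c2 = np.array([ 0.0,  1/2, -1/2])

print(np.isclose(c1.sum(), 0), np.isclose(c2.sum(), 0))   # Rule 1: each code sums to zero
print(np.isclose((c1 * c2).sum(), 0))                      # Rule 2: cross-products sum to zero
# Rule 3: within each code, the positive and negative weights differ by 1 unit
print(c1.max() - c1.min(), c2.max() - c2.min())
```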

68. A traditional strategy in data analysis with continuous variables was to dichotomize the variables and analyze them in a 2 x 2 ANOVA. What are two problems with this strategy?

Two problems: 1) If X and Z are correlated, spurious MAIN effects may occur; the median splits also lose information (they introduce unreliability). 2) The statistical power of the test of the interaction is sharply reduced.

30. What is a within class regression line in the ANCOVA? What assumption is made about within class regression lines in analysis of covariance?

A within class regression line is a plot of the regression of the criterion on the covariate in each of the treatment conditions. The key assumption is that there is no interaction between the covariate and the treatment variable (i.e. lines are parallel and the regression coefficients are the same). If the assumption is met, the treatment effect is constant over all values of X.

9. Suppose you run a one factor analysis of variance on a data set like that in the DATA LAYOUT above, Q .3. What will be the relationship of the resulting ANOVA summary table to a regression analysis in which a set of dummy codes are used to code the four groups, and the criterion is the same as the dependent variable in the regression analysis? Explain the relationship of hypothesis for the overall F test in ANOVA and the F test from this corresponding regression analysis.

With any coding scheme (dummy, unweighted/weighted effects, contrast, or even nonsense codes), the multiple R-squared, the overall F-test in ANOReg, and the p-value are identical across coding schemes, and the one-way ANOVA produces identical results. Therefore SSregression = SSgroups and dfregression = dfANOVA. In ANOReg the null hypothesis is that the multiple R-squared equals zero in the population; in ANOVA the null hypothesis is that the group means are all equal in the population. (See handout; a small demonstration follows.)
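
A quick demonstration on synthetic 4-group data (assuming statsmodels and scipy are available) that the dummy-coded regression and the one-way ANOVA yield the same F:

```python
# One-way ANOVA and regression on dummy codes give the same F (synthetic 4-group data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
groups = np.repeat(["g1", "g2", "g3", "g4"], 25)
y = rng.normal(size=100) + np.repeat([0.0, 0.5, 1.0, 1.5], 25)
df = pd.DataFrame({"y": y, "group": groups})

# Regression with G - 1 dummy codes (g1 as the reference group)
fit = smf.ols("y ~ C(group, Treatment(reference='g1'))", data=df).fit()
print(fit.fvalue, fit.f_pvalue)                         # overall F from ANOReg

# One-way ANOVA on the same data: identical F and p
print(f_oneway(*[df.loc[df.group == g, "y"] for g in ["g1", "g2", "g3", "g4"]]))
```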

1. Suppose you have a categorical variable with four groups, e.g. four regions of the country. You wish to use it as a predictor in a regression analysis. What is the general strategy for employing a categorical variable as a predictor in a regression analysis?

Adopt a coding scheme! Which one you choose depends on the hypothesis of interest. • Dummy codes • Effects codes (weighted or unweighted) • Contrast codes

60D. Explain this statement: "In the quadratic regression equation Ŷ = b1(X − Xbar) + b2(X − Xbar)² + b0, if the b2 coefficient is nonzero, then the regression of Y on X depends on the value of X". Use a figure to illustrate your answer.

As per the curvilinear relationships handout, "when you correlate a variable with a function of that same variable, part of the correlation depends on the scaling of the variable." In plain terms, the slope of Y on X is not constant: the regression of Y on X at a given value of X is b1 + 2b2(X − Xbar), so whenever b2 is nonzero the slope depends on where you are on X. (The plot illustrates that the relationship between (X − Xbar) and (X − Xbar)² is not constant, because different values of (X − Xbar) yield different values of (X − Xbar)².) The qualification that b2 be nonzero matters because if b2 = 0 there is no (X − Xbar)² term and none of this applies. SEE handout for plot.
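
A tiny numeric illustration of the same point, with arbitrary b1 and b2 values, using the simple-slope expression b1 + 2·b2·(X − Xbar):

```python
# For Yhat = b0 + b1*(X - Xbar) + b2*(X - Xbar)^2, the slope of Y on X at a given X
# is b1 + 2*b2*(X - Xbar), so with b2 != 0 it changes with X. Illustrative numbers only.
b1, b2 = 1.0, 0.5
for dev in (-2, 0, 2):                     # (X - Xbar) values
    print(f"X - Xbar = {dev:+d}: simple slope = {b1 + 2 * b2 * dev}")
```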

70. Assume that we are working with the equation above and that both X and Z are centered, and that the XZ term is the product of centered X times centered Z. What are two interpretations of the b1 coefficient, the b2 coefficient.

b1 is the change in Y for a 1-unit change in centered X at the mean of Z (which is 0 when Z is centered); b2 is the change in Y for a 1-unit change in centered Z at the mean of X (which is 0 when X is centered). The second interpretation: b1 is the average effect of X on Y across the range of Z, and b2 is the average effect of Z on Y across the range of X.

40. Suppose I have the regression equation Ŷ = b0 + b1X + b2C + b3XC. In this regression equation, C is gender, X is height, Y is weight. Based on this regression model, how do I test the difference in weight for men and women who are 68 inches tall?

Center height around 68 inches; that is, set the 0 point of X to 68 inches and look at the coefficient that shows the difference between the intercepts of the two groups (with this regression equation, b2). When we center around 68 inches, the difference between intercepts gives the difference in weight between men and women who are 68 inches tall, rather than the difference at a height of 0.
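
A sketch of the recentering approach on invented height/weight data (variable names and effect sizes are hypothetical):

```python
# Recentre height at 68 inches; b2 (the gender coefficient) then estimates the
# male-female weight difference at X = 68. Data and effect sizes are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
gender = rng.integers(0, 2, n)                    # C: 0 = female, 1 = male (dummy code)
height = rng.normal(66, 3, n) + 4 * gender
weight = (100 + 3 * (height - 60) + 10 * gender
          + 0.5 * gender * (height - 68) + rng.normal(0, 8, n))
df = pd.DataFrame({"weight": weight, "gender": gender, "h68": height - 68})

fit = smf.ols("weight ~ h68 * gender", data=df).fit()
print(fit.params["gender"], fit.bse["gender"])    # weight difference at 68 inches, with its SE
```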

55. What does DFBETAS measure?

DFBETAS is the specific measure of influence. It is the standardized change in a specified regression coefficient (e.g., b1) calculated with case i in the data set versus with case i deleted from the data set. Standardization is done with case i dropped from the data set. It tells you how much the case affects an individual regression coefficient. If the magnitude of DFBETAS exceeds 1, the case is influential on the specified regression coefficient.

54. What does DFFITS measure?

DFFITS is the standardized change in the predicted score for case i, computed with case i in the data set minus with case i deleted from the data set. Standardization is done with case i dropped from the data set. It tells you how much the case affects the overall model. If the value of DFFITS exceeds 1 in magnitude, the case is influential.
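
A brief sketch of how both DFFITS and DFBETAS can be pulled from a fitted model, assuming statsmodels and toy data:

```python
# Pulling DFFITS and DFBETAS from statsmodels for a fitted OLS model (toy data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
X = sm.add_constant(rng.normal(size=(50, 2)))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=50)

infl = sm.OLS(y, X).fit().get_influence()
dffits, _ = infl.dffits                    # standardized change in case i's predicted score
dfbetas = infl.dfbetas                     # standardized change in each coefficient, per case
print(np.where(np.abs(dffits) > 1)[0])     # cases influential on the overall fit
print(np.where(np.abs(dfbetas) > 1))       # (case, coefficient) pairs that are influential
```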

62. What does the b2 coefficient tell you about the relationship of X to Y when X has been centered?

First, the quadratic relationship between X and Y is the same regardless of whether X has been centered. That said, the b2 coefficient represents the rate of acceleration of the relationship between X and Y, and accordingly tells us whether the curve represented by the polynomial is concave upward or downward. Recall: if b2 > 0, the curve is concave upward; if b2 < 0, it is concave downward.

44. What are deterministic outliers (a.k.a., contaminated observations)? What are some example sources? What are probabilistic outliers (a.k.a., rare cases)? What is an example source?

Deterministic: problems in the data, e.g., (a) an error of execution in the experimental procedure, (b) inaccurate measurement of the dependent measure, (c) errors in recording or keying in the data, (d) errors in calculation of the dependent measure, (e) inclusion of inappropriate subjects in the sample. Probabilistic: an unusual data point, but a real one; for example, in a height study, sampling a person from the extreme low end of the height distribution. It's not an error, but it's an outlier.

53. What is the solution to the problem described in Q. 52? How is MSresidual(i) computed? I will refer to residuals divided by their deleted standard errors as "externally studentized".

Externally studentized residual: (Yi − Ŷi(i)) / SE(i). MSresidual(i) is computed by dropping the subject (case i) from the data set and then computing the standard error from the remaining residuals.

38. If you use contrast codes for two groups, and the contrast code interacts with the continuous variable in the equation, be able to indicate what each of the regression coefficients in the equation measures.

For the equation ŷ = b1X + b2C + b3XC + b0, e.g., male coded .5, female coded −.5 (the codes should sum to 0 but have a 1-unit difference between them): b0 = intercept at the center of the data; b1 = regression of Y on X when C = 0; b2 = difference between the intercepts of the two groups; b3 = difference between the slopes of the two groups.

2. In any coding scheme for G groups, how many codes are required to characterize the G groups?

G-1 codes

48. Draw a sketch that shows a point with high leverage but low distance, with low leverage but high distance, with high influence.

GUIDE

43. What are the two types of measures of influence? What does each measure?

Global (measured by DFFITS): the influence of a single case on the overall regression equation; standardized (ŷi − ŷi(i)) / s_ŷi(i). Specific (measured by DFBETAS): the influence of a single case on a single regression coefficient (of particular interest); standardized (bj − bj(i)) / se_bj(i).

26. Be able to take a 2 x 3 ANOVA (two levels of factor A and three levels of factor B) and show the design matrix. Test the hypotheses that the mean of B1 does not differ from the mean of (B2 and B3) and that the means of B2 and B3 do not differ.

HANDOUT

28. Depict the relationship among the variables in ANCOVA using Venn diagrams and path models in a true experiment versus in a quasi-experiment. Why is there no correlation expected between the covariate and the treatment in the true experiment? With what two things can the covariate be related in a quasi-experiment with nonrandom assignment?

HANDOUT

60C. What do we mean by "higher" order terms in regression analysis? Give an example of a regression equation containing a higher order curvilinear term, a higher order interaction term.

Higher order terms are terms formed as nonlinear functions of the original predictors, i.e., products of predictors with themselves or with each other (e.g., X², XZ, X³). For a higher order curvilinear term: ŷ = b0 + b1X + b2X². Here b2X² is the highest order term, the result of multiplying the predictor X by itself. For a higher order interaction term: ŷ = b0 + b1X + b2Z + b3XZ. Here b3XZ is the highest order term, the result of multiplying the predictor X by the predictor Z.

46. Explain distance. Is a specific regression model required to measure distance? Does high distance necessarily mean that a point is affecting the regression outcome?

Distance is how far the observed Y is from the predicted Y, i.e., the extent to which a point has a high or low residual. Distance measures depend on the specific regression model being estimated. A high-distance point has the potential to move the regression plane, but does not necessarily do so; if the point is at the mean of the predictors, it can be far from the line and still not pull it. The best measure is the externally studentized residual: (Yi − Ŷi(i)) / SE(i).

47. Explain influence. Is a specific regression model required to measure influence? How does influence relate to leverage and distance. Does high influence necessarily mean that the point is affecting the regression outcome?

How much one data point or case distorts the regression surface, how much it moves the regression plane. To what extent does the single data point change the outcome of the regression analysis? Influence depends on the specific regression model being estimated so a specific regression model is required. Different regressions produce different values of influence. Influence is a function of leverage and distance. Global influence assesses the influence of the single case on the overall regression equation. Specific influence assesses the influence of the single case on a single regression coefficient (e.g. one of particular theoretical interest).

34. If the within class regression slopes differ as a function of the categorical variable in ANCOVA, what does this tell you about the slopes of the within class regression lines in the groups of the ANCOVA? How is this tested? What is the difficulty in coming up with an estimate of the treatment effect in ANCOVA if the within class regression slopes are not parallel?

If the slopes of the within-class regression lines differ as a function of the categorical variable, it means there is an interaction between the treatment and the covariate. You can test this in a regression analysis by testing for a significant treatment × covariate interaction, or by using the Johnson-Neyman procedure. The difficulty that arises is that if there IS an interaction, there are different treatment effects at different levels of the covariate, so no single adjusted treatment effect describes the data.

52. What is the problem with simply dividing a residual by its standard error to compute a standardized residual? I will refer to residuals simply divided by their standard errors as "standardized", following Belsley, Kuh, and Welsch.

If there are outliers in the data, they draw the regression line toward them, and the mean square residual has the value of the outlier's residual in its computation, so the residual and its standard error are not independent. The standard error is itself inflated by the outlier, so the "standardized" residual understates how deviant the case really is.

73. Explain how you would use the rearranged equation (Q. 71) to generate three simple regression equations, one at ZH (one standard deviation above the mean of Z), one at ZM (at the mean of Z), and one at ZL (one standard deviation below the mean of Z). Be able to do this if given a numerical example. Be able to take SAS or SPSS printout and reproduce the three simple regression equations. Be able to plot simple regression equations.

In terms of the equations: Ŷ = (b1 + b3Z)X + (b0 + b2Z), where b1 + b3Z is the simple slope. ZH: centered Z + 1 standard deviation of Z = 0 + SD of Z. ZM: 0 (the mean, when Z is centered). ZL: centered Z − 1 standard deviation of Z = 0 − SD of Z. Plug the ZH, ZM, and ZL values into the rearranged equation to get the three simple regression equations, each of which can then be plotted as a line of Ŷ on X.
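
A sketch, on invented data with centered predictors, of extracting the three simple regression equations from the fitted coefficients:

```python
# Building the three simple regression equations from a fitted interaction model
# (centered predictors; data and names are illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 2 + 0.4 * x + 0.3 * z + 0.25 * x * z + rng.normal(size=n)
df = pd.DataFrame({"y": y, "xc": x - x.mean(), "zc": z - z.mean()})

b = smf.ols("y ~ xc * zc", data=df).fit().params
sd_z = df["zc"].std(ddof=1)
for label, zval in [("ZH", sd_z), ("ZM", 0.0), ("ZL", -sd_z)]:
    slope = b["xc"] + b["xc:zc"] * zval            # simple slope: b1 + b3*Z
    intercept = b["Intercept"] + b["zc"] * zval    # simple intercept: b0 + b2*Z
    print(f"{label}: Yhat = {slope:.3f}*Xc + {intercept:.3f}")
```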

66. What do interactions between predictors signify in multiple regression analysis? Give an algebraic explanation of how the regression of Y on X is affected if there is an XZ interaction. What does the geometry of the surface look like? Ŷ = b1X + b2Z + b3XZ + b0

An interaction between predictors means there is an interplay among the predictors that produces an effect on Y different from the sum of the individual predictors' effects on Y; the regression of Y on X changes as Z changes. For Ŷ = b1X + b2Z + b3XZ + b0: the X slope and intercept depend on Z, Ŷ = (b1 + b3Z)X + (b0 + b2Z); and the Z slope and intercept depend on X, Ŷ = (b2 + b3X)Z + (b0 + b1X). Rather than the flat surface we would have with two predictors and no interaction, the surface changes differentially in X depending on the value of Z, so we have a plane that is not flat (a warped plane).

37. What does the Johnson-Neyman procedure test?

It tests in which regions of X there is a statistically significant difference between the conditions; i.e., it identifies the regions of X where the conditions differ significantly.

39. In the regression equation Ŷ = b0 + b1X + b2C + b3XC, if you change the coding of the categorical variable (C) with two levels from dummy to contrast coding, will the regression coefficient for the interaction change or remain the same?

It will remain the same.

45. Explain leverage. Is a specific regression model required to measure leverage? Does high leverage necessarily mean that a point is affecting the regression outcome?

Leverage is how far the point is from the center of the cases (the centroid) in the predictor space. A specific regression model is not required, because leverage depends only on the predictors, not on an overall model. High leverage does not necessarily mean that a point is affecting the regression outcome: if the point is right on the regression line, even with high leverage it won't pull the line very much. Values of about .2 to .5 are moderate leverage; above .5 is high leverage.

50. What measures are on the main diagonal of the hat matrix?

Measures of leverage

61. Will the b2 coefficient in the equation in question 60 change if the variables are centered or not? Will the b1 coefficient change? Will the b0 coefficient change?

No, b2 will remain the same. This is because b2 is the coefficient for the highest order term in this equation, and the highest order coefficient remains the same regardless of whether or not the raw scores are centered. Yes, the b1 coefficient will change, because b1 depends on the scaling (centering) of X. Yes, b0 will also change, because the intercept is evaluated at X = 0; when the X predictor is changed by centering, b0 must change.

6. Are dummy codes centered?

No!

7. Are the pairs of dummy codes in a dummy variable coding scheme orthogonal?

No. Pairs of dummy codes in a dummy variable coding scheme are NOT orthogonal. In other words, the dummy codes ARE CORRELATED with one another.

8. What sort of data configuration lends itself most naturally to coding with dummy codes? What are the criteria useful in choosing a base group?

One group is a CONTROL group and other groups are experimental groups to be compared with the control group. 3 criteria useful in choosing a base group. The reference group should: 1) be useful (e.g. control group, standard treatment); 2) NOT be a "wastebasket category" (e.g. "other"); and 3) NOT have a very small sample size relative to the other groups (the group should be stable).

57. I will give you computer output containing exactly the same regression diagnostics as on the computer printout from class example and will ask you to point out cases with high leverage, distance, influence and to indicate how regression coefficients are being influenced by individual points.

Outliers handout pgs. 2-4; for example, look at Computer Example 5. Leverage can be as high as 1.0; Huber (1981) suggested that .2 to .5 may be considered moderate leverage and above .5 high leverage. In SPSS output look for the column labeled "Lever"; in SAS look for "Hat Diag H." For high distance, cutoffs of 2.5 or 3.0 are often used with sample sizes of 100 cases; larger cutoffs should be used with larger data sets. In SPSS output look for the label "SDRESID" (the studentized deleted, or PRESS, residual); in SAS look for "Rstudent." High influence can be identified when the value of DFFITS exceeds 1 in magnitude; in SPSS look for the label "SDFIT" and in SAS look for "Dffits." DFBETAS also tells you the standardized change in a specified regression coefficient bj.

67. Assume X and Z are centered. If the b3 coefficient is significant in the above equation, what does that tell you? Draw a sketch of an interaction in which the b3 coefficient is positive, is negative, and explain in words the condition under which the b3 coefficient will be positive or negative.

Positive b3 (SYNERGISTIC/augmenting): the interaction term will be positive when a high value of Y is associated with BOTH a high value of X and a high value of Z, i.e., when the regression of Y on X is positive by itself and, in addition, a high value of Z makes the regression of Y on X even higher. When the two continuous predictors X and Z work in the same direction, synergistically, the interaction term is POSITIVE. Negative b3 (BUFFERING): the interaction term will be negative when a high value of Y is associated with a high value of X but a low value of Z, i.e., when a high value of Z makes the regression of Y on X smaller. The interaction will also be negative if a high value of Y is associated with a low value of X but a high value of Z raises the slope of the regression of Y on X. The interaction will be negative when the two predictors work in OPPOSITE directions.

56. What is the problem in regression diagnostics with clusters of errant points?

Regression diagnostics do not identify clumps of outliers; outlier statistics for clumps of outliers are not yet well developed. This is because regression diagnostics are case statistics: there is a value of each diagnostic statistic for each subject in the data set. The problem is that clustered errant data points mask each other in diagnostic analyses.

25. A design matrix is a set of codes used in ANOVA. Consider a 2 x 2 factorial design. There are two levels of A (low, high) and two levels of B (low, high). Write the design matrix for the 2 x 2 design in ANOVA. Otherwise stated, what do C1, C2, and C3 (code variables) look like?

SEE HANDOUT

10. For unweighted effects coding, be able to take the general regression equation and the codes and explain what each of the coefficients in the equation is measuring.

SEE handout

19. Be able to set up contrast codes if given a specific set of a priori hypotheses.

SEE handout

36. If you use dummy coding for two groups, and the dummy code interacts with the continuous variable in the equation, be able to indicate what each of the regression coefficients in the equation measures. What two tests would you perform to see whether each within group regression line differs from 0?

Say the two groups are a treatment and control group, coded C1: Treatment = 1, Control = 0. For the regression equation ŷ = b1C1 + b2X + b3XC1 + b0: b0 = predicted value of Y in the control group when X = 0; b1 = difference between the treatment and reference group when X = 0; b2 = slope of the reference (control) group; b3 = the interaction, or the difference in slope between the treatment and control groups. Two tests: b2 from the above coding gives you the slope of the reference group, which tells you whether that within-group regression line differs from 0. If you reverse the coding (Treatment = 0, Control = 1), then b2 gives you the slope of the treatment group, which tells you whether that regression line differs from 0 (see the sketch below).
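
A sketch of the coding-reversal trick on toy data (names are made up), assuming statsmodels' formula interface:

```python
# Fitting yhat = b1*C1 + b2*X + b3*X*C1 + b0 with treatment = 1, control = 0, then
# reversing the coding so the X coefficient becomes the treatment group's slope (toy data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 120
c1 = rng.integers(0, 2, n)                          # 1 = treatment, 0 = control
x = rng.normal(size=n)
y = 1 + 0.8 * c1 + 0.5 * x + 0.4 * x * c1 + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x": x, "c1": c1, "c1_rev": 1 - c1})

fit = smf.ols("y ~ x * c1", data=df).fit()          # "x" coefficient = control-group slope
fit_rev = smf.ols("y ~ x * c1_rev", data=df).fit()  # "x" coefficient = treatment-group slope
print(fit.params["x"], fit.pvalues["x"])
print(fit_rev.params["x"], fit_rev.pvalues["x"])
```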

3. DATA LAYOUT. I will give you the layout of a categorical variable with several levels, and a different number of cases per level. For example, I might give you four groups, and n1=2, n2=4, n3=5, and n4=3. Be able to create the dummy, unweighted effects, weighted effects, and contrast codes for the groups. (I will specify the base [reference] group or hypotheses).-

See study guide

72. What do we mean by simple (conditional) regression equations, simple (conditional) slopes?

Simple regression equations are regression equations for the regression of Y on centered X at particular values of Z: Ŷ = (b1 + b3Zc)Xc + (b2Zc + b0). Here b1 + b3Zc is the simple slope; it is conditional because the slope of Y on X depends on the value of Z. More generally, simple slopes and simple regression equations reflect the regression of Y on X at a particular value of X (in a curvilinear regression) or at a particular value of another predictor with which X interacts.

21. What is the relationship between the ANOVA summary table for dummy codes and nonsense codes?

Since each coding system as a set carries all the group information and represents the same nominal variable, then given the same Y data they will yield the same multiple R-squared and hence the same F statistic. For overall prediction, the coding scheme doesn't matter.

79. What is meant by the graphical term "slicing?"

Slicing refers to cutting the data into parts and looking at the relationship of one variable with another at different levels of a third. For example, with the equation ŷ = b1X + b2Z + b3XZ + b0, we may want to look at the relationship of Y on X at low, middle, and high levels of Z.

14. If you have unequal sample sizes in the groups in a data set, describe the two grand means that can be computed. Then explain how the unweighted effects codes versus the weighted effects codes measure discrepancies between the group means and these two different grand means. When are unweighted and weighted effects codes the same?

If you're using unweighted effects codes, the grand mean is the unweighted mean of the group means: the sum of the group Ybars divided by the number of groups (e.g., (Ybar_group therapy + Ybar_individual therapy + Ybar_control)/3), so each group counts equally regardless of its size. If you're using weighted effects codes, the grand mean is the weighted mean of the group means, weighting each group mean by the number of cases in that group (equivalently, the mean of all observations). Unweighted effects codes measure the discrepancy of each group mean from the unweighted grand mean; weighted effects codes measure the discrepancy of each group mean from the weighted grand mean. The two coding schemes are the same when the group sizes are equal.

41. What is on the X-axis and the Y-axis in an added variable plot? What does the plot potentially tell you about beyond the corresponding partial regression coefficient?

The AVP is a plot of residuals on residuals (both centered): on the X-axis are the residuals from regressing the predictor on all the other predictors, and on the Y-axis are the residuals from regressing the criterion on all the other predictors. It therefore shows the pure relationship between the part of the predictor that is independent of all the other predictors and the part of the criterion that is independent of all the other predictors. Beyond the corresponding partial regression coefficient, it tells us about the shape and linearity of the relationship.

65. Why is the added variable plot (AVP) useful in detecting curvilinear relationships in regression equations with several predictors?

The added variable plot (AVP) is useful because it lets us see what is actually going on in the data. Since the AVP plots each predictor's unique relationship with the criterion, we can use these graphs to check for curvilinearity in the shape of the data.

60A. What is the general form of a polynomial equation with one predictor?

The general form: ŷ = b0 + b1X + b2X² + b3X³ + ... + bpX^p. This can look like a quadratic equation, ŷ = b0 + b1X + b2X², or a cubic equation, ŷ = b0 + b1X + b2X² + b3X³.

49. What is the general strategy for studying the effect of a point on the regression outcome (delete the point and rerun analysis).

The general strategy is to delete the point and rerun analysis to see if there is a difference when the point is there compared to when it is not.

76. Suppose you analyze a data set with the equation Y = b1X + b2Z + b3 XZ + b0, and you use centered predictors. You then repeat the regression analysis with uncentered predictors. Which coefficients will change across analyses and why?

The highest order term b3 (the regression coefficient for the XZ interaction) is constant across the raw and centered analyses (identical in each), whereas the lower order terms b0, b1, and b2 change when moving from raw to centered. The lower order terms change because they are evaluated at particular values of the predictors, specifically where the other predictors equal 0. (For example, b1 is the slope of Y on X when Z = 0. When we center the raw scores, b1 becomes the slope of Y on X when centered Z = 0; since centered Z equals 0 when Z equals its mean, b1 can then be interpreted as the average effect of X on Y.) In the centered solution, the slopes are interpreted as the average effect of X and the average effect of Z.

22. Suppose you have two treatment groups and a control group that you have dummy coded as follows: T1: C1 = 1, C2 = 0; T2: C1 = 0, C2 = 1; C: C1 = 0, C2 = 0. How would you compare the means of T1 and T2 from the regression output?

The mean of T1 is carried by b1: b1 is the difference between the mean of T1 and the mean of the control group C (T1bar − Cbar), so T1bar = b1 + Cbar. The mean of T2 is carried by b2: b2 is the difference between the mean of T2 and the mean of the control group C (T2bar − Cbar), so T2bar = b2 + Cbar. b0 is the mean of the control group C. The T1 versus T2 comparison is therefore the difference b1 − b2, which can be tested directly (see the sketch below).
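
A sketch of that comparison on synthetic data (group labels and effect sizes are invented): the contrast b1 − b2 is tested directly from the fitted model:

```python
# With T1, T2, C dummy-coded against the control, the T1 vs. T2 comparison is b1 - b2;
# statsmodels can test that contrast directly.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
group = np.repeat(["T1", "T2", "C"], 30)
y = rng.normal(size=90) + np.repeat([1.0, 0.6, 0.0], 30)
df = pd.DataFrame({"y": y,
                   "C1": (group == "T1").astype(int),
                   "C2": (group == "T2").astype(int)})

fit = smf.ols("y ~ C1 + C2", data=df).fit()
print(fit.t_test("C1 - C2 = 0"))           # tests whether the T1 and T2 means differ
```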

51. What is the centroid? What are three measures of distance relative to the centroid?

The centroid is the mean of all the X variables, the center of the data on all dimensions. Three measures of distance relative to the centroid: 1) Euclidean distance (the straight-line, "way Superman goes" distance); 2) city-block distance (the way you would walk); 3) probability-based distance (hat matrix diagonals, Mahalanobis distance).

74. What are two strategies for picking values of Z at which to examine the simple (conditional) regression of Y on X in a regression equation with an interaction?

The preferred strategy is to pick meaningful values; for example, if we're working with the BDI or IQ, pick values that make sense (like 100 for IQ, because it's the average, or established cutoff points for the BDI). Using meaningful values makes it easier to compare across samples. The second strategy is to choose values at the mean, 1 standard deviation above the mean, and 1 standard deviation below the mean.

29. What functions does the covariate serve in the analysis of quasi-experiments? Why is ANCOVA conducted in randomized experiments?

In quasi-experiments, the purpose of the covariate is to partial out from the treatment variable variation in the outcome that is attributable to differential subject characteristics across treatments (differential selection), and to partial out of the criterion any differences between the groups that existed before the experiment (existing differences). Even though in a randomized experiment there should not be differential selection or pre-existing differences, sometimes there are. In a randomized experiment the covariate partials out from the criterion a source of variation that is irrelevant to the treatment; to the extent that the covariate removes this irrelevant variance from the criterion, the statistical power of the ANCOVA will be greater than that of the corresponding ANOVA.

69. In the regression equation Ŷ = b1X + b2Z + b3XZ + b0 we say that the regressions of Y on X and Y on Z are conditional relationships. What do we mean by this?

The regressions of Y on X and Y on Z are conditional because the slope of Y on X depends on (is conditional on) the value of Z, and the slope of Y on Z depends on the value of X. (This is like the Z-high, Z-moderate, Z-low work from homework.) The equation above can be rearranged as Ŷ = (b1 + b3Z)X + (b0 + b2Z) or Ŷ = (b2 + b3X)Z + (b0 + b1X). In the simple equation for the regression of Y on X (the first), the simple slope is b1 + b3Z; since the slope contains the predictor Z, the slope of Y on X is CONDITIONAL on the value of Z. The second equation shows that the relationship between Y and Z is conditional on the value of X.

59. What is the relationship between leverage (SAS hat matrix diagonal) and centered leverage (SPSS LEVER)?

There are slight discrepancies between the SPSS and SAS outputs. The sum of the hii values equals the number of terms (predictors) in the regression equation: p for SPSS and p + 1 for SAS, because SAS includes the intercept in leverage while SPSS's centered leverage (LEVER) does not. SPSS now refers to its index as "centered leverage"; the minimum value of centered leverage is 0 and the maximum is 1 − 1/n.
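
A small numeric check of the two conventions, computing the hat diagonals by hand on made-up predictors:

```python
# Leverage (hat) values by hand: with the intercept included the h_ii sum to p + 1
# (the SAS convention); centred leverage drops the intercept's 1/n and sums to p (SPSS LEVER).
import numpy as np

rng = np.random.default_rng(8)
n, p = 40, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # intercept + p predictors

H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)                             # SAS "Hat Diag H"
print(h.sum())                             # = p + 1
print((h - 1 / n).sum())                   # centred leverage (SPSS LEVER) sums to p
```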

13. Are unweighted effects codes orthogonal (uncorrelated)?

They are not orthogonal

80. How do we build a linear by quadratic interaction into a regression analysis? How many degrees of freedom does such an interaction have in regression analysis.

To build a linear by quadratic interaction, we use the equation Ŷ = b1X + b2X² + b3Z + b4XZ + b5X²Z + b0. The whole interaction is represented by both the XZ and X²Z terms, so it has 2 degrees of freedom.

32. What are adjusted means? How are the adjusted (conditional) means estimated using multiple regression.

Adjusted (conditional) means are the predicted group means evaluated at the mean of the covariate, i.e., the group means adjusted for group differences on the covariate. To estimate them with multiple regression, use a regression equation in which all quantitative variables (covariates) have been centered by subtracting their respective means, and then compute the predicted value for each group from the group-code coefficients and the intercept.

60B. How can a quadratic regression equation be used to test a prediction of a curvilinear relationship of a predictor to the criterion.

Using the polynomial equations (linear and quadratic), test the gain in prediction added by the second order predictor in the quadratic equation. If it adds significant predictability over the linear equation, as determined by an F-gain test, then we have support for the hypothesis of a curvilinear relationship between X and Y. ŷ = b0 + b1X (linear); ŷ = b0 + b1X + b2X² (quadratic). (A sketch of the test follows.)
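
A sketch of the F-gain test on simulated data (assuming statsmodels; the variable names are made up):

```python
# Hierarchical (gain-in-prediction) test of the quadratic term, using centred X.
# compare_f_test reports the F-gain, its p-value, and the df difference.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
x = rng.normal(size=150)
y = 1 + 0.5 * x + 0.3 * x**2 + rng.normal(size=150)
df = pd.DataFrame({"y": y, "xc": x - x.mean()})
df["xc2"] = df["xc"] ** 2

linear = smf.ols("y ~ xc", data=df).fit()
quadratic = smf.ols("y ~ xc + xc2", data=df).fit()
print(quadratic.compare_f_test(linear))    # (F, p, df_diff) for the gain from the x^2 term
```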

12. Are unweighted effects codes centered for equal group size? for unequal group size?

Yes, they are centered for equal group size, but not for unequal group size.

78. Explain what is meant by a "linear by linear" interaction. How many degrees of freedom does such an interaction have in Regression Analysis.

Ŷ = b1X + b2Z + b3XZ + b0. The above equation contains a linear-by-linear interaction, meaning that the regression of Y on X is linear at every value of Z or, equivalently, that the regression coefficient of Y on X changes at a constant rate as a function of changes in Z (p. 279, CCWA). The interaction is carried by the single XZ term, so it has 1 degree of freedom.

71. Rearrange the equation contained an interaction (above) into a simple regression equation showing the regression of Y on X at values of Z. Explain how the regression coefficient in the rearranged equation shows that the regression of Y on X depends on the value of Z.

Ŷ = (b1 + b3Z)X + (b0 + b2Z). So b1 + b3Z is the simple slope of the regression of Y on X. Since Z is part of this slope, changes in the value of Z change the slope (the change in Y for a 1-unit change in X), so the regression of Y on X clearly depends on the value of Z.

33. Given an ANOVA model with 3 treatment groups and 1 covariate, how is the overall treatment effect tested in multiple regression? What is the df for the overall treatment effect? What is the df for the covariate?

Ŷ = b0 + b1COV + b2C1 + b3C2, where COV is the covariate and C1 and C2 are the code variables for the 3 treatment groups (G − 1 codes). We test the overall treatment effect with a gain-in-prediction test on the treatment code variables. First run Ŷ = b0 + b1COV; then run Ŷ = b0 + b1COV + b2C1 + b3C2; and use the F-gain test. df numerator: 2 (the added coefficients). df denominator: n − p − 1 = n − 3 − 1. df for the covariate: 1 (1 df per individual coefficient).

18. Do you get different numbers in the analysis of regression summary table if you use weighted effects codes versus unweighted effects codes to code a categorical variable with unequal group size?

You DO get different numbers in the analysis of regression summary table if you have UNEQUAL group size. However, if you have EQUAL group size, you will get the same numbers (weighted effect and unweighted effects give us the same numbers in analysis of regression summary table if you have equal group size).

63. Interpret the b1 coefficient in two ways, assuming X has been centered before analysis.

b1=the slope of a tangent (straight) line to the curve at X-Xbar=0, in other words when X=Xbar. Within this regression model, since x has been centered, b1 also represents the average regression of Y on X.

64. Why must all lower order terms be included in a regression equation when the prediction from a higher order term is being examined for significance?

Equations with higher order terms must include all lower order terms because the effects of higher order terms are only pure when all lower order terms are partialled out; to test the contribution of a higher order term, we must consider its prediction over and above the lower order terms. The coefficient for the highest order term is an accurate reflection of curvilinearity at the highest level only if all lower order effects are partialled out. In other words, we don't want the linear components of the data mixed in; we only want to measure pure, unadulterated curvilinearity.

42. What are the three characterizations of errant data points?

leverage, distance, influence.

75. I will give you the equation for the standard error of a simple slope for the regression of Y on X at values of Z: se_bj = (s11 + 2Z·s13 + Z²·s33)^(1/2). Explain what terms from Sb (the covariance matrix of the regression coefficients) go into this expression for the standard error. To what does Z refer in this expression? Be able to compute this standard error if given the matrix Sb and a particular numeric value of Z (say, e.g., ZH = 20, ZL = 10). Also be able to compute the t-test for the simple slope and know the degrees of freedom for the test. Do all regressions of Y on X (all simple slopes) have the same value of the standard error, regardless of the value of Z?

The standard error of the simple slope takes into account the variance of b1 (s11), the variance of b3 (s33), and the covariance of b1 and b3 (s13), all of which come from Sb, the covariance matrix of the regression coefficients. The formula also includes Z, which refers to the particular value of Z at which the simple slope is evaluated. To compute the t-test for the simple slope, put the simple slope (b_YX at Z) in the numerator and its standard error (se_bYX at Z) in the denominator: t = b_YX at Z / se_bYX at Z, with df = n − p − 1. Simple slopes do NOT all have the same value of the standard error, because the value of Z changes the value of the standard error. (See the sketch below.)
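
A numeric sketch, on invented data, of pulling s11, s13, and s33 from the coefficient covariance matrix and forming the simple slope, its standard error, and the t-test:

```python
# Standard error of the simple slope b1 + b3*Z from the coefficient covariance matrix,
# and the corresponding t-test (toy data; the Z values chosen are arbitrary).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 200
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 1 + 0.4 * x + 0.3 * z + 0.2 * x * z + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x": x, "z": z})

fit = smf.ols("y ~ x * z", data=df).fit()
b, S = fit.params, fit.cov_params()        # S holds s11, s33, and s13

for zval in (1.0, -1.0):                   # e.g., a ZH and a ZL value
    slope = b["x"] + b["x:z"] * zval
    se = np.sqrt(S.loc["x", "x"] + 2 * zval * S.loc["x", "x:z"]
                 + zval**2 * S.loc["x:z", "x:z"])
    t = slope / se                         # df = n - p - 1 = n - 4 here
    print(f"Z = {zval:+.1f}: slope = {slope:.3f}, SE = {se:.3f}, t = {t:.2f}")
```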

27. Consider a 2 x 3 ANOVA with 2 levels of A (low, high) and 3 levels of B (low, moderate, high). The design matrix uses five code variables as predictors of the dependent variable. A. Using a series of regression equations, explain how you would find SSA, SSB and SSAB. B. What are the degrees of freedom for each effect?

ŷ = b0 + b1C1 + b2C2 + b3C3 + b4C4 + b5C5 (see handout)

35. Write a regression equation for a categorical predictor, a continuous predictor, and their interaction. Rearrange this equation into the simple regression equation for the regression of Y on X at values of the categorical variable C.

ŷ = b1X + b2C + b3XC + b0, rearranged: ŷ = (b1 + b3C)X + (b2C + b0). (b1 + b3C) is the simple slope for the regression of Y on X, which depends on the value of C; (b2C + b0) is the simple intercept, which also depends on the value of C.

