ACA Chapter 9 pt 2
R2 example
- The between- and within-groups degrees of freedom are included in the formula to take into account the number of participants and the number of groups used in the study. - Basically the numerator is a version of the between-groups variance estimate, and the denominator is a version of the total variance (between plus within). Consider once again the criminal record study. - S2Between = 21.70, dfBetween = 2, S2Within = 5.33, and dfWithin = 12. Thus, the proportion of the total variation accounted for by the variation between groups is (21.70)(2)/[(21.70)(2) + (5.33)(12)], which is .40 (or 40%). In terms of the formula: R^2 = (S2Between)(dfBetween) / [(S2Between)(dfBetween) + (S2Within)(dfWithin)].
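The arithmetic above can be sketched in Python (the function name is mine, not from the text):

```python
# Minimal sketch: proportion of variance accounted for (R^2) from the
# between- and within-groups variance estimates and their degrees of freedom.
def r_squared(s2_between, df_between, s2_within, df_within):
    numerator = s2_between * df_between
    return numerator / (numerator + s2_within * df_within)

# Criminal record study numbers from the text:
r2 = r_squared(21.70, 2, 5.33, 12)  # about .40, i.e., 40%
```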
For this reason, statisticians have developed a variety of procedures to use in these fishing expeditions. These procedures attempt to keep the overall risk of a
Type I error at some level like .05, while at the same time not too drastically reducing statistical power.
As a post hoc test, the Scheffé method has the advantage of being the most widely applicable method. why?
We say that because it is the only one that can be used when you are making relatively simple comparisons (such as the ones we have considered in which two groups are being compared), as well as when you are making more complex comparisons (for example, comparing the average of two groups to a third group).
in practice, in most research situations involving more than two groups, our real interest is not in an overall, or omnibus, difference among the several groups, but rather in
more specific comparisons. - For example, in the criminal record study, the researchers' prediction in advance would probably have been that the Criminal Record group would rate the defendant's guilt higher than both the No Information group and the Clean Record group. - If, in fact, the researchers had made such predictions, these predictions would be examples of what are called planned contrasts. (They are called "contrasts" because they contrast the results from specific groups.) -
With the t test, you took the difference between the two means and divided by the standard deviation. In the analysis of variance, you have
more than two means; so it is not obvious just what is the equivalent to the difference between the means—the numerator in figuring effect size.
if you make three planned contrasts at the .05 level, there is about
a .15 chance.
however, with multiple contrasts, if you use the .05 cutoff, you can actually have... why?
much more than a .05 chance of getting a significant result if the null hypothesis is true! - The reason is this: if you are making several contrasts (comparisons), each at the .05 level, the chance of any one of them coming out significant is more than .05. (It is like flipping coins: if you flip any one coin, it has only a 50% chance of coming up heads. But if you flip five coins, there is a lot better than a 50% chance that at least one of them will come up heads.)
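Assuming the contrasts are independent, the chance of at least one coming out significant by chance can be sketched as:

```python
# Sketch: probability that at least one of several independent contrasts,
# each tested at the given alpha, comes out significant by chance alone.
def chance_any_significant(alpha, n_contrasts):
    return 1 - (1 - alpha) ** n_contrasts

chance_any_significant(.05, 2)  # about .10, as the text notes for two contrasts
chance_any_significant(.05, 3)  # about .14 (the text rounds this to roughly .15)
```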
post hoc comparisons definition
multiple comparisons, not specified in advance; procedure conducted as part of an exploratory analysis after an analysis of variance.
bonferroni procedure definition
multiple-comparison procedure in which the total alpha percentage is divided among the set of comparisons so that each is tested at a more stringent significance level.
analysis of variance in research articles Researchers often report results of post hoc comparisons among
all pairs of means. - The most common method of doing this is by putting small letters by the means in the tables. - Usually, means with the same letter are not significantly different from each other; those with different letters are.
the method you learned earlier in the chapter emphasizes entire groups, comparing a variance based on differences among group means to a variance based on
averaging variances of the groups.
What if the between-groups and within-groups variance estimates are not available, as is often true in published studies? It is also possible to figure R2 directly from
F and the degrees of freedom. formula: R^2 = (F)(dfBetween) / [(F)(dfBetween) + dfWithin] - The proportion of variance accounted for is the F ratio multiplied by the between-groups degrees of freedom (the degrees of freedom for the between-groups population variance estimate), divided by the sum of the F ratio multiplied by the between-groups degrees of freedom, plus the degrees of freedom for the within-groups population variance estimate.
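A minimal sketch of this direct figuring (function name mine):

```python
# Sketch: R^2 figured directly from a reported F and its degrees of freedom.
def r_squared_from_f(f, df_between, df_within):
    numerator = f * df_between
    return numerator / (numerator + df_within)

# Using the overall criminal record result, F(2, 12) = 4.07:
r2 = r_squared_from_f(4.07, 2, 12)  # about .40, matching the earlier figuring
```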
Planning Sample Size For example, suppose you are planning a study involving four groups and you expect a small effect size (and will use the .05 significance level).
For 80% power, you would need 274 participants in each group, a total of 1,096 in all. However, suppose you could adjust the research plan so that it was now reasonable to predict a large effect size (perhaps by using more accurate measures and a stronger experimental procedure). Now you would need only 18 in each of the four groups, for a total of 72.
Sometimes, however, researchers take a more exploratory approach, for example, comparing all the different pairings of means to discover which ones do and do not differ significantly. (We call this making
pairwise comparisons, because you are comparing all possible pairings of means.) - That is, after the study is done, the researcher is fishing through the results to see which groups differ from each other - These are called post hoc comparisons (or a posteriori comparisons) because they are after the fact and not planned in advance.
as you learned in the preceding section on planned contrasts, researchers often
plan specific comparisons based on theory or practical considerations.
analysis of variance in research articles Note that it is also common for researchers to report
planned contrasts using t tests. - These are not ordinary t tests for independent means, but rather special t tests for the comparisons that are mathematically equivalent to the method we described— that is, the results in terms of significance are identical
proportion of variance accounted for (R2) definition
proportion of the total variation of scores from the grand mean that is accounted for by the variation between the means of the groups.
the group's mean's deviation from the grand mean is the basis for the
between-groups population variance estimate.
the between-groups population variance estimate, however, in a planned contrast is different from the between-groups variance estimate in the overall analysis. why?
it is different because in a planned contrast you are interested in the variation only between a particular pair of means - Specifically, in a planned contrast between two group means, you figure the between-groups population variance estimate with the usual two-step procedure, but using just the two means of interest - Once you have the two variance estimates for the planned contrast, you figure the F in the usual way, and compare it to a cutoff from the F table based on the df that go into the two estimates, which are the same as the overall analysis for dfWithin and are usually exactly 1 for dfBetween (because the between estimate is based on two means, and 2 - 1 = 1).
the Scheffé test definition
method of figuring the significance of post hoc comparisons that takes into account all possible comparisons that could be made
to use the Scheffé test, you first
figure the F for your comparison in the usual way. - But then you divide that F by the overall study's dfBetween (the number of groups minus 1). - You then compare this much smaller F to the overall study's F cutoff.
the structural model method emphasizes
individual scores. - it compares a variance based on deviations of individual scores' groups' means from the grand mean to a variance based on deviations of individual scores from their group's mean
the proportion of variance accounted for is a useful measure of effect size because
it has the direct meaning suggested by its name - researchers are familiar with R2 from its use in regression [see Chapter 12] and its square root, R, is a kind of correlation coefficient that is very familiar to most researchers
the methods we have just described for figuring the within-groups and between-groups population variance estimates using the structural model approach give exactly the same result as the
methods you learned earlier in the chapter
R2 has a
minimum of 0 and a maximum of 1 - However, in practice it is rare in most psychology research for an analysis of variance to have an R2 even as high as .20.
Controversy: Omnibus Tests versus Planned Contrasts The analysis of variance is commonly used in situations comparing three or more groups. (If you are comparing two groups, you can use a t test.) However, following the logic we introduced earlier, Rosnow and Rosenthal (1989) argue that
such diffuse or omnibus tests are not very useful. They say that, in almost all cases when we test the overall difference among three or more groups, "we have tested a question in which we almost surely are not interested" (p. 1281). In which questions are we interested? We are interested in specific comparisons, such as between two particular groups
structural model: SSwithin
sum of squared deviations of each score from its group's mean.
structural model: SStotal
sum of squared deviations of each score from the overall mean of all scores, completely ignoring the group a score is in
structural model: SSbetween
sum of squared deviations of each score's group's mean from the grand mean.
in post hoc comparisons, all possible comparisons have to be
taken into account when figuring the overall chance of any one of them turning out significant. - Using the Bonferroni procedure for post hoc comparisons is safe, in the sense that you are confident you won't get too many results significant by chance. - But in post hoc comparisons there are often so many comparisons to consider that the Bonferroni procedure divides the overall significance level into such a small per-comparison level that getting any one comparison to come out significant would be a long shot. - For example, with four groups, there are six possible pairs to compare; so using a Bonferroni correction and an overall significance level of .05, you would have to test each comparison at .05/6 or .0083. If there are five groups, there are 10 possible comparisons; .05 overall becomes .005 for each comparison. And so forth. - Thus, the power for any one comparison becomes very low.
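The pair counts and corrected levels in the example can be sketched as (function name mine):

```python
from math import comb

# Sketch: number of pairwise comparisons among k groups and the resulting
# Bonferroni-corrected per-comparison significance level.
def bonferroni_pairwise(overall_alpha, n_groups):
    n_pairs = comb(n_groups, 2)  # all possible pairings of means
    return n_pairs, overall_alpha / n_pairs

bonferroni_pairwise(.05, 4)  # four groups: six pairs, each tested at .0083
bonferroni_pairwise(.05, 5)  # five groups: ten pairs, each tested at .005
```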
common name for this measure of effect size (besides R2) is η2
the Greek letter eta squared; η2 is also known as the correlation ratio
you may see some of these referred to in articles you read, described by the names of their developers;
the Scheffé test and Tukey test are the most widely used, with the Newman-Keuls and Duncan procedures almost as common. - Which procedure is best under various conditions remains a topic of dispute.
the general principle is that the Bonferroni corrected cutoff you use is
the true significance level you want divided by the number of planned contrasts. - Thus, if you want to test your hypothesis at the .01 level and you will make three planned contrasts, you would test each planned contrast using the .0033 significance level. That is, .01/3 = .0033.
R2 is the proportion of the
total variation of scores from the grand mean that is accounted for by the variation between the means of the groups. - (In other words, you consider how much of the variance in the measured variable— such as ratings of guilt—is accounted for by the variable that divides up the groups— such as what experimental condition one is in.)
in the structural model method, when figuring the within-groups variance estimate, you never actually figure the
variance estimate for each group and average them. - similarly, for the between-groups estimate, with the structural model method, you never multiply anything by the number of scores in each sample. - the point is that, with either method, you get the same within-groups and between-groups variance estimates, and thus the same F and the same overall result.
the structural model method focuses directly on
what contributes to the divisions of the deviations of scores from the grand mean.
the method earlier in the chapter focuses directly on
what contributes to the overall population variance estimates
Controversy: Omnibus Tests versus Planned Contrasts Rosnow and Rosenthal (1989; see also Furr & Rosenthal, 2003) advocate that,
when figuring an analysis of variance, you should analyze only planned contrasts. These should replace entirely the overall F test (that is, the diffuse or omnibus F test) for whether you can reject the hypothesis of no difference among population means. Traditionally, planned contrasts, when used at all, are a supplement to the overall F test. So this has been a rather revolutionary idea.
as we have noted, rejecting the null hypothesis in an analysis of variance implies that the population means are not all the same, but it does not tell you
which population means differ from which.
analysis of variance in research articles example
Returning again to the criminal record study example, we could describe the analysis of variance results this way: "The means for the Criminal Record, Clean Record, and No Information groups were 8.0, 4.0, and 5.0, respectively. These were significantly different, F(2, 12) = 4.07, p < .05. We also carried out two planned contrasts: The Criminal Record versus the No Information condition, F(1, 12)= 4.22, p < .10; and the Criminal Record versus the Clean Record condition, F(1, 12) = 7.50, p < .05. Although the first contrast approached significance, after a Bonferroni correction (for two planned contrasts), it does not even reach the .10 level."
Of course, you might think, "I'll just test the pairs of means that have the biggest difference so that the number of comparisons won't be so great." Unfortunately, this strategy won't work. why?
Since you did not decide in advance which pairs of means would be compared, when exploring after the fact, you have to take into account that any of the pairs might have been the biggest ones. - So unless you made specific predictions in advance—and had a sound theoretical or practical basis for those predictions—all the possible pairings have to be counted.
Controversy: Omnibus Tests versus Planned Contrasts main concern of solely using planned contrasts
The main concern is much like the issue we considered in Chapter 4 regarding one-tailed and two-tailed tests. - If we adopt the highly targeted, planned contrasts recommended by Rosnow and Rosenthal, critics argue, we lose out on finding unexpected differences not initially planned, and we put too much control of what is found in the hands of the researcher (versus nature).
R2 formula
The proportion of variance accounted for is the between-groups population variance estimate multiplied by the between-groups degrees of freedom, divided by the sum of the between-groups population variance estimate multiplied by the between-groups degrees of freedom, plus the within-groups population variance estimate multiplied by the within-groups degrees of freedom. - formula: R^2 = (S2Between)(dfBetween) / [(S2Between)(dfBetween) + (S2Within)(dfWithin)]
planned contrasts: when you reject the null hypothesis in an analysis of variance, this implies that the population means are not all the same. what is not clear, however, is
which population means differ from which. - For example, in the criminal record study, the Criminal Record group jurors had the highest ratings for the defendant's guilt (M = 8); the No Information group jurors, the second highest (M = 5); and the Clean Record group jurors, the lowest (M = 4). - From the analysis of variance results, we concluded that the true means of the three populations these groups represent are not all the same. (That is, the overall analysis of variance was significant.) - However, we do not know which populations' means are significantly different from each other.
the score's deviations from its group's mean is the basis for the...
within-groups population variance estimate
As we have noted in previous chapters, determining power is especially useful when interpreting the practical implication of a nonsignificant result. For example, suppose that
you have read a study using an analysis of variance with four groups of 30 participants each, and there is a nonsignificant result at the .05 level. Table 9-9 shows a power of only .13 for a small effect size. - This suggests that even if such a small effect exists in the population, this study would be very unlikely to have come out significant. But the table shows a power of .96 for a large effect size. - This suggests that if a large effect existed in the population, it almost surely would have shown up in that study.
there is a problem when you carry out several planned contrasts. Normally, when you set the .05 significance level, this means
you have selected a cutoff so extreme that you have only a .05 chance of getting a significant result if the null hypothesis is true.
bonferroni procedure (Dunn's test)
- A widely used approach for dealing with this problem with planned contrasts - The idea of the Bonferroni procedure is that you use a more stringent significance level for each contrast. The result is that the overall chance of any one of the contrasts being mistakenly significant is still reasonably low. - For example, if each of two planned contrasts used the .025 significance level, the overall chance of any one of them being mistakenly significant would still be less than .05. (That is, .05/2 = .025.) With three planned contrasts, you could use the .017 level (.05/3 = .017).
analysis of variance in research articles
- Analyses of variance (of the kind we have considered in this chapter) are usually described in a research article by giving the F, the degrees of freedom, and the significance level. For example, "F(3, 68) = 5.21, p < .01." - The means for the groups usually are given in a table, although if there are only a few groups and only one or a few measures, the means may be given in the regular text of the article. Usually, there is some report of additional analyses, such as planned contrasts.
an example of a planned contrast
- Consider the planned contrast of the Criminal Record group (M = 8) to the No Information group (M = 5). - The within-groups population variance estimate for a planned contrast is always the same as the within-groups estimate from the overall analysis: in the criminal record example, S2Within was 5.33. Steps for the between-groups estimate: 1. Estimate the variance of the distribution of means: add up the sample means' squared deviations from the grand mean and divide by the number of means minus 1. The grand mean for these two means is 6.5 [that is, (8 + 5)/2 = 6.5], and dfBetween when two means are being compared is 2 - 1 = 1. Thus, S2M = 4.5. 2. Figure the estimated variance of the population of individual scores: multiply the variance of the distribution of means by the number of scores in each group. There are five scores in each group in this study. Thus, S2Between = 22.5. - Thus, for this planned contrast, F = S2Between/S2Within = 22.5/5.33 = 4.22. The .05 cutoff F for df = 1, 12 is 4.75. Thus, the planned contrast is not significant. You can conclude that the three means differ overall (from the original analysis of variance, which was significant), but you cannot conclude specifically that being in the Criminal Record condition makes a person rate guilt differently from being in the No Information condition.
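The two-step figuring above can be sketched in Python (function name mine; it assumes equal group sizes, and s2_within comes from the overall analysis):

```python
# Sketch of the planned-contrast figuring described in the text.
def planned_contrast_f(m1, m2, n_per_group, s2_within):
    grand_mean = (m1 + m2) / 2
    # Step 1: estimated variance of the distribution of means (dfBetween = 2 - 1 = 1)
    s2_m = ((m1 - grand_mean) ** 2 + (m2 - grand_mean) ** 2) / 1
    # Step 2: multiply by the number of scores in each group
    s2_between = s2_m * n_per_group
    return s2_between / s2_within

# Criminal Record (M = 8) vs. No Information (M = 5), n = 5, S2Within = 5.33:
f = planned_contrast_f(8, 5, 5, 5.33)  # about 4.22; cutoff F(1, 12) = 4.75
```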
analysis of variance figuring
- First, you figure the three sums of squared deviations (SSTotal, SSWithin, and SSBetween). - The next step is to check for accuracy. You do this following the principle that the sum of squared deviations of each score from the grand mean comes out to the total of the other two kinds of sums of squared deviations. - The degrees of freedom, the next step shown in the table, is figured the same way as you learned earlier in the chapter - Then, the table shows the figuring of the two crucial population variance estimates. - You figure them by dividing each sum of squared deviations by the appropriate degrees of freedom - Finally, the table shows the figuring of the F ratio in the usual way—dividing the between-groups variance estimate by the within-groups variance estimate. - All these results, degrees of freedom, variance estimates, and F come out exactly the same (within rounding error) as we figured earlier in the chapter.
the Scheffé method/test disadvantage
- Its disadvantage, however, is that, compared to the Tukey and other procedures, it is the most conservative. That is, for any given post hoc comparison, its chance of being significant using the Scheffé is usually still better than the Bonferroni, but worse than the Tukey or any of the other post hoc contrasts. -
principles of the structural model: from the sums of squared deviations to the population variance estimates
- Now we are ready to use these sums of squared deviations to figure the needed population variance estimates for an analysis of variance. - To do this, you divide each sum of squared deviations by an appropriate degrees of freedom. - The between-groups population variance estimate (S2Between or MSBetween) is the sum of squared deviations of each score's group's mean from the grand mean (SSBetween) divided by the degrees of freedom on which it is based (dfBetween, the number of groups minus 1 --> K - 1) - The between-groups population variance estimate is the sum of squared deviations of each score's group's mean from the grand mean divided by the degrees of freedom for the between-groups population variance estimate. - formula: S2Between = ∑(M - GM)^2 / dfbetween or MSbetween = SSbetween / dfbetween - The within-groups population variance estimate (S2Within or MSWithin) is the sum of squared deviations of each score from its group's mean (SSWithin) divided by the total degrees of freedom on which this is based (dfWithin; the sum of the degrees of freedom over all the groups—the number of scores in the first group minus 1, plus the number in the second group minus 1, etc.) - formula: S2Within = ∑(X - M)^2 / dfwithin or MSwithin = SSwithin / dfwithin - Notice that we have ignored the sum of squared deviations of each score from the grand mean (SSTotal) - This sum of squares is useful mainly for checking our arithmetic: SSTotal = SSWithin + SSBetween. - The within-groups population variance estimate is the sum of squared deviations of each score from its group's mean divided by the degrees of freedom for the within-groups population variance estimate.
principles of the structural model
- dividing up the deviations - summing the squared deviations - from the sums of squared deviations to the population variance estimates
the assumptions for the analysis of variance are basically the same as for the t test for independent means
- the cutoff F ratio from the table (or the exact p level from the computer output) is strictly accurate only when the populations follow a normal curve and have equal variances - as with the t test, in practice the cutoffs are reasonably accurate even when your populations are moderately far from normal and have moderately different variances - As a general rule, if the variance estimate of the group with the largest estimate is no more than four or five times that of the smallest and the sample sizes are equal, the conclusions using the F distribution should be adequately accurate - As with the t test for independent means, the type of analysis of variance you are learning about in this chapter assumes that all of the scores in the groups are independent from each other (that is, none of the scores within each group or across the groups are paired or matched in any way).
principles of the structural model: summing the squared deviations
- the next step in the structural model is to square each of these deviation scores and add up the squared deviations of each type for all the participants - this gives a sum of squared deviations for each type of deviation score - it turns out that the sum of squared deviations of each score from the grand mean is equal to (a) the sum of the squared deviations of each score from its group's mean plus (b) the sum of the squared deviations of each score's group's mean from the grand mean. - formula: the sum of squared deviations of each score from the grand mean is the sum of squared deviations of each score from its group's mean plus the sum of squared deviations of each score's group's mean from the grand mean. - formula: ∑(X - GM)^2 = ∑( X - M)^2 + ∑(M - GM)^2 or SStotal = SSwithin + SSbetween - SSTotal is the sum of squared deviations of each score from the grand mean, completely ignoring the group a score is in. - SSWithin is the sum of squared deviations of each score from its group's mean, added up for all participants - SSBetween is the sum of squared deviations of each score's group's mean from the grand mean—again, added up for all participants. - This rule applies only to the sums of the squared deviations. For each individual score, the deviations themselves, but not the squared deviations, always add up.
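The summing rule can be verified numerically on made-up data (these scores are illustrative only, not the criminal record data):

```python
# Sketch verifying that SStotal = SSwithin + SSbetween.
groups = [[10, 8, 7], [5, 4, 6], [3, 5, 4]]
scores = [x for g in groups for x in g]
grand_mean = sum(scores) / len(scores)
group_means = [sum(g) / len(g) for g in groups]

# Sum of squared deviations of each score from the grand mean:
ss_total = sum((x - grand_mean) ** 2 for x in scores)
# ...of each score from its group's mean:
ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
# ...of each score's group's mean from the grand mean (once per score):
ss_between = sum((m - grand_mean) ** 2 for g, m in zip(groups, group_means) for x in g)

assert abs(ss_total - (ss_within + ss_between)) < 1e-9  # the rule holds
```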
structural model
- the structural model provides a different and more flexible way of figuring the two population variance estimates - understanding the structural model provides deeper insights into the underlying logic of the analysis of variance, including helping you understand the way analysis of variance results are laid out in computer printouts. - the structural method more easily handles the situation in which the number of individuals in each group is not equal. - the structural model method is related to a fundamental mathematical approach to which we want to expose those of you who might be going on to more advanced statistics courses. - way of understanding the analysis of variance as a division of the deviation of each score from the overall mean into two parts: the variation in groups (its deviation from its group's mean) and the variation between groups (its group's mean's deviation from the overall mean); an alternative (but mathematically equivalent) way of understanding the analysis of variance.
however, if you are using tables, normally only the
.01 or .05 cutoffs would be available. - thus, even though almost all researchers use computers for their analyses, this situation has led to some traditions that are still followed today. - Specifically, for simplicity, when the Bonferroni corrected cutoff might be .017 or even .025, researchers often use the .01 significance level. Also, if there are only two planned contrasts (or even three), it is common for researchers not to correct at all.
cohen's conventions for R2 are
.01, a small effect size; .06, a medium effect size; and .14, a large effect size.
in fact, if you make two contrasts, each at the .05 significance level, there is about a
.10 chance that at least one will come out significant just by chance (that is, that at least one would come out significant even if the null hypothesis is true).
power Consider a planned study with five groups of 10 participants each and an expected large effect size (.14). Using the .05 significance level, the study would have a power of
0.56. Thus, even if the research hypothesis is in fact true and has a large effect size, there is only a little greater than even chance (56%) that the study will come out significant.
steps for analysis of variance (when sample sizes are equal)
1. Restate the question as a research hypothesis and a null hypothesis about the populations.
2. Determine the characteristics of the comparison distribution.
   a. The comparison distribution is an F distribution.
   b. The between-groups (numerator) degrees of freedom is the number of groups minus 1 (dfbetween = Ngroups - 1).
   c. The within-groups (denominator) degrees of freedom is the sum of the degrees of freedom in each group (the number in the group minus 1) (dfwithin = df1 + df2 + ... + dflast).
3. Determine the cutoff sample score on the comparison distribution at which the null hypothesis should be rejected.
   a. Decide the significance level.
   b. Look up the appropriate cutoff in an F table, using the degrees of freedom from Step 2.
4. Determine your sample's score on the comparison distribution. This will be an F ratio.
   a. Figure the between-groups population variance estimate (S^2Between or MSBetween). Figure the means of each group. Then:
      A. Estimate the variance of the distribution of means: S^2M = ∑(M - GM)^2 / dfbetween.
      B. Figure the estimated variance of the population of individual scores: S^2Between or MSBetween = (S^2M)(n).
   b. Figure the within-groups population variance estimate (S^2Within or MSWithin).
      A. Figure population variance estimates based on each group's scores: for each group, S^2 = ∑(X - M)^2 / (n - 1) = SS/df.
      B. Average these variance estimates: S^2Within or MSWithin = (S^21 + S^22 + ... + S^2last) / Ngroups.
   c. Figure the F ratio: F = S^2Between / S^2Within or F = MSBetween / MSWithin.
5. Decide whether to reject the null hypothesis: compare the scores from Steps 3 and 4.
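Step 4 above (the figuring itself) can be sketched as follows (names are mine, not the text's; equal sample sizes assumed):

```python
# Sketch of the equal-n one-way ANOVA figuring.
def one_way_anova(groups):
    n = len(groups[0])  # equal sample sizes assumed
    means = [sum(g) / n for g in groups]
    grand_mean = sum(means) / len(means)
    df_between = len(groups) - 1
    df_within = sum(len(g) - 1 for g in groups)
    # Between-groups estimate: variance of the means, times n per group
    s2_m = sum((m - grand_mean) ** 2 for m in means) / df_between
    s2_between = s2_m * n
    # Within-groups estimate: average of each group's variance estimate
    group_vars = [sum((x - m) ** 2 for x in g) / (n - 1)
                  for g, m in zip(groups, means)]
    s2_within = sum(group_vars) / len(group_vars)
    return s2_between / s2_within, df_between, df_within
```

You would then compare the returned F against the cutoff from an F table for (df_between, df_within).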
Controversy: Omnibus Tests versus Planned Contrasts consider an example.
Orbach and colleagues (1997) compared a group of suicidal mental hospital patients (individuals who had made serious suicide attempts), non-suicidal mental hospital patients with similar diagnoses, and a control group of volunteers from the community. - The purpose of the study was to test the theory that suicidal individuals have a higher tolerance for physical pain. The idea is that their higher pain threshold makes it easier for them to do the painful acts usually involved in suicide. - The researchers carried out standard pain threshold and other sensory tests and administered a variety of questionnaires to all three groups, and they describe their planned-contrast analysis in the article (quoted passage not reproduced in these notes). - The study by Orbach and colleagues exemplifies Rosnow and Rosenthal's advice to use planned contrasts instead of an overall analysis of variance. But, although the idea was originally proposed more than two decades ago, this approach has not yet been widely adopted and is still controversial.
example of Scheffé test
Recall that for the comparison of the Criminal Record group versus the No Information group, we figured an F of 4.22. Since the overall dfBetween in that study was 2 (there were three groups), for a Scheffé test, you would actually consider the F for this contrast to be an F of only 4.22/2 = 2.11. You would then compare this Scheffé corrected F of 2.11 to the cutoff F for the overall between effect (in this example, the F for df = 2, 12), which was 3.89. Thus, the comparison is not significant using the Scheffé test.
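The correction in this example is simple enough to sketch directly (function name mine):

```python
# Sketch of the Scheffé correction: divide the contrast's F by the overall
# study's dfBetween, then compare to the overall study's cutoff F.
def scheffe_corrected_f(contrast_f, overall_df_between):
    return contrast_f / overall_df_between

scheffe_corrected_f(4.22, 2)  # 2.11, below the overall cutoff F of 3.89
```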
second example of planned contrast
What about the other planned contrast of the Criminal Record group (M = 8) to the Clean Record group (M = 4)? For the between-groups population variance estimate, steps: 1. Estimate the variance of the distribution of means: add the sample means' squared deviations from the grand mean and divide by the number of means minus 1. The grand mean for these two means is (8 + 4)/2 = 6.0 and dfBetween = 2 - 1 = 1. Thus, S2M = 8.0. 2. Figure the estimated variance of the population of individual scores: multiply the variance of the distribution of means by the number of scores in each group: S2Between = 40.0. - The within-groups estimate, again, is the same as we figured for the overall analysis—5.33. Thus, F = S2Between/S2Within = 40.0/5.33 = 7.50. This F of 7.50 is larger than 4.75 (the .05 cutoff F for df = 1, 12), which means that the planned contrast is significant. Thus, you can conclude that being in the Criminal Record condition makes a person rate guilt differently from being in the Clean Record condition.
analysis of variance table
chart showing the major elements in figuring an analysis of variance using the structural model approach. - lays out the results of an analysis of variance based on the structural model method. These kinds of charts are automatically produced by most analysis of variance computer programs - A standard analysis of variance table has five columns - The first column in a standard analysis of variance table is labeled "Source"; it lists the type of variance estimate or deviation score involved ("between" [groups], "within" [groups], and "total") - The next column is usually labeled "SS"(sum of squares); it lists the different types of sums of squared deviations. - The third column is "df" (the degrees of freedom of each type). - The fourth column is "MS" (mean square); this refers to mean squares, that is, MS is SS divided by df, the variance estimate. MS is, as usual, the same thing as S2. However, in an analysis of variance table the variance is almost always referred to as MS. - The last column is "F," the F ratio. (In a computer printout there may be additional columns, listing the exact p value and possibly effect size or confidence intervals.) - Each row of the table refers to one of the variance estimates. - The first row is for the between-groups variance estimate. It is usually listed under Source as "Between" or "Group," although you will sometimes see it called "Model" or "Treatment." - The second row is for the within-groups variance estimate, though it is sometimes labeled as "Error." - The final row is for the sum of squares based on the total deviation of each score from the grand mean. Note, however, that computer printouts will sometimes use a different order for the columns and will sometimes omit either SS or MS, but not both.
planned contrast
comparison in which the particular means to be compared were decided in advance. also called planned comparison.
principles of the structural model: dividing up the deviations the structural model is all about
deviations - to start with, there is the deviation of a score from the grand mean - in the criminal record example earlier in the chapter, the grand mean of the 15 scores was 85/15 = 5.67. - the deviation from the grand mean is just the beginning - you can think of the deviation from the grand mean as having two parts: 1. the deviation of the score from the mean of its group 2. the deviation of the mean of its group from the grand mean ex. Consider a participant in the criminal record study who rated the defendant's guilt as a 10. - The grand mean of all participants' guilt ratings was 5.67. This person's score has a total deviation of 4.33 (that is, 10 - 5.67 = 4.33). - The mean of the Criminal Record group by itself was 8. Thus, the deviation of this person's score from his or her group's mean is 2 (that is, 10 - 8 = 2), and the deviation of that group's mean from the grand mean is 2.33 (that is, 8 - 5.67 = 2.33). Note that these two deviations (2 and 2.33) add up to the total deviation of 4.33. This is shown in Figure 9-7. We encourage you to study this figure until you grasp it well.
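The split for this participant can be checked in a few lines:

```python
# Sketch of the deviation split for the participant described above.
score, group_mean, grand_mean = 10, 8, 5.67
total_dev = score - grand_mean         # 4.33: deviation from the grand mean
within_dev = score - group_mean        # 2: deviation from the group's mean
between_dev = group_mean - grand_mean  # 2.33: group mean's deviation

# The two parts always add back up to the total deviation:
assert abs(total_dev - (within_dev + between_dev)) < 1e-9
```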
if you are doing your analyses on a computer, it gives
exact significance probabilities as part of the output—that is, it might give a p of .037 or .0054, not just whether you are beyond the .05 or .01 level.
