Psy10B Final


reverse causality

- Even if you think that X is predicting Y (and talk about it that way, and presume it is the case when you conduct your correlation analysis), it could be just as likely that Y actually causes X

each test has a risk of a Type I error, and the more tests you do, the greater the risk. For this reason, researchers often make a distinction between the testwise alpha level and the experimentwise alpha level


Specifically, when you obtain a significant F-ratio (reject H0), it simply indicates that somewhere among the entire set of mean differences there is at least one that is statistically significant.

In other words, the overall F-ratio only tells you that a significant difference exists; it does not tell you exactly which means are significantly different and which are not.

An independent-measures t test produced a t statistic with df = 20. If the same data had been evaluated with an analysis of variance, what would be the df values for the F-ratio?

1, 20

The repeated-measures ANOVA is used to evaluate mean differences in two general research situations:

1. An experimental study in which the researcher manipulates an independent variable to create two or more treatment conditions, with the same group of individuals tested in all of the conditions. 2. A nonexperimental study in which the same group of individuals is simply observed at two or more different times.

In computing degrees of freedom, there are two important considerations to keep in mind:

1. Each df value is associated with a specific SS value. 2. Normally, the value of df is obtained by counting the number of items that were used to calculate SS and then subtracting 1. For example, if you compute SS for a set of n scores, then df = n - 1.

Generally, H0 falls into one of the following categories:

1. No Preference, Equal Proportions. The null hypothesis often states that there is no preference among the different categories. In this case, H0 states that the population is divided equally among the categories. 2. No Difference from a Known Population. The null hypothesis can state that the proportions for one population are not different from the proportions that are known to exist for another population.

there are situations in which transforming scores into categories might be a better choice.

1. Occasionally, it is simpler to obtain category measurements. 2. The original scores may violate some of the basic assumptions that underlie certain statistical procedures. 3. The original scores may have unusually high variance. 4. Occasionally, an experiment produces an undetermined, or infinite, score.

the numerator of the F-ratio: between-treatments variance

1. Systematic Differences Caused by the Treatments. It is possible that the different treatment conditions really do have different effects and, therefore, cause the individuals' scores in one condition to be higher (or lower) than in another. Remember that the purpose for the research study is to determine whether or not a treatment effect exists. 2. Random, Unsystematic Differences. Even if there is no treatment effect, it is possible for the scores in one treatment condition to be different from the scores in another. These differences are random and unsystematic and are classified as error variance.

A correlation is a numerical value that describes and measures three characteristics of the relationship between X and Y. These three characteristics are as follows

1. The Direction of the Relationship. The sign of the correlation, positive or negative, describes the direction of the relationship. 2. The Form of the Relationship. The most common use of correlation is to measure straight-line relationships. 3. The Strength or Consistency of the Relationship. Finally, the correlation measures the consistency of the relationship. For a linear relationship, for example, the data points could fit perfectly on a straight line. Every time X increases by one point, the value of Y also changes by a consistent and predictable amount.

the line drawn through the data in a scatter plot serves several purposes.

1. The line makes the relationship between X and Y easier to see. 2. The line identifies the center, or central tendency, of the relationship, just as the mean describes central tendency for a set of scores. Thus, the line provides a simplified description of the relationship. 3. Finally, the line can be used for prediction. The line establishes a precise, one-to-one relationship between each X value and a corresponding Y value.

The two-factor ANOVA is composed of three distinct hypothesis tests:

1. The main effect of factor A (often called the A-effect). Assuming that factor A is used to define the rows of the matrix, the main effect of factor A evaluates the mean differences between rows. 2. The main effect of factor B (called the B-effect). Assuming that factor B is used to define the columns of the matrix, the main effect of factor B evaluates the mean differences between columns. 3. The interaction (called the A × B interaction). The interaction evaluates mean differences between treatment conditions that are not predicted from the overall main effects from factor A or factor B.

measures differences caused by (within)

1. random, unsystematic factors

A researcher reports an F-ratio with df = 3, 36 for an independent-measures experiment. How many treatment conditions were compared in this experiment?

4

outliers

An outlier is an individual with X and/or Y values that are substantially different (larger or smaller) from the values obtained for the other individuals in the data set - The data point of a single outlier can have a dramatic influence on the value obtained for the correlation.

The process of conducting pairwise comparisons involves performing a series of separate hypothesis tests, and each of these tests includes the risk of a Type I error.

As you do more and more separate tests, the risk of a Type I error accumulates and is called the experimentwise alpha level

How does MSerror in the denominator of the F-ratio increase statistical power?

Because MSerror eliminates individual differences, it is a smaller denominator, which produces a larger F-ratio and makes a treatment effect easier to detect.

Which of the following can often help reduce the variance caused by individual differences in a single-factor design?

Create a factorial design using a participant variable (such as age) as a second factor.

Because the Pearson correlation describes the pattern formed by the data points, any factor that does not change the pattern also does not change the correlation.

For example, if 5 points were added to each of the X values in Figure 15.4, then each data point would move to the right. However, because all of the data points shift to the right, the overall pattern is not changed, it is simply moved to a new location.

H0 Version 1

For this version of H0, the data are viewed as a single sample with each individual measured on two variables. The goal of the chi-square test is to evaluate the relationship between the two variables. The null hypothesis states that there is no relationship; the alternative hypothesis states that there is a relationship between the two variables.

H0 Version 2

For this version of H0, the data are viewed as two (or more) separate samples representing two (or more) populations or treatment conditions. The goal of the chi-square test is to determine whether there are significant differences between the populations. -The null hypothesis for the chi-square test states that the populations have the same proportions (same shape). -The alternative hypothesis, H1, simply states that the populations have different proportions

hypothesis for regression

H0: the slope of the regression equation (b or beta) is zero. H1: the slope of the regression equation (b or beta) is not zero.

The linear equation you obtain is then used to generate predicted Y values for any known value of X.

However, it should be clear that the accuracy of this prediction depends on how well the points on the line correspond to the actual data points, that is, on the amount of error between the predicted values, Ŷ, and the actual Y scores.

However, removing individual differences is an advantage only when the treatment effects are reasonably consistent for all of the participants.

If the treatment effects are not consistent across participants, the individual differences tend to disappear and the value in the denominator is not noticeably reduced by removing them.

you should notice that a correlation can never be larger than +1.00 or smaller than -1.00.

If your calculations produce a value outside this range, then you should realize immediately that you have made a mistake.

Why is MSerror smaller than MSwithin?

In MSerror, individual differences are eliminated because it's a repeated-measures design

The denominator of the F-ratio: error variance

In a repeated-measures design, however, it is possible that individual differences can cause systematic differences between the scores within treatments. -To eliminate the individual differences from the denominator of the F-ratio, we measure the individual differences and then subtract them from the rest of the variability. The variance that remains is a measure of pure error without any systematic differences that can be explained by treatment effects or by individual differences.

a post hoc test enables you to go back through the data and compare the individual treatments two at a time.

In statistical terms, this is called making pairwise comparisons.

These three variances are computed as follows:

MSA = SSA/dfA, MSB = SSB/dfB, MSAxB = SSAxB/dfAxB

this sum of squares is commonly called SSresidual because it is based on the remaining distance between the actual Y scores and the predicted values.

SSresidual = Σ(Y − Ŷ)²

In summary, when treatment effects are consistent from one individual to another, the individual differences also tend to be consistent and relatively large.

The large individual differences get subtracted from the denominator of the F-ratio producing a larger value for F and increasing the likelihood that the F-ratio will be in the critical region.

the regression equation for Y is the linear equation Ŷ = bX + a, where the constants b and a are determined by the least-squares solutions given below.

This equation results in the least squared error between the data points and the line.

Under what circumstances would a study using a repeated-measures ANOVA have a distinct advantage over a study using an independent-measures ANOVA?

When there are few participants available and individual differences are large.

The sum of all the scores in the research study (the grand total) is identified by G.

You can compute G by adding up all N scores or by adding up the treatment totals: G = ΣT.

the sum of products of deviations

a procedure for measuring the amount of covariability between X and Y; SP = ΣXY − (ΣX)(ΣY)/n

sum of t^2

square each treatment total T and then add the squared values; note that ΣT² (the sum of squared totals) is different from (ΣT)², which equals G².

solutions for b and a

b = SP/SSX, where SP is the sum of products and SSX is the sum of squares for the X scores. -a = MY − bMX
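
To make the computation concrete, here is a minimal Python sketch (not from the course materials; the scores are hypothetical) that computes SP, SSX, and the regression constants b and a:

```python
import numpy as np

# Hypothetical scores for n = 5 individuals.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 4.0, 5.0, 4.0, 7.0])

# SP = sum of products of deviations = ΣXY - (ΣX)(ΣY)/n
SP = np.sum(X * Y) - np.sum(X) * np.sum(Y) / len(X)
# SSX = sum of squared deviations for the X scores
SSX = np.sum((X - X.mean()) ** 2)

b = SP / SSX                  # slope
a = Y.mean() - b * X.mean()   # Y-intercept: a = MY - b*MX
print(f"Y-hat = {b:.2f}X + {a:.2f}")  # Y-hat = 1.00X + 1.40
```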

testwise alpha level

is the risk of a Type I error, or alpha level, for an individual hypothesis test.

ANOVA is necessary to evaluate mean differences among three or more treatments, in order to

minimize the risk of a Type I error

The no-difference hypothesis is used when a specific population distribution is already known.

the alternative hypothesis (H1) simply states that the population proportions are not equal to the values specified by the null hypothesis.

Factors that Influence Outcome of a Repeated-Measures ANOVA

- If treatment effects are not consistent across participants, the individual differences tend to disappear and the value in the denominator is not notably reduced by removing them - The variance of the scores

prediction correlation

- If two variables are known to be related in some systematic way, it is possible to use one of the variables to make accurate predictions about the other. - The process of using relationships to make predictions is called regression and is discussed in the next chapter.

Third variable problem

- it is possible that another variable creates a relation between your X and Y variables - These can sometimes create spurious correlations: Variables look correlated but have no actual causal relationship

interactions

-Any "extra" mean differences that are not explained by the main effects -between two factors occurs whenever the mean differences between individual treatment conditions, or cells, are different from what would be predicted from the overall main effects of the factors. effect between the factors (not the levels of the factors!) 2 factors = 1 interaction 3 factors= 4 interactions (1st + 2nd, 2nd + 3rd, 1st + 3rd, & all of the main effects together)

Reasons to measure dependent variable categorically

-Sometimes it is simpler than measuring numerically -Numerical measurements might violate distribution assumptions for some tests -Numerical scores might have very high variance -Undetermined or infinite scores

As a final note, we should point out that the evaluation of simple main effects is used to account for the interaction as well as the overall main effect for one factor.

-The evaluation of the simple main effects demonstrates this dependency. -Thus, the analysis of simple main effects provides a detailed evaluation of the effects of one factor including its interaction with a second factor.

Factors that Influence the Outcome of a Repeated-Measures ANOVA

-Sample size has little or no effect on measures of effect size. -The bigger the treatment effect, the more likely it is to be significant. -A treatment effect demonstrated with a large sample is more convincing than an effect obtained with a small sample. -The role of variance in the repeated-measures ANOVA, however, is somewhat more complicated.

nonparametric test

-when the assumptions of a test are violated, the test may lead to an erroneous interpretation of the data. Fortunately, there are several hypothesis-testing techniques that provide alternatives to parametric tests. -An inferential statistical analysis that is not based on a set of assumptions about the population. -Example: chi-square

Specifically, t tests are limited to situations in which there are only two treatments to compare. The major advantage of ANOVA is that it can be used to compare two or more treatments.


The independent-measures ANOVA requires the same three assumptions that were necessary for the independent-measures t hypothesis test:

1. The observations within each sample must be independent. 2. The populations from which the samples are selected must be normal. 3. The populations from which the samples are selected must have equal variances (homogeneity of variance).

Although regression equations can be used for prediction, a few cautions should be considered whenever you are interpreting the predicted values.

1. The predicted value is not perfect (unless r = +1.00 or -1.00). 2. The regression equation should not be used to make predictions for X values that fall outside the range of values covered by the original data.

When you obtain a nonzero correlation for a sample, the purpose of the hypothesis test is to decide between the following two interpretations.

1. There is no correlation in the population (ρ = 0) and the sample value is the result of sampling error. Remember, a sample is not expected to be identical to the population. There always is some error between a sample statistic and the corresponding population parameter. This is the situation specified by H0. 2. The nonzero sample correlation accurately represents a real, nonzero correlation in the population. This is the alternative stated in H1.

A typical situation in which ANOVA would be used: three separate samples are obtained to evaluate the mean differences among three populations (or treatments) with unknown means. Specifically, we must decide between two interpretations:

1. There really are no differences between the populations (or treatments). The observed differences between the sample means are caused by random, unsystematic factors (sampling error) that differentiate one sample from another. 2. The populations (or treatments) really do have different means, and these population mean differences are responsible for causing systematic differences between the sample means.

post hoc tests are done when

1. You reject H0 and 2. there are three or more treatments (k ≥ 3). Rejecting H0 indicates that at least one difference exists among the treatments. If there are only two treatments, then there is no question about which means are different and, therefore, no need for posttests. However, with three or more treatments (k ≥ 3), the problem is to determine exactly which means are significantly different.

there are four additional considerations that you should bear in mind when encountering correlations

1. correlation and causation 2. correlation and restricted range 3. outliers 4. correlation and the strength of the relationship

measures difference caused by (between)

1. systematic treatment effects 2. random, unsystematic factors (chance): differences due to random factors, including individual differences and experimental error

When the null hypothesis is true for an ANOVA, what is the expected value for the F-ratio?

1.00

there are two possible explanations for between-treatments differences

1.The differences between treatments are not caused by any treatment effect but are simply the naturally occurring, random and unsystematic differences that exist between one sample and another. That is, the differences are the result of sampling error. 2. The differences between treatments have been caused by the treatment effects. For example, if using a telephone really does interfere with driving performance, then scores in the telephone conditions should be systematically lower than scores in the no-phone condition.

A research report concludes that there are significant differences among treatments, with "F(2,27) = 8.62, p < .01, η2 = 0.28." If the same number of participants was used in all of the treatment conditions, then how many individuals were in each treatment?

10. dfbetween = k − 1 = 2, so k = 3. dfwithin = N − k = 27; with three equal groups, 27 = 3n − 3, so 3n = 30 and n = 10.

The results of a repeated-measures ANOVA are reported as follows: F(2, 24) = 1.12, p > .05. How many individuals participated in the study?

13

A researcher uses a repeated-measures ANOVA to test for mean differences among three treatment conditions using a sample of n = 10 participants. What are the df values for the F-ratio from this analysis?

2, 18

1. The formula for chi-square involves adding squared values, so you can never obtain a negative value. Thus, all chi-square values are zero or larger.

2. When H0 is true, you expect the data (fo values) to be close to the hypothesis (fe values). Thus, we expect chi-square values to be small when H0 is true.

A two-factor study has 2 levels of Factor A and 3 levels of Factor B. Because the ANOVA produces a significant interaction, the researcher decides to evaluate the simple main effect of Factor A for each level of Factor B. How many F-ratios will this require?

3

An analysis of variance is used to evaluate the mean differences for a research study comparing four treatment conditions with a separate sample of n = 5 in each treatment. The analysis produces SSwithin treatments = 32, SSbetween treatments = 40, and SStotal = 72. For this analysis, what is MSwithin treatments?

32/16 = 2 (MSwithin = SSwithin/dfwithin, with dfwithin = N − k = 20 − 4 = 16)

An analysis of variance produces SSbetween treatments = 40 and MSbetween treatments = 10. In this analysis, how many treatment conditions are being compared?

5. MSbetween = SSbetween/dfbetween, so 10 = 40/dfbetween and dfbetween = 4. Because dfbetween = k − 1, there are k = 5 treatment conditions.

The following table shows the results of an analysis of variance comparing three treatment conditions with a sample of n = 7 participants in each treatment. Note that several values are missing in the table. What is the missing value for SStotal?

56. With k = 3 and n = 7, N = 21 and dfwithin = N − k = 18. Given MSwithin = 2: SSwithin = MSwithin × dfwithin = 2 × 18 = 36. Given SSbetween = 20: SStotal = SSbetween + SSwithin = 20 + 36 = 56.

How many separate samples would be needed for a two-factor, independent-measures research study with 2 levels of factor A and 3 levels of factor B?

6

for a repeated-measures ANOVA, the variability from the individual differences is removed before computing η2. As a result, η2 is computed as

η² = SSbetween treatments / (SSbetween treatments + SSerror) -the denominator consists of the variability that is explained by the treatment differences plus the other unexplained variability. -This means that η² is the proportion of the variability in the data (excluding the individual differences) that is accounted for by the differences between treatments.
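
As a minimal sketch (assuming the SS values have already been computed from a repeated-measures analysis; the function name and values are illustrative):

```python
# Partial eta squared for a repeated-measures ANOVA; individual
# differences are already removed from ss_error, so the denominator is
# treatment variability plus the remaining unexplained variability.
def repeated_measures_eta_squared(ss_between_treatments, ss_error):
    return ss_between_treatments / (ss_between_treatments + ss_error)

print(repeated_measures_eta_squared(40.0, 10.0))  # hypothetical values -> 0.8
```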

parametric

A category of statistical techniques commonly used to analyze interval (continuous) data such as height, weight, or temperature. Parametric techniques are generally more powerful than non-parametric procedures. -require a numerical score for each individual in the sample. The scores then are added, squared, averaged, and otherwise manipulated using basic arithmetic. In terms of measurement scales, parametric tests require data from an interval or a ratio scale

correlation and the strength of the relationship

A correlation measures the degree of relationship between two variables on a scale from 0 to 1.00 (in absolute value). - Although this number provides a measure of the degree of relationship, many researchers prefer to square the correlation and use the resulting value to measure the strength of the relationship.

A sample correlation near zero supports the conclusion that the population correlation is also zero.

A sample correlation that is substantially different from zero supports the conclusion that there is a real, nonzero correlation in the population.

ANOVA summary table

A table that shows the source of variability (between treatments, within treatments, and total variability), SS, df, MS, and F.

For a repeated-measures ANOVA, Tukey's HSD and the Scheffé test can be used in the exact same manner as was done for the independent-measures

ANOVA, provided that you substitute MSerror in place of MSwithin treatments in the formulas and use dferror in place of dfwithin treatments when locating the critical value in a statistical table. (k,dferror)

A repeated-measures study is economical in that the research requires relatively few participants.

Also, a repeated-measures design eliminates or minimizes most of the problems associated with individual differences.

Two variables are independent when there is no consistent, predictable relationship between them. In this case, the frequency distribution for one variable is not related to (or dependent on) the categories of the second variable.

As a result, when two variables are independent, the frequency distribution for one variable will have the same shape (same proportions) for all categories of the second variable.

A perfect correlation always is identified by a correlation of 1.00 and indicates a perfectly consistent relationship. For a correlation of 1.00 (or -1.00), each change in X is accompanied by a perfectly predictable change in Y.

At the other extreme, a correlation of 0 indicates no consistency at all. For a correlation of 0, the data points are scattered randomly with no clear trend. Intermediate values between 0 and 1 indicate the degree of consistency.

In general, the variance for a large sample (large df) provides a more accurate estimate of the population variance.

Because the precision of the MS values depends on df, the shape of the F distribution also depends on the df values for the numerator and denominator of the F-ratio

A repeated-measures design also allows you to remove individual differences from the variance in the denominator of the F-ratio.

Because the same individuals are measured in every treatment condition, it is possible to measure the size of the individual differences.

Conceptually, the standard error of estimate is very much like a standard deviation:

Both provide a measure of standard distance. Also, you will see that the calculation of the standard error of estimate is very similar to the calculation of standard deviation.

In the scatter plot, the values for the X variable are listed on the horizontal axis and the Y values are listed on the vertical axis.

Each individual is represented by a single point in the graph so that the horizontal position corresponds to the individual's X value and the vertical position corresponds to the Y value. The value of a scatter plot is that it allows you to see any patterns or trends that exist in the data.

For the independent-measures ANOVA, the F-ratio has the following structure:

F = (systematic treatment effects + random, unsystematic differences) / (random, unsystematic differences)

the goal of the analysis is to determine whether the data provide evidence for a treatment effect. If there is no treatment effect, the numerator and denominator are both measuring the same random, unsystematic variance and the F-ratio should produce a value near 1.00. On the other hand, the existence of a treatment effect should make the numerator substantially larger than the denominator and result in a large value for the F-ratio. For the repeated-measures design, the individual differences are eliminated or subtracted out, and the resulting F-ratio is structured as follows:

F = [treatment effects + error (excluding individual differences)] / [error (excluding individual differences)]

The final calculation for ANOVA is the F-ratio, which is composed of two variances:

F = variance between treatments / variance within treatments
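
For illustration, a minimal Python sketch (using scipy, my choice here, with hypothetical scores) computes this F-ratio directly:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for three treatment conditions, n = 5 each.
t1 = np.array([3, 5, 4, 6, 2])
t2 = np.array([7, 8, 6, 9, 7])
t3 = np.array([4, 5, 6, 5, 4])

# f_oneway computes F = MS(between treatments) / MS(within treatments).
F, p = stats.f_oneway(t1, t2, t3)
print(f"F(2, 12) = {F:.2f}, p = {p:.4f}")  # df = k-1 = 2 and N-k = 12
```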

A repeated-measures ANOVA produced an F-ratio of F = 9.00 with df = 1, 14. If the same data were analyzed with a repeated-measures t test, what value would be obtained for the t statistic?

t = 3, because F = t²: t = √F = √9 = 3

finally, all three F-ratios for the two-factor ANOVA

FA = MSA / MSwithin treatments, FB = MSB / MSwithin treatments, FAxB = MSAxB / MSwithin treatments
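
A minimal sketch of how these three F-ratios might be obtained in practice, assuming the statsmodels library (not part of the course materials) and a small hypothetical data set:

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical 2x2 design with n = 3 scores per cell.
data = pd.DataFrame({
    "A": ["a1"] * 6 + ["a2"] * 6,
    "B": (["b1"] * 3 + ["b2"] * 3) * 2,
    "score": [3, 4, 5, 6, 7, 8, 2, 3, 4, 9, 10, 11],
})

# The ANOVA table reports an F-ratio for each main effect and for the
# A x B interaction, each using MS(within treatments) as the denominator.
model = ols("score ~ C(A) * C(B)", data=data).fit()
print(anova_lm(model))
```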

The chi-square test for independence uses the same basic logic that was used for the goodness-of-fit test.

First, a sample is selected, and each individual is classified or categorized. Because the test for independence considers two variables, every individual is classified on both variables, and the resulting frequency distribution is presented as a two-dimensional matrix -the frequencies in the sample distribution are called observed frequencies and are identified by the symbol fo. The next step is to find the expected frequencies, or fe values, for this chi-square test. As before, the expected frequencies define an ideal hypothetical distribution that is in perfect agreement with the null hypothesis.

The letter k is used to identify the number of treatment conditions—that is, the number of levels of the factor.

For an independent-measures study, k also specifies the number of separate samples.

Occasionally, you have a choice between using a parametric and a nonparametric test. Changing to a nonparametric test usually involves transforming the data from numerical scores to nonnumerical categories.

For example, you could start with numerical scores measuring self-esteem and create three categories consisting of high, medium, and low self-esteem

The null hypothesis for this F-ratio simply states that there is no interaction:

H0: There is no interaction between factors A and B. The mean differences between treatment conditions are explained by the main effects of the two factors. H1: There is an interaction between factors. The mean differences between treatment conditions are not what would be predicted from the overall main effects of the two factors.

the purpose for the test is to determine whether the sample correlation represents a real relationship or is simply the result of sampling error.

H0: the slope of the regression equation (b or beta) is zero

The evaluation of main effects accounts for two of the three hypothesis tests in a two-factor ANOVA. We state hypotheses concerning the main effect of factor A and the main effect of factor B and then calculate two separate F-ratios to evaluate the hypotheses.

H0: μA1 = μA2, H1: μA1 ≠ μA2 -H0: μB1 = μB2, H1: μB1 ≠ μB2

The process of testing the significance of mean differences within one column (or one row) of a two-factor design is called testing simple main effects.

H0: μpaper = μscreen (an example null hypothesis for a simple main effect) -F = MSbetween treatments for the two treatments in row 1 / MSwithin treatments from the original ANOVA -SSbetween treatments = Σ(T²/n) − G²/N, with df = 1 since the SS is based on only two treatments

null hypothesis for one factor ANOVA

H0:μ1 =μ2 =μ3 - in other words the null hypothesis states that the "independent variable" has no effects on "dependent variable"

The hypotheses for the repeated-measures ANOVA are exactly the same as those for the independent-measures ANOVA

H0:μ1 =μ2 =μ3 =... -According to the null hypothesis, any differences that may exist among the sample means are not caused by systematic treatment effects but rather are the result of random and unsystematic factors. -The alternative hypothesis states that there are mean differences among the treatment conditions. Rather than specifying exactly which treatments are different, we use a generic version of H1, which simply states that differences exist: H1: At least one treatment mean (μ) is different from another.

alternative hypothesis for one factor ANOVA

H1: There is at least one mean difference among the populations. - states that the treatment conditions are not all the same; there is a real treatment effect

Whenever you obtain a nonzero value for a sample correlation, you will also obtain real, numerical values for the regression equation.

However, if there is no real relationship in the population, both the sample correlation and the regression equation are meaningless—they are simply the result of sampling error and should not be viewed as an indication of any relationship between X and Y.

When H0 is true, the numerator and denominator of the F-ratio are measuring the same variance. In this case, the two sample variances should be about the same size, so the ratio should be near 1

In other words, the distribution of F-ratios should pile up around 1.00

we can define the best-fitting line as the one that has the smallest total squared error. For obvious reasons, the resulting line is commonly called the least-squared-error solution.

In symbols, we are looking for a linear equation of the form Yˆ = bX + a

When the treatment does have an effect, causing systematic differences between samples, then the combination of systematic and random differences in the numerator should be larger than the random differences alone in the denominator.

In this case, the numerator of the F-ratio should be noticeably larger than the denominator, and we should obtain an F-ratio noticeably larger than 1.00. Thus, a large F-ratio is evidence for the existence of systematic treatment effects; that is, there are significant differences between treatments.

The chi-square statistic may also be used to test whether there is a relationship between two variables.

In this situation, each individual in the sample is measured or classified on two separate variables.

A negative value for a correlation indicates _____.

Increases in X tend to be accompanied by decreases in Y

The correlation for a set of X and Y scores is r = 0.60. The scores are separated into two groups, with one group consisting of individuals with X values that are equal to or above the median and the other group consisting of individuals with X values that are below the median. If the correlation is computed for the group with X values below the median, how will the correlation compare with the correlation for the full set of scores?

It is impossible to predict how the correlation for the smaller group will be related to the correlation for the entire group.

F-ratio

MSbetween/MSwithin

stage 2 of the final calculation of the repeated-measures F-ratio

MSerror = SSerror / dferror

The process of testing the significance of a regression equation is called analysis of regression, and it is very similar to ANOVA.

MSregression = SSregression/dfregression, with dfregression = 1. MSresidual = SSresidual/dfresidual, with dfresidual = n − 2. F = MSregression/MSresidual
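
A minimal sketch of the full analysis of regression with hypothetical paired scores, using the SSregression = r²SSY and SSresidual = (1 − r²)SSY identities from this chapter:

```python
import numpy as np

# Hypothetical paired scores.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.0, 3.0, 5.0, 4.0, 6.0, 7.0])
n = len(X)

SP = np.sum((X - X.mean()) * (Y - Y.mean()))
SSX = np.sum((X - X.mean()) ** 2)
SSY = np.sum((Y - Y.mean()) ** 2)
r = SP / np.sqrt(SSX * SSY)

SS_regression = r**2 * SSY            # predicted variability
SS_residual = (1 - r**2) * SSY        # unpredicted variability

MS_regression = SS_regression / 1     # df(regression) = 1
MS_residual = SS_residual / (n - 2)   # df(residual) = n - 2
F = MS_regression / MS_residual
print(f"F(1, {n - 2}) = {F:.2f}")
```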

The within-treatments variance is called a mean square, or MS, and is computed as follows:

MSwithin treatments = SSwithin treatments/ dfwithin treatments

there is no interaction; that is, there are no extra mean differences that are not explained by the main effects.

Mean differences that are not explained by the main effects are an indication of an interaction between the two factors.

All of the parametric tests that we have examined so far require numerical scores. For nonparametric tests, on the other hand, the participants are usually just classified into categories such as Democrat and Republican, or High, Medium, and Low IQ.

Note that these classifications involve measurement on nominal or ordinal scales, and they do not produce numerical values that can be used to calculate means and variances

The first step is to determine the total variability for the entire set of data. To compute the total variability, we combine all the scores from all the separate samples to obtain one general measure of variability for the complete experiment

Once we have measured the total variability, we can begin to break it apart into separate components. The word analysis means dividing into smaller parts. Because we are going to analyze variability, the process is called analysis of variance. This analysis process divides the total variability into two basic components.

you should always expect some error between a sample correlation and the population correlation it represents.

One implication of this fact is that even when there is no correlation in the population (ρ = 0), you are still likely to obtain a nonzero value for the sample correlation. This is particularly true for small samples.

In a two-factor ANOVA, which of the following is not computed directly but rather is found by subtraction?

SSAxB

Between-Treatments Sum of Squares, SSbetween treatments

SSbetween = SStotal - SSwithin

MSbetween

SSbetween/dfbetween

For the repeated-measures ANOVA, SSerror is found by _____.

SSwithin - SSbetween subjects

The MSwithin used in the denominator of the Fcolumns calculation is calculated using:

SSwithin/dfwithin

The positive sign for the correlation indicates that the points are clustered around a line that slopes up to the right.

Second, the high value for the correlation (near 1.00) indicates that the points are very tightly clustered close to the line. Thus, the value of the correlation describes the relationship that exists in the data.

the calculation of the Y-intercept ensures that the regression line passes through the point defined by the mean for X and the mean for Y. That is, the point identified by the coordinates MX, MY will always be on the line.

Second, the sign of the correlation (+ or -) is the same as the sign of the slope of the regression line. Specifically, if the correlation is positive, then the slope is also positive and the regression line slopes up to the right

whenever the correlation between two variables is significant, you can conclude that the regression equation is also significant.

Similarly, if a correlation is not significant, the regression equation is also not significant

However, disadvantages also exist. These take the form of order effects, such as fatigue, that can make the interpretation of the data difficult.

Solution: counterbalance the order of the treatment conditions.

standard error of estimate = a measure of how accurately the regression equation predicts the Y values

√(SSresidual/df) = √(Σ(Y − Ŷ)²/(n − 2))
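
A minimal sketch of this computation with hypothetical scores (b = 1.0 and a = 1.4 are the least-squares values for these data):

```python
import numpy as np

# Hypothetical scores and their fitted regression line Y-hat = bX + a.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 4.0, 5.0, 4.0, 7.0])
b, a = 1.0, 1.4

Y_hat = b * X + a                                  # predicted Y values
SS_residual = np.sum((Y - Y_hat) ** 2)             # Σ(Y - Y-hat)²
se_estimate = np.sqrt(SS_residual / (len(Y) - 2))  # df = n - 2
print(se_estimate)
```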

distribution-free tests

Tests that make no assumptions about the theoretical distribution of variables in the population from which a sample is drawn (i.e. the sampling distribution).

reliability correlation

That is, a reliable measurement procedure will produce the same (or nearly the same) scores when the same individuals are measured twice under the same conditions. -When reliability is high, the correlation between two measurements should be strong and positive.

Analysis of Sum of Squares (SS)

The ANOVA requires that we first compute a total sum of squares and then partition this value into two components: between treatments and within treatments.
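
A minimal Python sketch of this partition with hypothetical scores, verifying that SStotal = SSbetween + SSwithin:

```python
import numpy as np

# Hypothetical scores for three separate samples (treatment conditions).
samples = [np.array([3.0, 5, 4, 6, 2]),
           np.array([7.0, 8, 6, 9, 7]),
           np.array([4.0, 5, 6, 5, 4])]

all_scores = np.concatenate(samples)
grand_mean = all_scores.mean()

SS_total = np.sum((all_scores - grand_mean) ** 2)
SS_within = sum(np.sum((s - s.mean()) ** 2) for s in samples)
SS_between = sum(len(s) * (s.mean() - grand_mean) ** 2 for s in samples)

# The partition: total variability = between + within.
assert np.isclose(SS_total, SS_between + SS_within)
```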

Finally, if you suspect that one of the assumptions for the independent-measures ANOVA has been violated, you can still proceed by transforming the original scores to ranks and then using an alternative statistical analysis known as the Kruskal-Wallis test, which is designed specifically for ordinal data.

The Kruskal-Wallis test also can be useful if large sample variance prevents the independent-measures ANOVA from producing a significant result.

The final calculation in the analysis is the F-ratio, which is a ratio of two variances. Each variance is called a mean square, or MS, and is obtained by dividing the appropriate SS by its corresponding df value.

The MS in the numerator of the F-ratio measures the size of the differences between treatments and is calculated as MSbetween treatments= SSbetween treatments / dfbetween treatments -The denominator of the F-ratio measures how much difference is reasonable to expect if there are no systematic treatment effects and the individual differences have been removed.

Reporting the results of analysis of variance example

The analysis of variance indicates that there are significant differences among the three strategies for studying, F(2, 15) = 7.16, p < .05, η2 = 0.488.

least squares solution

The distance between this predicted value and the actual Y value in the data is determined by distance = Y − Ŷ -This distance measures the error between the line and the actual data -The result is a measure of overall squared error between the line and the data: total squared error = Σ(Y − Ŷ)²

For ANOVA, the denominator of the F-ratio is called the error term.

The error term provides a measure of the variance caused by random, unsystematic differences. When the treatment effect is zero (H0 is true), the error term measures the same sources of variance as the numerator of the F-ratio, so the value of the F-ratio is expected to be nearly equal to 1.00.

Which of the following accurately describes the two stages of a repeated-measures analysis of variance?

The first stage is identical to the independent-measures analysis and the second stage removes individual differences from the denominator of the F-ratio.

chi-square tests, it is customary to present the scale of measurement as a series of boxes, with each box corresponding to a separate category on the scale.

The frequency corresponding to each category is simply presented as a number written inside the box.

Reporting the Results of a Repeated-Measures ANOVA example

The means and variances for the three strategies are shown in Table 1. A repeated-measures analysis of variance indicated significant mean differences in the participants' test scores for the three study strategies, F(2, 10) = 19.09, p < .01, η2 = 0.79.

Although the typical chi-square distribution is positively skewed, there is one other factor that plays a role in the exact shape of the chi-square distribution—the number of categories.

The more categories you have, the more likely it is that you will obtain a large sum for the chi-square value. On average, chi-square will be larger when you are adding values from 10 categories than when you are adding values from only 3 categories.

expected frequencies are calculated, hypothetical values, so the numbers that you obtain may be decimals or fractions.

The observed frequencies, on the other hand, always represent real individuals and always are whole numbers.

Occasionally researchers will transform numerical scores into nonnumerical categories and use a nonparametric test instead of the standard parametric statistic. Which of the following is not a reason for making this transformation?

The original scores form a very large sample.

Reporting the Results for Chi-Square

example: The participants showed significant preferences among the four orientations for hanging the painting, χ2(3, n = 50) = 8.08, p < .05.

In a two-factor experiment with 2 levels of factor A and 2 levels of factor B, three of the treatment means are essentially identical and one is substantially different from the others. What result(s) would be produced by this pattern of treatment means?

The pattern would produce main effects for both A and B, and an interaction.

For ANOVA, the calculation and the concept of the percentage of variance is extremely straightforward. Specifically, we determine how much of the total SS is accounted for by the SSbetween treatments (this proportion is called η², eta squared).

The percentage of variance accounted for = SSbetween treatments / SStotal, i.e., η² = SSbetween/SStotal

If the null hypothesis is false, the F-ratio should be much greater than 1.00.

The problem now is to define precisely which values are "around 1.00" and which are "much greater than 1.00."

The data for a chi-square test are remarkably simple. There is no need to calculate a sample mean or SS; you just select a sample of n individuals and count how many are in each category.

The resulting values are called observed frequencies. The symbol for observed frequency is fo. The total sample size is n: Σfo = n.

In the first stage, the total variance is partitioned into two components: between-treatments variance and within-treatments variance. This stage is identical to the analysis that we conducted for the independent-measures ANOVA.

The second stage of the analysis is intended to remove the individual differences from the denominator of the F-ratio. In the second stage, we begin with the variance within treatments and then measure and subtract out the between-subject variance, which measures the size of the individual differences. The remaining variance, often called the residual variance, or error variance, provides a measure of how much variance is reasonable to expect after the treatment effects and individual differences have been removed.

in a two-factor ANOVA, what is the implication of a significant A × B interaction?

The significance of the interaction has no implications for the main effects.

In the general linear equation, the value of b is called the slope

The slope determines how much the Y variable changes when X is increased by one point.

regression

The statistical technique for finding the best-fitting straight line for a set of data is called regression, and the resulting straight line is called the regression line.

The sum of the scores (ΣX) for each treatment condition is identified by the capital letter T (for treatment total).

The total for a specific treatment can be identified by adding a numerical subscript to the T

What happens in the second stage of a two-factor ANOVA?

The variability between treatments is divided into the two main effects and the interaction.

In the numerator of the F-ratio, the between-treatments variance measures the actual mean differences between the treatment conditions.

The variance in the denominator is intended to measure how much difference is reasonable to expect if there are no systematic treatment effects and no systematic individual differences. the variance in the denominator is called the error variance.

F-ratio has the same basic structure as the t statistic but is based on variance instead of sample mean difference. The variance in the numerator of the F-ratio provides a single number that measures the differences among all of the sample means.

The variance in the denominator of the F-ratio, like the standard error in the denominator of the t statistic, measures the mean differences that would be expected if there is no treatment effect.

In an independent-measures ANOVA, individual differences contribute to the variance in the numerator and in the denominator of the F-ratio. For a repeated-measures ANOVA, what happens to the individual differences in the denominator of the F-ratio?

They are measured and subtracted out during the analysis

In an independent-measures ANOVA, individual differences contribute to the variance in the numerator and in the denominator of the F-ratio. For a repeated-measures ANOVA, what happens to the individual differences in the numerator of the F-ratio?

They do not exist, because the same individuals participate in all of the treatments.

For the repeated-measures ANOVA, there is an additional assumption, called homogeneity of covariance. Basically, it refers to the requirement that the relative standing of each subject be maintained in each treatment condition.

This assumption is violated if the effect of the treatment is not consistent for all of the subjects or if order effects exist for some, but not other, subjects.

when a treatment effect does exist, it contributes only to the numerator and should produce a large value for the F-ratio.

Thus, a large value for F indicates that there is a real treatment effect and therefore we should reject the null hypothesis.

the outcome for any one of the three tests is totally unrelated to the outcome for either of the other two.

Thus, it is possible for data from a two-factor study to display any possible combination of significant and/or not significant main effects and interactions.

The next step is to subtract out the individual differences to obtain the measure of error that forms the denominator of the F-ratio.

Thus, the final step in the analysis of SS is SSerror = SSwithin treatments - SSbetween subjects
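
A minimal sketch of this second stage with a hypothetical score matrix (rows are participants, columns are treatment conditions):

```python
import numpy as np

# Hypothetical repeated-measures data: n = 4 participants, k = 3 treatments.
scores = np.array([[3.0, 5, 7],
                   [2.0, 4, 9],
                   [4.0, 6, 8],
                   [3.0, 7, 9]])
n, k = scores.shape

SS_within = np.sum((scores - scores.mean(axis=0)) ** 2)
# Between-subjects SS measures the size of the individual differences.
SS_between_subjects = k * np.sum((scores.mean(axis=1) - scores.mean()) ** 2)

SS_error = SS_within - SS_between_subjects
df_error = (k - 1) * (n - 1)
print(SS_error / df_error)  # MS(error), the denominator of the F-ratio
```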

a significant mean difference simply indicates that the difference observed in the sample data is very unlikely to have occurred just by chance.

Thus, the term significant does not necessarily mean large; it simply means larger than expected by chance.

The principle of sampling error is that there is typically some discrepancy or error between the value obtained for a sample statistic and the corresponding population parameter

Thus, when there is no relationship whatsoever in the population, a correlation of ρ = 0, you are still likely to obtain a nonzero value for the sample correlation.

In the case of a two-factor study, any main effects that are observed in the data must be evaluated with a hypothesis test to determine whether they are statistically significant effects.

Unless the hypothesis test demonstrates that the main effects are significant, you must conclude that the observed mean differences are simply the result of sampling error.

The total number of scores in the entire study is specified by a capital letter N.

When all the samples are the same size (n is constant), N = kn

The removal of individual differences from the analysis becomes an advantage in situations in which very large individual differences exist among the participants being studied.

When individual differences are large, the presence of a treatment effect may be masked if an independent-measures study is performed. In this case, a repeated-measures design would be more sensitive in detecting a treatment effect because individual differences do not influence the value of the F-ratio.

When there is no treatment effect, the F-ratio is balanced because the numerator and denominator are both measuring exactly the same variance. In this case, the F-ratio should have a value near 1.00

When research results produce an F-ratio near 1.00, we conclude that there is no evidence of a treatment effect and we fail to reject the null hypothesis

The mean differences among the levels of one factor are referred to as the main effect of that factor.

When the design of the research study is represented as a matrix with one factor determining the rows and the second factor determining the columns, then the mean differences among the rows describe the main effect of one factor, and the mean differences among the columns describe the main effect for the second factor.

increasing the number of separate tests definitely increases the total, experimentwise probability of a Type I error.

Whenever you are conducting posttests, you must be concerned about the experimentwise alpha level.

Thus, an F-ratio near 1.00 indicates that the differences between treatments (numerator) are random and unsystematic, just like the differences in the denominator.

With an F-ratio near 1.00, we conclude that there is no evidence to suggest that the treatment has any effect.

In ANOVA, however, we want to compare differences among two or more sample means.

With more than two samples, the concept of "difference between sample means" becomes difficult to define or measure. -For example, if there are only two samples and they have means of M = 20 and M = 30, then there is a 10-point difference between the sample means. Suppose, however, that we add a third sample with a mean of M = 35. Now how much difference is there between the sample means? It should be clear that we have a problem. The solution to this problem is to use variance to define and measure the size of the differences among the sample means.

When there are no systematic treatment effects, the differences between treatments (numerator) are entirely caused by random, unsystematic factors. In this case, the numerator and the denominator of the F-ratio are both measuring random differences and should be roughly the same size.

With the numerator and denominator roughly equal, the F-ratio should have a value around 1.00.

The chi-square test of independence uses exactly the same chi-square formula as the test for goodness of fit

χ² = Σ (fo − fe)²/fe

a linear relationship between two variables X and Y can be expressed by the equation

Y = bX + a where a and b are fixed constants.

For a two-factor ANOVA, we compute three separate values for eta squared: one measuring how much of the variance is explained by the main effect for factor A, one for factor B, and a third for the interaction

before we compute the η² for factor A, we remove the variability that is explained by factor B and the variability explained by the interaction. The resulting equation (with the analogous form for each factor and for the interaction) is η²A = SSA / (SSA + SSwithin treatments)

in a chi-square test for goodness of fit _____.

both Σfe = n and Σfe = Σfo

The numerator of the F-ratio measures the size of the sample mean differences. How is this number obtained, especially when there are more than two sample means?

by computing the variance for the sample means

The chi-square statistic simply measures how well the data (fo) fit the hypothesis (fe). The symbol for the chi-square statistic is χ2. The formula for the chi-square statistic is

chi-square = χ² = Σ (fo − fe)²/fe -the numerator measures how much difference there is between the data (the fo values) and the hypothesis (represented by the fe values) -the denominator of the chi-square statistic is not so obvious. Why must we divide by fe before we add the category values? The answer is that the obtained discrepancy between fo and fe is viewed as relatively large or relatively small depending on the size of the expected frequency
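
A minimal goodness-of-fit sketch with hypothetical frequencies; scipy's chisquare performs the same computation as the formula above:

```python
import numpy as np
from scipy import stats

# Hypothetical study: n = 50 individuals classified into 4 categories.
f_o = np.array([18, 12, 8, 12])       # observed frequencies
f_e = np.full(4, 50 * 0.25)           # H0: equal proportions, fe = p*n

chi2 = np.sum((f_o - f_e) ** 2 / f_e)      # the formula above
chi2_check, p = stats.chisquare(f_o, f_e)  # same statistic from scipy
print(f"chi2({len(f_o) - 1}) = {chi2:.2f}, p = {p:.4f}")
```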

the squared correlation (r2) measures the gain in accuracy that is obtained from using the correlation for prediction. The squared correlation measures the proportion of variability in the data that is explained by the relationship between X and Y. It is sometimes called the

coefficient of determination. A correlation of r = 0.80 (or -0.80), for example, means that r2 = 0.64 (or 64%) of the variability in the Y scores can be predicted from the relationship with X.

Tukey's honestly significant difference (HSD) a post hoc test

a commonly used test in psychological research. Tukey's test allows you to compute a single value that determines the minimum difference between treatment means that is necessary for significance. This value, called the honestly significant difference, or HSD, is then used to compare any two treatment conditions. If the mean difference exceeds Tukey's HSD, you conclude that there is a significant difference between the treatments. Otherwise, you cannot conclude that the treatments are significantly different. The q value used to compute HSD is located in a statistical table using (k, dfwithin).
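
In practice, the pairwise comparisons can be run with the statsmodels library (my choice here, not from the course materials; the scores and group labels are hypothetical):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical scores with a treatment label for each individual.
scores = np.array([3, 5, 4, 6, 2, 7, 8, 6, 9, 7, 4, 5, 6, 5, 4])
groups = np.repeat(["t1", "t2", "t3"], 5)

# After a significant overall F-ratio, compare treatments two at a time.
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```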

Correlation and Causation

correlation does not imply causation

for the goodness-of-fit test, the degrees of freedom are determined by

df = C − 1, where C is the number of categories

For a repeated-measures ANOVA, the df values for the F-ratio are reported as

df = dfbetween treatments, dferror

A large discrepancy produces a large value for chi-square and indicates that H0 should be rejected. To determine whether a particular chi-square statistic is significantly large, you must first determine degrees of freedom (df) for the statistic and then consult the chi-square distribution in the appendix. For the chi-square test of independence, degrees of freedom are based on the number of cells for which you can freely choose expected frequencies.

df = (R − 1)(C − 1)
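
A minimal sketch of a test for independence with a hypothetical 2 × 3 frequency matrix; scipy's chi2_contingency returns the statistic, the p-value, the df computed as (R − 1)(C − 1), and the expected frequencies:

```python
import numpy as np
from scipy import stats

# Hypothetical observed frequencies: 2 rows x 3 columns.
observed = np.array([[10, 20, 30],
                     [20, 20, 20]])

chi2, p, dof, expected = stats.chi2_contingency(observed)
print(dof)  # (2 - 1)(3 - 1) = 2
```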

A two-factor, independent-measures research study is evaluated using an analysis of variance. The F-ratio for factor A has df = 1, 40 and the F-ratio for factor B has df = 3, 40. Based on this information, what are the df values for the F-ratio for the A × B interaction?

df= 3, 40

Between-Treatments Degrees of Freedom, dfbetween.

dfbetween = k - 1 -between treatments refers to differences from one treatment to another. With three treatments, for example, we are comparing three different means (or totals) and have df = 3 - 1 = 2.

Total Degrees of Freedom, dftotal. To find the df associated with SStotal, you must first recall that this SS value measures variability for the entire set of N scores. Therefore, the df value is

dftotal = N - 1

Within-Treatments Degrees of Freedom, dfwithin.

dfwithin = Σ(n − 1) = Σdf in each treatment, or dfwithin = N − k -within treatments refers to differences that exist inside the individual treatment conditions. Thus, we compute SS and df inside each of the separate treatments.

Reporting Correlations

example: There was a strong positive correlation between age and happiness (r=.61), suggesting that as age increases, so does happiness. A correlation for the data revealed a significant relationship between amount of education and annual income, r = +.65, n = 30, p < .01, two tails.

Reporting the Results of a Two-Factor ANOVA

example: The means and standard deviations for all treatment conditions are shown in Table 1. The two-factor analysis of variance showed no significant main effect for time control, F(1, 16) = 3.75, p > .05, η2 = 0.190, or for presentation mode, F(1, 16) = 3.75, p > .05, η2 = 0.190. However, the interaction between factors was significant, F(1, 16) = 10.41, p < .01, η2 = 0.394.

In the context of ANOVA, an independent variable or a quasi-independent variable is called a

factor.

A Simple Formula for Determining Expected Frequencies

fe = (fc)(fr)/n, where fc is the frequency total for the column and fr is the frequency total for the row
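
A minimal sketch applying this formula to every cell of a hypothetical frequency matrix at once:

```python
import numpy as np

# Hypothetical observed frequencies.
observed = np.array([[10, 20, 30],
                     [20, 20, 20]])
f_r = observed.sum(axis=1)  # row totals
f_c = observed.sum(axis=0)  # column totals
n = observed.sum()

f_e = np.outer(f_r, f_c) / n  # fe = (fc)(fr)/n for each cell
print(f_e)
```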

expected frequency

for each category is the frequency value that is predicted from the proportions in the null hypothesis and the sample size (n). The expected frequencies define an ideal, hypothetical sample distribution that would be obtained if the sample proportions were in perfect agreement with the proportions specified in the null hypothesis. fe = pn, where p is the proportion stated in the null hypothesis and n is the sample size.

standard error of estimate

gives a measure of the standard distance between the predicted Y values on the regression line and the actual Y values in the data.

the differences that exist within a treatment represent random and unsystematic differences that occur when there are no treatment effects causing the scores to be different. Thus, the within-treatments variance provides a measure of

how big the differences are when H0 is true.

the denominator of the F-ratio is called the residual variance, or the error variance, and measures

how much variance is expected if there are no systematic treatment effects and no individual differences contributing to the variability of the scores.

If an analysis of variance is used for the following data, what would be the effect of changing the value of SS2 to 100? (SS2 was 70 before.)

increase SSwithin and decrease the size of the F-ratio

when a researcher manipulates a variable to create the treatment conditions in an experiment, the variable is called an

independent variable

Individual Differences in the F-Ratio for repeated measures designs

individual differences are a part of the independent-measures F-ratio but are eliminated from the repeated-measures F-ratio. -In a repeated-measures study exactly the same individuals participate in all of the treatment conditions. Therefore, if there are any mean differences between treatments, they cannot be explained by individual differences. Thus, individual differences are automatically eliminated from the numerator of the repeated-measures F-ratio.

The null hypothesis for correlation

is "No. There is no correlation in the population." or "The population correlation is zero." H0: ρ = 0 (There is no population correlation.) H1: ρ ≠ 0 (There is a real correlation.) H0: ρ ≤ 0 (The population correlation is not positive.) H1: ρ > 0 (The population correlation is positive.)

envelope correlation

is a line that encloses the data and often helps you to see the overall trend. -As a rule of thumb, when the envelope is shaped roughly like a football, the correlation is around 0.7. Envelopes that are fatter than a football indicate correlations closer to 0, and narrower shapes indicate correlations closer to 1.00.

correlation

is a statistical technique that is used to measure and describe the relationship between two variables. Usually the two variables are simply observed as they exist naturally in the environment—there is no attempt to control or manipulate the variables.

observed frequency

is the number of individuals from the sample who are classified in a particular category. Each individual is counted in one and only one category.

the logic of two-factor ANOVA

the two-factor ANOVA consists of three separate hypothesis tests that follow the same general logic as the single-factor ANOVA: an F-ratio for the main effect of factor A, an F-ratio for the main effect of factor B, and an F-ratio for the A × B interaction, each comparing between-treatments variance with the same within-treatments error variance.

The number of scores in each treatment is identified by a lowercase letter

n

The data for a chi-square test for goodness of fit are called _________.

observed frequencies

The chi-square distribution is ______.

positively skewed with all values greater than or equal to zero

The squared correlation, r2, is called the coefficient of determination because it determines what proportion of the variability in Y is predicted by the relationship with X. Because r2 measures the predicted portion of the variability in the Y scores, we can use the expression (1 - r2) to measure the unpredicted portion.

predicted variability = SSregression = r^2(SSy)
unpredicted variability = SSresidual = (1 - r^2)(SSy)
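
A brief numerical check of this partition, using hypothetical scores; the assert confirms that (1 - r^2)SSy equals the sum of squared residuals from the actual regression line:

    import numpy as np

    x = np.array([1, 2, 3, 4, 5], dtype=float)   # hypothetical X scores
    y = np.array([2, 1, 4, 3, 5], dtype=float)   # hypothetical Y scores
    r = np.corrcoef(x, y)[0, 1]                  # r = 0.8 for these data
    ss_y = np.sum((y - y.mean()) ** 2)
    ss_regression = r**2 * ss_y                  # predicted variability
    ss_residual = (1 - r**2) * ss_y              # unpredicted variability
    b, a = np.polyfit(x, y, 1)                   # check against actual residuals
    assert np.isclose(ss_residual, np.sum((y - (b * x + a)) ** 2))
    print(round(ss_regression, 1), round(ss_residual, 1))  # 6.4 and 3.6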

Where and why correlations are used

prediction, validity, reliability, theory verification

Multiplying either the X or the Y values by a negative number does not change the numerical value of the correlation; however, it

produces a mirror image of the pattern and, therefore, changes the sign of the correlation.

within-treatment variance

provides a measure of the variability inside each treatment condition. -Inside each treatment condition, we have a set of individuals who all receive exactly the same treatment; that is, the researcher does not do anything that would cause these individuals to have different scores.

when a researcher uses a non- manipulated variable to designate groups, the variable is called a

quasi-independent variable

Pearson correlation formula

r = SP/√[(SSx)(SSy)]
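
A minimal sketch of this formula computed from deviation scores (hypothetical data); numpy's built-in correlation serves as a check:

    import numpy as np

    x = np.array([1, 2, 3, 4, 5], dtype=float)    # hypothetical X scores
    y = np.array([3, 1, 5, 4, 7], dtype=float)    # hypothetical Y scores
    sp = np.sum((x - x.mean()) * (y - y.mean()))  # sum of products of deviations
    ss_x = np.sum((x - x.mean()) ** 2)
    ss_y = np.sum((y - y.mean()) ** 2)
    r = sp / np.sqrt(ss_x * ss_y)                 # r = SP / sqrt(SSx * SSy)
    assert np.isclose(r, np.corrcoef(x, y)[0, 1]) # matches numpy's Pearson r
    print(round(r, 2))                            # 0.78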

Under what circumstances are posttests necessary?

when you reject the null hypothesis and there are k > 2 treatments

MS(variance)

s^2 = SS/df

Each of the two variances in the F-ratio is calculated using the basic formula for sample variance.

sample variance = s^2 = SS/df

Which of the following is the correct description for a research study comparing problem-solving scores obtained for 3 different age groups of children?

single factor design

the null hypothesis for the test for independence

states that the two variables being measured are independent; that is, for each individual, the value obtained for one variable is not related to (or influenced by) the value for the second variable. This general hypothesis can be expressed in two different conceptual forms, each viewing the data and the test from slightly different perspectives.

SS

Σ(X - M)^2

independent-measures designs;

that is, studies that use a separate group of participants for each treatment condition.

The no-preference hypothesis is used in situations in which a researcher wants to determine whether there are any preferences among the categories, or whether the proportions differ from one category to another.

the alternative hypothesis (H1) simply states that the population distribution has a different shape from that specified in H0. If the null hypothesis states that the population is equally divided among three categories, the alternative hypothesis says that the population is not divided equally.

Pearson correlation measures

the degree and the direction of the linear relationship between two variables -The Pearson correlation for a sample is identified by the letter r. The corresponding correlation for the entire population is identified by the Greek letter rho (ρ).

Because the denominator of the F-ratio measures only random and unsystematic variability, it is called

the error term.

the levels of the factor

the individual groups or treatment conditions that are used to make up a factor. For example, a study that examined performance under three different telephone conditions would have three levels of the factor.

For an ANOVA, how does an increase in the sample sizes influence the likelihood of rejecting the null hypothesis and measures of effect size?

the likelihood of rejecting H0 will increase but there will be little or no effect on measures of effect size.

the no-preference null hypothesis will always produce equal fe values for all categories because the proportions (p) are the same for all categories

the no-difference null hypothesis typically will not produce equal values for the expected frequencies because the hypothesized proportions typically vary from one category to another

For a hypothesis test for the Pearson correlation, what is stated by the null hypothesis?

the population correlation is zero

When an experiment involves several different hypothesis tests, the experimentwise alpha level is

the total probability of a Type I error that is accumulated from all of the individual tests in the experiment. Typically, the experimentwise alpha level is substantially greater than the value of alpha used for any one of the individual tests. -As the number of separate tests increases, so does the experimentwise alpha level
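
To see how quickly the risk accumulates, here is a sketch that assumes the individual tests are independent, in which case the experimentwise alpha is 1 - (1 - alpha)^c for c tests (this formula is the sketch's assumption, not part of the original card):

    # growth of the experimentwise alpha level with the number of tests,
    # assuming c independent tests each run at a testwise alpha of .05
    alpha = 0.05
    for c in (1, 3, 6, 10):
        experimentwise = 1 - (1 - alpha) ** c
        print(c, round(experimentwise, 3))  # 3 tests -> 0.143, 10 tests -> 0.401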

The numerator of the F-ratio always includes the same unsystematic variability as in the error term, but it also includes any systematic differences caused by

the treatment effect.

positive correlation

the two variables tend to change in the same direction: as the value of the X variable increases from one individual to another, the Y variable also tends to increase; when the X variable decreases, the Y variable also decreases.

negative correlation

the two variables tend to go in opposite directions. As the X variable increases, the Y variable decreases. That is, it is an inverse relationship.

In an analysis of variance, the primary effect of large mean differences from one sample to another is to increase the value for

the variance between treatments

theory verification

theories make specific predictions about the relationships between variables

The two-factor ANOVA consists of three hypothesis tests, each evaluating specific mean differences: the A effect, the B effect, and the A × B interaction.

these are three separate tests, but you should also realize that the three tests are independent.

equivalence of H0 version 1 and H0 version 2

these versions are equivalent -stating that there is no relationship between two variables (version 1 of H0) is equivalent to stating that the distributions have equal proportions (version 2 of H0).

which of the following accurately describes the purpose of posttests?

they determine which treatments are different

the parametric test is preferred because it is more likely

to detect a real difference or a real relationship

between-treatments variance

to provide a measure of the overall differences between treatment conditions. -Notice that the variance between treatments is really measuring the differences between sample means. -we are measuring differences that could be caused by a systematic treatment effect or could simply be random and unsystematic mean differences caused by sampling error.

chi-square test for goodness of fit

uses sample data to test hypotheses about the shape or proportions of a population distribution. The test determines how well the obtained sample proportions fit the population proportions specified by the null hypothesis.

chi-square test for independence

uses the frequency data from a sample to evaluate the relationship between two variables in the population. Each individual in the sample is classified on both of the two variables, creating a two-dimensional frequency distribution matrix. The frequency distribution for the sample is then used to test hypotheses about the corresponding frequency distribution in the population.
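
A minimal sketch of the full test in Python; the 2 × 2 frequency matrix is hypothetical, and scipy's chi2_contingency returns the statistic, the p-value, df = (R - 1)(C - 1), and the expected-frequency matrix:

    import numpy as np
    from scipy.stats import chi2_contingency

    observed = np.array([[20, 30],   # hypothetical two-dimensional
                         [40, 10]])  # frequency distribution matrix
    stat, pval, df, expected = chi2_contingency(observed, correction=False)
    print(round(stat, 2), df)        # 16.67 with df = (2 - 1)(2 - 1) = 1
    print(expected)                  # fe = (fc)(fr)/n for every cell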

f-distribution table

the df values for the numerator of the F-ratio are printed across the top of the table. The df values for the denominator of F are printed in a column on the left-hand side.

In analysis of variance, an MS value is a measure of ______.

variance

A correlation of zero means that the slope is also zero and the regression equation produces a horizontal line that passes through the data at a level equal to the mean for the Y values.

Every individual with a positive deviation for X is predicted to have a positive deviation for Y, and everyone with a negative deviation for X is predicted to have a negative deviation for Y.

In ANOVA, it is customary to use the term mean square, or simply MS, in place of the term variance.

we now will use MS to stand for the mean of the squared deviations. For the final F-ratio we will need an MS (variance) between treatments for the numerator and an MS (variance) within treatments for the denominator.
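
Putting the MS values together, here is a compact sketch of the full F-ratio for three hypothetical treatment conditions; scipy's f_oneway serves as a check:

    import numpy as np
    from scipy.stats import f_oneway

    groups = [np.array([1., 2., 3., 4.]),   # hypothetical scores for
              np.array([3., 4., 5., 6.]),   # three treatment conditions
              np.array([5., 6., 7., 8.])]
    scores = np.concatenate(groups)
    N, k = scores.size, len(groups)
    ss_between = sum(g.size * (g.mean() - scores.mean()) ** 2 for g in groups)
    ss_within = sum(np.sum((g - g.mean()) ** 2) for g in groups)
    ms_between = ss_between / (k - 1)  # MS = SS/df for the numerator
    ms_within = ss_within / (N - k)    # MS = SS/df for the denominator
    F = ms_between / ms_within
    assert np.isclose(F, f_oneway(*groups).statistic)
    print(round(F, 2))                 # 9.6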

validity correlation

a test is valid if it measures what it claims to measure; validity can be demonstrated by showing that scores on the test are related to (correlate with) other measures of the same variable

When there is a perfect linear relationship, every change in the X variable is accompanied by a corresponding change in the Y variable; every time the value of X increases, there is a perfectly predictable change in the value of Y.

when there is no linear relationship, a change in the X variable does not correspond to any predictable change in the Y variable. In this case, there is no covariability, and the resulting correlation is zero

Post hoc tests (or posttests) are additional hypothesis tests that are done after an ANOVA to determine exactly

which mean differences are significant and which are not.

a large value for the test statistic provides evidence that the sample mean differences (numerator) are larger than

would be expected if there were no treatment effects (denominator).

correlation and restricted range

you should not generalize any correlation beyond the range of data represented in the sample. For a correlation to provide an accurate description for the general population, there should be a wide range of X and Y values in the data. -A restricted range of scores can produce a correlation near zero, even when the variables are related across their full range.

the sampling distribution of f

• A family of distributions, each with a pair of degrees of freedom
• F-values are always positive, because variance cannot be negative
• If H0 is true, then F ≈ 1, so the peak of the distribution appears around 1
• The distribution is positively (right) skewed, not symmetrical, with the mean of the curve around 1
• The shape of the distribution changes with df: large df produces less spread to the right, which in practical terms leads to smaller critical values of F, closer to 1.0 (see the sketch below)
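
A quick sketch of the last point using scipy's F distribution; alpha = .05 and the df pairs are illustrative:

    from scipy.stats import f

    # critical values of F (alpha = .05, numerator df = 2) shrink toward 1.0
    for df_denom in (5, 20, 100, 1000):
        print(df_denom, round(f.ppf(0.95, 2, df_denom), 2))
    # 5 -> 5.79, 20 -> 3.49, 100 -> 3.09, 1000 -> 3.0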

