Stats Exam 3


The formula for dfwithin simply...

adds up the number of scores in each treatment (the n values) and subtracts 1 for each treatment. If these two stages (adding all the n values, then subtracting 1 for each of the k treatments) are done separately, you obtain: dfwithin = N - k
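For example, with k = 3 treatments and n = 10 scores in each treatment (so N = 30), dfwithin = Σ(n − 1) = 9 + 9 + 9 = 27, which is the same as N − k = 30 − 3 = 27.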

Directional Hypotheses and One-Tailed Tests

•In many repeated-measures and matched-subjects studies, the researcher has a specific prediction concerning the direction of the treatment effect. -This kind of directional prediction can be incorporated into the statement of the hypotheses, resulting in a directional, or one-tailed, hypothesis test.

Calculation of the variances (MS values) and the F-ratio:

MSbetween treatments = SSbetween treatments / dfbetween treatments
MSerror = SSerror / dferror
F = MSbetween treatments / MSerror
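As a quick illustration, the same calculation in Python; the SS and df numbers below are made up, not from any example in these notes:

```python
# Illustrative values only: SS and df are assumed to have been found
# in the two-stage repeated-measures analysis described in these notes.
ss_between_treatments = 48.0
df_between_treatments = 2        # k - 1
ss_error = 18.0
df_error = 12                    # df_within - df_between_subjects

ms_between_treatments = ss_between_treatments / df_between_treatments
ms_error = ss_error / df_error
F = ms_between_treatments / ms_error
print(F)   # 24 / 1.5 = 16.0 for these made-up values
```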

Interpreting the Results from a Two-Factor ANOVA

•For interpretation, you start with the interaction. -If there is a significant interaction, the interpretation of the main effects becomes more complicated, because the main effect of factor A depends on the level of factor B. -If there is not a significant interaction, you go straight to the main effects.

Factors in the Outcome of a Repeated-Measures ANOVA/Measures of Effect Size

•Removing individual differences is an advantage only when the treatment effects are reasonably consistent for all of the participants. -If the treatment effects are not consistent across participants, the individual differences tend to disappear and the value in the denominator is not noticeably reduced by removing them.

The F-max value computed for the sample data is compared with

•the critical value found in an F-max table. To locate the critical value in the table, you need to know: -k = the number of separate samples -df = n − 1 for each sample variance. The Hartley test assumes that all samples are the same size.

The final F-ratio is given by

F = variance (differences) between treatments, with individual differences removed / variance (differences) expected with no treatment effect, with individual differences removed

The Scheffe Test

The Scheffé test uses an F-ratio to evaluate the significance of the difference between any two treatment conditions. -One of the most conservative post hoc tests (smallest risk of a Type I error). •The numerator of the F-ratio is an MS between treatments that is calculated using only the two treatments you want to compare. •The denominator is the same MSwithin that was used for the overall ANOVA.

The repeated-measures ANOVA introduces only one new notational symbol.

The letter P is used to represent the total of all the scores for each individual in the study.

Each MS value equals SS/df, and the individual SS and df values are computed in

a two-stage analysis. •The first stage of the analysis is identical to the single-factor ANOVA and separates the total variability (SS and df) into two basic components: between treatments and within treatments.

The between-treatments variability measures

the magnitude of the mean differences between treatment conditions (the individual cells in the data matrix). It is computed using the basic formulas for SSbetween and dfbetween, where the T values (totals) are the cell totals and n is the number of scores in each cell. dfbetween = number of cells (totals) − 1

In each of the t-score formulas...

the standard error (denominator) measures how accurately the sample statistic represents the population parameter. -In the single-sample t formula, the standard error measures the amount of error expected for a sample mean and is represented by sM.

The alternative hypothesis is that

there is an interaction between the two factors: H1: There is an interaction between factors. •To evaluate the interaction, the two-factor ANOVA first identifies mean differences that are not explained by the main effects. •The extra mean differences are then evaluated by an F-ratio with the following structure: variance (mean differences) not explained by main effects / variance (differences) expected if there are no treatment effects

The two-factor ANOVA allows us to examine

three types of mean differences within one analysis. -Traditionally, the two independent variables in a two-factor experiment are identified as factor A and factor B. •For the study presented in Table 14.1, gender is factor A, and the level of violence in the game is factor B.

Comparing Repeated-and Independent-Measures Designs

•A repeated-measures design typically requires fewer subjects than an independent-measures design. •The repeated-measures design is especially well suited for studying learning, development, or other changes that take place over time.

Assumptions underlying the Independent-Measures t Formula:

1.The observations within each sample must be independent. 2.The two populations from which the samples are selected must be normal. 3.The two populations from which the samples are selected must have equal variances.

We must decide between two interpretations:

1.There really are no differences between the populations (or treatments). The observed differences between the sample means are caused by random, unsystematic factors (sampling error). 2.The populations (or treatments) really do have different means, and these population mean differences are responsible for causing systematic differences between the sample means.

Each of the main effects hypothesis tests in a two-factor ANOVA will have its own

F-ratio, and each F-ratio has the same basic structure:
Factor A: variance (differences) between the means for factor A (row means) / variance (differences) expected if there is no treatment effect
Factor B: variance (differences) between the means for factor B (column means) / variance (differences) expected if there is no treatment effect

The final calculation for ANOVA is the

F-ratio, which is composed of two variances:
F-ratio = variance between treatments / variance within treatments
•Each of the two variances in the F-ratio is calculated using the basic formula for sample variance:
sample variance = s^2 = SS / df
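A minimal sketch of the whole calculation for a one-way independent-measures ANOVA, assuming equal-sized groups; the scores are invented purely for illustration:

```python
# Illustrative one-way independent-measures ANOVA using the SS/df
# definitions from these notes (treatment totals and grand total G).
groups = [
    [1, 2, 3, 4],   # treatment 1
    [3, 4, 5, 6],   # treatment 2
    [5, 6, 7, 8],   # treatment 3
]

k = len(groups)                       # number of treatments
N = sum(len(g) for g in groups)       # total number of scores
G = sum(sum(g) for g in groups)       # grand total of all scores

ss_total = sum(x**2 for g in groups for x in g) - G**2 / N
ss_between = sum(sum(g)**2 / len(g) for g in groups) - G**2 / N
ss_within = ss_total - ss_between

ms_between = ss_between / (k - 1)     # MSbetween = SSbetween / dfbetween
ms_within = ss_within / (N - k)       # MSwithin  = SSwithin / dfwithin
F = ms_between / ms_within
print(F)
```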

F-Ratio hypotheses:

If the null hypothesis is true, we expect F to be about 1.00.
If the null hypothesis is false, F should be much greater than 1.00.

Statistical Hypotheses for ANOVA

In general, H0 states that there is no treatment effect. -In an ANOVA with three groups H0 could appear as: H0: μ1 = μ2 = μ3 The alternative hypothesis states that the population means are not all the same: H1: There is at least one mean difference

Two-Factor Independent-Measures ANOVA

In the context of ANOVA, an independent variable (or a quasi-independent variable) is called a factor, and research studies with two factors are called factorial designs or simply two-factor designs.

The logic of the Repeated-Measures ANOVA

Logically, any differences that are found between treatments can be explained by only two factors: 1.Systematic differences caused by the treatments 2.Random, unsystematic differences •The denominator reflects how much difference (or variance) is reasonable to expect from random and unsystematic factors.

The F-Ratio for Repeated-Measures ANOVA

The F-ratio for the repeated-measures ANOVA has the same structure that was used for the independent-measures ANOVA. -We're comparing what we actually found with the amount of difference that would be expected if there were no treatment effect.

Degrees of Freedom

The degrees of freedom for the independent-measures t statistic are determined by the df values for the two separate samples: df for the t statistic = (n1 - 1) + (n2 - 1) = n1 + n2 - 2

Independence of Main Effects and Interactions

The two-factor ANOVA consists of three hypothesis tests, each evaluating specific mean differences. -These are three separate tests, but they are also independent of each other. •The outcome for any one of the three tests is totally unrelated to the outcome for either of the other two. •Thus, it is possible for data from a two-factor study to display any possible combination of significant and/or not significant main effects and interactions.

For ANOVA, we want to compare

differences among two or more sample means. -With more than two samples, the concept of "difference between sample means" becomes difficult to define or measure. •The solution to this problem is to use variance to define and measure the size of the differences among the sample means.

With these simple changes, the t formula for the repeated-measures design becomes:

t = (MD − µD) / sMD -In this formula, the estimated standard error, sMD, is computed in exactly the same way as it is computed for the single-sample t statistic. •The first step is to compute the variance (or the standard deviation) for the sample of D scores: s^2 = SS / (n − 1) -The estimated standard error is then computed using the sample variance and the sample size, n: sMD = √(s^2 / n)
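A short sketch of this calculation on a made-up set of difference scores:

```python
# Repeated-measures t computed from difference scores (illustrative data).
from math import sqrt

D = [2, 3, 1, 4, 2, 3]              # D = X2 - X1 for each participant
n = len(D)
MD = sum(D) / n                      # sample mean difference
SS = sum((d - MD)**2 for d in D)     # SS for the D scores
s2 = SS / (n - 1)                    # sample variance of the D scores
sMD = sqrt(s2 / n)                   # estimated standard error of MD

t = (MD - 0) / sMD                   # H0: mu_D = 0
print(t, n - 1)                      # t statistic and df = n - 1
```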

When the effect of one factor depends on the different levels of a second factor

then there is an interaction between the factors.

For ANOVA, the simplest and most direct way to measure effect size is

to compute the percentage of variance accounted for by the treatment conditions. -The calculation and the concept of the percentage of variance are straightforward. •Specifically, we determine how much of the total SS is accounted for by SSbetween treatments: η2 = SSbetween treatments / SStotal
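For example, if SSbetween treatments = 20 and SStotal = 80, then η2 = 20/80 = .25, meaning the treatment conditions account for 25% of the total variability.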

Repeated Measures and Matched-Subjects Designs

•Because the scores in one set are directly related, one-to-one, with the scores in the second set, the two research designs are statistically equivalent and share the common name related-samples designs (or correlated-samples designs).

Confidence Intervals

•For the independent-measures t, we use a sample mean difference, M1 − M2, to estimate the population mean difference, µ1 − µ2. -The first step is to solve the t equation for the unknown parameter. For the independent-measures t statistic, we obtain: µ1 − µ2 = (M1 − M2) ± t·s(M1−M2)

F-Ratio: Test Statistic For ANOVA

•For the independent-measures ANOVA, the F-ratio has the following structure:
F-ratio = variance between treatments / variance within treatments
Total variability:
1. Between-treatments variance: measures differences caused by a) systematic treatment effects and b) random, unsystematic factors
2. Within-treatments variance: measures differences caused by a) random, unsystematic factors

Hypothesis Testing and Effect Size with the Repeated-Measures ANOVA

•In the first stage of repeated-measures ANOVA, the total variance is partitioned into two components: between-treatments variance and within-treatments variance. •In the second stage, we begin with the variance within treatments and then measure and subtract out the between-subject variance, which measures the size of the individual differences. •The remaining variance, often called the residual variance, or error variance, provides a measure of how much variance is reasonable to expect after the treatment effects and individual differences have been removed.

Another Assumption...

ANOVAs with repeated measures also carry the assumption of sphericity: the variances of the differences between all combinations of related levels are equal. •This assumption is commonly violated. •It is somewhat analogous to homogeneity of variance in a between-subjects ANOVA.

The Critical Region for the Independent-Measures Hypothesis Test

Reject H0 if the obtained t falls outside the critical values of ±2.145 (the two-tailed boundaries for df = 14 with α = .05).

Main Effects

The mean differences among the levels of one factor are referred to as the main effect of that factor. •The evaluation of main effects accounts for two of the three hypothesis tests in a two-factor ANOVA. -We state hypotheses concerning the main effect of factor A and the main effect of factor B and then calculate two separate F-ratios to evaluate the hypotheses. -In symbols:
Factor A: H0: μA1 = μA2; H1: μA1 ≠ μA2
Factor B: H0: μB1 = μB2; H1: μB1 ≠ μB2

The complete formula for the independent-measures t statistic is as follows:

t = (sample mean difference − population mean difference) / estimated standard error
t = [(M1 − M2) − (µ1 − µ2)] / s(M1−M2)

Assumptions of the Repeated-Measures ANOVA

The basic assumptions for the repeated-measures ANOVA are relatively similar to those required for the independent-measures ANOVA. 1.The observations within each treatment condition must be independent. 2.The population distribution within each treatment must be normal. 3.The variances of the population distributions for each treatment should be equivalent.

The Distribution of F-Ratios

To determine whether we reject the null hypothesis, we have to look at the distribution of F-ratios. •But, you should note two obvious characteristics: 1.F values always are positive numbers because variance is always positive. 2.When H0 is true, the numerator and denominator of the F-ratio are measuring the same variance.

The df associated with SSbetween can be found by

considering how the SS value is obtained. -This SS formula measures the variability for the set of treatments (totals or means). To find dfbetween, simply count the number of treatments and subtract 1. Because the number of treatments is specified by the letter k, the formula for df is: dfbetween = k - 1

The hypotheses for the repeated-measures ANOVA are _____________ as those for the independent-measures.

exactly the same. The null hypothesis states that for the general population there are no mean differences among the treatment conditions being compared. In symbols, H0: μ1 = μ2 = μ3 = ... -The alternative hypothesis states that there are mean differences among the treatment conditions. Rather than specifying exactly which treatments are different, we use a generic version of H1, which simply states that differences exist.

We want to compare the differences we actually found with...

how much difference is reasonable to expect if there is genuinely no treatment effect (i.e., the null hypothesis is true) -As a result, the repeated-measures F-ratio has the following structure: F = (treatment effects + random, unsystematic differences) / (random, unsystematic differences)

For a repeated-measures study, the null hypothesis states that...

• the mean difference for the general population is zero. In symbols: H0: μD = 0 •The alternative hypothesis states that there is a treatment effect that causes the scores in one treatment condition to be systematically higher (or lower) than the scores in the other condition. In symbols, H1: µD ≠ 0

The interaction measures the

"extra" mean differences that exist after the main effects for factor A and factor B have been considered. The SS and df values for the interaction are found by subtraction. SSAxB = SSbet treatment ─ SSA ─ SSB dfAxB = dfbet treatment ─ dfA ─ dfB

For a two-factor ANOVA, we compute three separate values for eta squared:

-Main effect of factor A -Main effect of factor B -Interaction of A and B

Assumptions for the Independent-Measures ANOVA

1. Observations within each sample must be independent 2. Populations from which samples are selected must be normal 3. Populations from which samples are selected must have equal variances (homogeneity of variance)

The estimated standard error s(M1 − M2) can be interpreted in two ways:

1. The standard error is defined as a measure of the standard or average distance between a sample statistic (M1 − M2) and the corresponding population parameter (µ1 − µ2). 2. When the null hypothesis is true, the standard error measures how big, on average, the sample mean difference is expected to be.

•The hypothesis test with the repeated-measures t statistic follows the same four-step process that we have used for other tests

1.State the hypotheses, and select the alpha level. 2.Locate the critical region. 3.Calculate the t statistic. 4.Make a decision.

Measuring the Effect Size for ANOVA

A significant mean difference simply indicates that the difference observed in the sample data is very unlikely to have occurred just by chance. - Thus, the term significant does not necessarily mean large; it simply means larger than would be expected by chance. - To provide an indication of how large the effect actually is, it is recommended that researchers report a measure of effect size in addition to the measure of significance.

Interactions

An interaction between two factors occurs whenever the mean differences between individual treatment conditions, or cells, are different from what would be predicted from the overall main effects of the factors. •The null hypothesis is that there is no interaction H0: There is no interaction between factors A and B. The mean differences between treatment conditions are explained by the main effects of the two factors.

Analysis of Variance (ANOVA)

Analysis of variance (ANOVA) is a hypothesis-testing procedure that is used to evaluate mean differences between two or more treatments/conditions (or populations). -The major advantage of ANOVA is that it can be used to compare two or more treatments.

Type 1 Errors and Multiple-Hypothesis Tests

Each time you do a hypothesis test, you select an alpha level that determines the risk of a Type I error. -Often a single experiment requires several hypothesis tests to evaluate all the mean differences. •However, each test has a risk of a Type I error, and the more tests you do, the greater the risk.

The F Distribution Table

For ANOVA, we expect F near 1.00 if H0 is true. -An F-ratio that is much larger than 1.00 is an indication that H0 is not true. In the F distribution, we need to separate those values that are reasonably near 1.00 from the values that are significantly greater than 1.00. -These critical values are presented in an F distribution table.

The Null Hypothesis and the Independent-Measures t Statistic

Goal: evaluate the mean difference between two populations (or between separate conditions) - As always, the null hypothesis states that there is no change, no effect, or no difference - the difference between the means is simply µ1 − µ2

Terminology in ANOVA

In analysis of variance, the variable (independent or quasi-independent) that designates the groups being compared is called a factor. -The individual conditions or values that make up a factor are called the levels of the factor. -A study that combines two factors is called a two-factor design or a factorial design.

Post-Hoc Tests

Post hoc tests (or posttests) are additional hypothesis tests that are done after an ANOVA to determine exactly which mean differences are significant and which are not. -In statistical terms, this is called making pairwise comparisons. -The process of conducting pairwise comparisons involves performing a series of separate hypothesis tests -As you do more and more separate tests, the risk of a Type I error accumulates and is called the experimentwise alpha level.

ANOVA notation and formulas

The letter k is used to identify the number of treatment conditions—that is, the number of levels of the factor. - For an independent-measures study, k also specifies the number of separate samples. -The number of scores in each treatment is identified by a lowercase letter n. -The total number of scores in the entire study is specified by a capital letter N. -The sum of the scores (ΣX) for each treatment condition is identified by the capital letter T (for treatment total). -The sum of all the scores in the research study (the grand total) is identified by G.

Measuring Effect Size for the Repeated-Measures Analysis of Variance

The most common method for measuring effect size with ANOVA is to compute the percentage of variance that is explained by the treatment differences. •The formula for computing effect size for a repeated-measures ANOVA is:
η2 = SSbetween treatments / (SStotal − SSbetween subjects) = SSbetween treatments / (SSbetween treatments + SSerror)

Repeated-Measures ANOVA

The repeated-measures ANOVA is used to evaluate mean differences in two general research situations: 1.An experimental study in which the researcher manipulates an independent variable to create two or more treatment conditions, with the same group of individuals tested in all of the conditions. 2.A nonexperimental study in which the same group of individuals is simply observed at two or more different times.

Compared to independent-measures designs...

The structure of the F-ratio is the same -BUT: individual differences are a part of the independent-measures F-ratio and are eliminated from the repeated-measures F-ratio.

Repeated-Measures ANOVA and Repeated-Measures t

The two tests always reach the same conclusion about the null hypothesis. •The basic relationship between the two test statistics is F = t2. •The df value for the t statistic is identical to the df value for the denominator of the F-ratio. •If you square the critical value for the two-tailed t test, you will obtain the critical value for the F-ratio. Again, the basic relationship is F = t2.

Calculating the Estimated Standard Error

To develop the formula for s(M1-M2) we consider three points: 1. Each of the two sample means represents its own population mean, but in each case there is some error. 2. The amount of error associated with each sample mean is measured by the estimated standard error of M. -Meaning we can calculate it! 3. For the independent-measures t statistic, we want to know the total amount of error involved in using two estimates (sample means) of the population parameters (population means). a. To do this, if the samples are the same size, we find the error from each sample separately and then add the two errors together. b. When the samples are of different sizes, a pooled or average estimate, which allows the bigger sample to carry more weight in determining the final value, is used.

Tukey's Honestly Significant Difference (HSD) Test

Tukey's test allows you to compute a single value that determines the minimum difference between treatment means that is necessary for significance. -This value, called the honestly significant difference, or HSD, is then used to compare any two treatment conditions. •If the mean difference exceeds Tukey's HSD, you conclude that there is a significant difference between the treatments. •Otherwise, you cannot conclude that the treatments are significantly different.

The Formulas for an Independent-Measures Hypothesis Test

We're using the difference between two sample means to evaluate a hypothesis about the difference between two population means. Thus, the independent-measures t formula is:
t = (sample mean difference − population mean difference) / estimated standard error
t = [(M1 − M2) − (µ1 − µ2)] / s(M1−M2)
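A minimal sketch of the full calculation, using the pooled variance described earlier; the two samples are invented for illustration:

```python
# Independent-measures t with a pooled variance estimate (illustrative data).
from math import sqrt

sample1 = [4, 6, 5, 7, 8]
sample2 = [2, 3, 4, 3, 3]
n1, n2 = len(sample1), len(sample2)
M1, M2 = sum(sample1) / n1, sum(sample2) / n2

SS1 = sum((x - M1)**2 for x in sample1)
SS2 = sum((x - M2)**2 for x in sample2)
sp2 = (SS1 + SS2) / ((n1 - 1) + (n2 - 1))    # pooled variance
s_M1M2 = sqrt(sp2 / n1 + sp2 / n2)           # estimated standard error

t = ((M1 - M2) - 0) / s_M1M2                 # H0: mu1 - mu2 = 0
print(t, n1 + n2 - 2)                        # t statistic and df
```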

To use the table, you must know the:

df values for the F-ratio (numerator and denominator), and you must know the alpha level for the hypothesis test. -It is customary for an F table to have the df values for the numerator of the F-ratio printed across the top of the table. -The df values for the denominator of F are printed in a column on the left-hand side.

Within-subjects differences between two or more experimental conditions are used as an _____________________ measure

individual difference -Fear acquisition -Threat sensitivity -Emotional processing and regulation -Reward sensitivity •Often times, studies are interested in between-group differences using repeated-measures designs -MDD v. no MDD in reward sensitivity -High and low psychopathy in threat sensitivity -Increasing focus in psychological science •NIMH RDoC

The second stage of the analysis involves measuring the

individual differences and then removing them from the denominator of the F-ratio:
SSbetween subjects = Σ(P^2 / k) − G^2 / N
SSerror = SSwithin treatments − SSbetween subjects
dfbetween subjects = n − 1
dferror = dfwithin treatments − dfbetween subjects
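A compact sketch of this second stage, assuming a data matrix with one row per participant and one column per treatment; the scores are invented for illustration:

```python
# Stage 2 of the repeated-measures ANOVA: remove individual differences
# (the between-subjects SS) from the within-treatments SS (illustrative data).
data = [
    [3, 5, 7],   # person 1 across k = 3 treatments
    [2, 4, 9],   # person 2
    [4, 6, 8],   # person 3
]

k = len(data[0])                  # number of treatments
n = len(data)                     # number of participants
N = k * n
G = sum(sum(row) for row in data)

P = [sum(row) for row in data]                        # person totals
T = [sum(row[j] for row in data) for j in range(k)]   # treatment totals

ss_within = sum(sum((row[j] - T[j] / n)**2 for row in data) for j in range(k))
ss_between_subjects = sum(p**2 / k for p in P) - G**2 / N
ss_error = ss_within - ss_between_subjects

df_error = (N - k) - (n - 1)      # df_within - df_between_subjects
print(ss_error, df_error)
```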

ANOVA is considered a relatively ________ analysis:

robust -Can tolerate violating the homogeneity of variance assumption relatively well •The assumption of homogeneity of variance is an important one. -If a researcher suspects it has been violated, it can be tested by Hartley's F-max test for homogeneity of variance.

The main advantage of a repeated-measures study is

that it uses exactly the same individuals in all treatment conditions -There is no risk that the participants in one treatment are substantially different from the participants in another. •Your 'comparison' group is the same as your 'experimental' group

The within-treatments variability measures

the magnitude of the differences within each treatment condition (cell) and provides a measure of error variance, that is, unexplained, unpredicted differences due to error. •All three F-ratios use the same denominator, MSwithin

The concept of an interaction can also be defined in terms of

the pattern displayed in the graph. •When the results of a two-factor study are presented in a graph, the existence of nonparallel lines (lines that cross or converge) indicates an interaction between the two factors.

Repeated-Measures Design

•A repeated-measures design, or a within-subject design, is one in which the dependent variable is measured two or more times for each individual in a single sample. -The same group of subjects is used in all of the treatment conditions. •Affective neuroscience (e.g., passive picture viewing task) •Cognitive tasks (e.g., Stroop)

Effect Size and Confidence Intervals for the Repeated-Measures t

•The most commonly used measures of effect size are Cohen's d and r2, the percentage of variance accounted for. estimated d = MD / s •The size of the treatment effect can also be described with a confidence interval estimating the population mean difference, µD: µD = MD ± t·sMD

The Null Hypothesis

•The null hypothesis for the independent-measures test: H0: µ1 − µ2 = 0 •The alternative hypothesis: H1: µ1 − µ2 ≠ 0

Advantages of the Repeated-Measures Design

•The primary advantage of the repeated-measures ANOVA is the elimination of variability caused by individual differences. -In statistical terms, a repeated-measures test has more power than an independent-measures test; that is, it is more likely to detect a real treatment effect.

Assumptions of the Related-Samples t Test

•The related-samples t statistic requires two basic assumptions. 1.The observations within each treatment condition must be independent. - Notice that the assumption of independence refers to the scores within each treatment. 2.The population distribution of difference scores (D values) must be normal.

The role of Sample Variance and Sample Size in the Independent-Measures t Test

•Two factors that play important roles in the outcomes of hypothesis tests are the variability of the scores and the size of the samples. -Both factors influence the magnitude of the estimated standard error in the denominator of the t statistic. -The standard error is directly related to sample variance so that larger variance leads to larger error. •As a result, larger variance produces a smaller value for the t statistic (closer to zero) and reduces the likelihood of finding a significant result. -By contrast, the standard error is inversely related to sample size (larger size leads to smaller error). •Thus, a larger sample produces a larger value for the t statistic (farther from zero) and increases the likelihood of rejecting H0.

For the independent-measures t formula, the standard error measures the amount of error that is expected when...

•you use a sample mean difference (M1 − M2) to represent a population mean difference (µ1 − µ2). •The standard error for the sample mean difference is represented by the symbol s(M1-M2).

Hartley's F-max test provides a formal method for testing this assumption

-The F-max test is based on the principle that a sample variance provides an unbiased estimate of the population variance. -Null hypothesis: the population variances are equal; therefore, the sample variances should be very similar.

Difference Scores: The Data for a Repeated-Measures Study

-The difference score for each individual is computed by: difference score = D = X2 − X1, where X1 is the person's score in the first treatment and X2 is the score in the second treatment.

With these two factors in mind, we can sketch the distribution of F-ratios:

-The distribution is cut off at zero (all positive values), piles up around 1.00, and then tapers off to the right. -The exact shape of the F distribution depends on the degrees of freedom for the two variances in the F-ratio.

Other points to consider in comparing the t statistic to the F-ratio:

-You will be testing the same hypotheses whether you choose a t test or an ANOVA. H0: μ1 = μ2 H1: μ1 ≠ μ2 -The degrees of freedom for the t statistic and the df for the denominator of the F-ratio (dfwithin) are identical. -The distribution of t and the distribution of F-ratios match perfectly if you take into consideration the relationship F = t2.

The research designs that are used to obtain the 2 sets of data can be classified in two general categories:

1. The two sets of data could come from two completely separate groups of participants (between-subjects design) - ex: CBT + Mindfulness vs. CBT; shock w/ error vs. no shock w/ error 2. The two sets of data could come from the same group of participants (within-subjects design) -ex: pre and post-therapy symptoms

To find the df associated with SStotal:

1. You must first recall that this SS value measures variability for the entire set of N scores. Therefore, the df value is dftotal = N − 1. 2. To find the df associated with SSwithin, we must look at how this SS value is computed. -Remember, we first find SS inside each of the treatments and then add these values together. -Each of the treatment SS values measures variability for the n scores in the treatment, so each SS has df = n − 1. When all these individual treatment values are added together, we obtain: dfwithin = Σ(n − 1) = Σdf in each treatment
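For example, with k = 3 treatments and n = 10 scores in each (N = 30): dftotal = 29, dfwithin = 3(10 − 1) = 27, and dfbetween = k − 1 = 2, which checks out because 2 + 27 = 29.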

The "safety factor" for the Scheffé test comes from the following two considerations:

1.The Scheffé test uses the value of k from the original experiment to compute df between treatments. - Thus, df for the numerator of the F-ratio is k - 1. 2. The critical value is the same as was used to evaluate the F-ratio from the overall ANOVA.

The primary advantage of a repeated-measures design is that...

it reduces or eliminates problems caused by individual differences. -Individual differences are characteristics such as age, IQ, gender, and personality that vary from one individual to another. -These individual differences can influence the scores obtained in a research study, and they can affect the outcome of a hypothesis test.

The second stage of the analysis separates

the between-treatments variability into the three components that will form the numerators for the three F-ratios: -Variance due to factor A -Variance due to factor B -Variance due to the interaction. -Each of the three variances (MS) measures the differences for a specific set of sample means. •The main effect for factor A, for example, will measure the mean differences between rows of the data matrix.

For ANOVA, the denominator of the F-ratio is called:

the error term.

The actual formulas for each SS and df are based on

the sample totals (rather than the means) and all have the same structure -For factor A, the totals are the row totals and the df equals the number of rows minus 1. For factor B, the totals are the column totals and the df equals the number of columns minus 1

The within-treatments sum of squares is

the sum of all of the SSs within each of the three treatment conditions:
SSwithin treatments = ΣSSinside each treatment
•The between-treatments sum of squares is given by: SSbetween = SStotal − SSwithin

Effect Size and Confidence Intervals for the Independent-Measures t

•Compute the test statistic. •Make a decision. -If the t statistic indicates that the obtained difference between sample means (numerator) is substantially greater than the difference expected by chance (denominator), •... we reject H0 and conclude that there is a real mean difference between the two populations or treatments.

Analysis of Sum of Squares

•First, compute a total sum of squares and then partition this value into two components: between treatments and within treatments. -As the name implies, SStotal is the sum of squares for the entire set of N scores. -It is usually easiest to calculate SStotal using the computational formula: SStotal = ΣX^2 − (ΣX)^2 / N
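A one-line check of the computational formula on a small, made-up set of scores:

```python
# SStotal from the computational formula (illustrative scores).
scores = [2, 4, 4, 6, 9]
N = len(scores)
ss_total = sum(x**2 for x in scores) - sum(scores)**2 / N
print(ss_total)   # 153 - 625/5 = 28.0
```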

Calculation of Variances (MS) and the F-Ratio

•In ANOVA, it is customary to use the term mean square, or simply MS, in place of the term variance. -For the final F-ratio, we will need an MS (variance) between treatments for the numerator and an MS (variance) within treatments for the denominator. In each case:
MS (variance) = s^2 = SS / df
MSbetween = s^2 between = SSbetween / dfbetween
MSwithin = s^2 within = SSwithin / dfwithin

Matched-Subjects Design

•In a matched-subjects study, each individual in one sample is matched with an individual in the other sample. -Age, Intelligence, SES, Psychopathology, etc.

Hypothesis Tests for the Repeated-Measures Design

•In a repeated-measures study, each individual is measured in two different treatment conditions and we are interested in whether there is a systematic difference between the scores in the first treatment condition and the scores in the second treatment condition. -A difference score is computed for each person. -The hypothesis test uses the difference scores from the sample to evaluate the overall mean difference, µD, for the entire population.

Analysis of Degrees of Freedom (df)

•In computing the degrees of freedom, there are two important considerations to keep in mind: 1.Each df value is associated with a specific SS value. 2.Normally, the value of df is obtained by counting the number of items that were used to calculate SS and then subtracting 1. For example, if you compute SS for a set of n scores, then df = n - 1.

Within-Treatments Variance

•Inside each treatment condition, we have a set of individuals who receive the same treatment. -The researcher does not do anything that would cause these individuals to have different scores, yet they usually do have different scores. •The differences represent random and unsystematic differences that occur when there are no treatment effects. -Thus, the within-treatments variance provides a measure of how big the differences are when H0 is true.

Measuring Effect Size for the Independent-Measures t

•One technique for measuring effect size is Cohen's d, which produces a standardized measure of mean difference.

Between-Treatments Variance

•The between-treatments variance simply measures how much difference exists between the treatment conditions. -There are two possible explanations for these between-treatment differences: 1.The differences are the result of sampling error. 2.The differences between treatments have been caused by the treatment effects.

Hypothesis Test with the Independent-Measures t Statistic

•The independent-measures t statistic uses the data from two separate samples to help decide whether there is a significant mean difference between two populations (or between two treatment conditions). 1.State the hypotheses and select the alpha level. 2.Compute the df for an independent-measures design. 3.Obtain the data and compute the test statistic. 4.Make a decision.

Time-Related Factors and Order Effects

•The primary disadvantage of a repeated-measures design is that the structure of the design allows for factors other than the treatment effect to cause a participant's score to change from one treatment to the next. -Specifically, in a repeated-measures design, each individual is measured in two different treatment conditions, often at two different times.

Hypotheses for a Related-Samples Test

•The researcher's goal is to use the sample of difference scores to answer questions about the general population. -The researcher would like to know whether there is any difference between the two treatment conditions for the general population.

The t Statistic for a Repeated-Measures Research Design

•The single-sample t statistic formula will be used to develop the repeated-measures t test. t = (M − µ) / sM -The sample mean, M, is calculated from the data, and the value for the population mean, µ, is obtained from the null hypothesis. -The estimated standard error, sM, is calculated from the data and provides a measure of how much difference can be expected between a sample mean and the population mean.

t Statistic for a Repeated-Measures Research Design

•The t statistic for a repeated-measures design is structurally similar to the other t statistics we have examined. -The major distinction of the related-samples t is that it is based on difference scores rather than raw scores (X values).

The Test Statistic for ANOVA

•The test statistic for ANOVA is very similar to the t statistics used in earlier chapters. -For the t statistic, we first computed the standard error, which measures how much difference between two sample means is reasonable to expect if there is no treatment effect (that is, if H0 is true). -The test statistic for ANOVA uses the same logic to compute an F-ratio with the following structure: F = variance between sample means / variance expected with no treatment effect •The analysis process divides the total variability into two basic components: between-treatments variance and within-treatments variance.

For this reason, researchers often make a distinction between the testwise alpha level and the experimentwise alpha level.

•The testwise alpha level is the risk of a Type I error, or alpha level, for an individual hypothesis test. •When an experiment involves several different hypothesis tests, the experimentwise alpha level is the total probability of a Type I error that is accumulated from all of the individual tests in the experiment.

Homogeneity of Variance

•The third assumption is referred to as homogeneity of variance and states that the two populations being compared must have the same variance. -This is particularly important when you have large sample size differences!

Two-Factor ANOVA and Effect Size

•The two-factor ANOVA is composed of three distinct hypothesis tests: 1.The main effect of factor A (often called the A-effect). 2.The main effect of factor B (called the B-effect). 3.The interaction (called the A × B interaction). All three F-ratios have the same basic structure: variance (differences) b/w treatments / variance (differences) expected if there are no treatment effects

Assumptions of the Two-Factor ANOVA

•The validity of this ANOVA depends on the same three assumptions we have encountered with other designs: -The observations within each sample must be independent. -The populations from which the samples are selected must be normal. -The populations from which the samples are selected must have equal variances (homogeneity of variance).

The Relationship between ANOVA and t Tests

•When you are evaluating the mean difference from an independent-measures study comparing only two treatments (two separate samples), you can use either an independent-measures t test or the ANOVA. -The basic relationship between t statistics and F-ratios can be stated in an equation: F = t2
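A quick numerical check of this relationship, assuming SciPy is available for the critical-value lookups:

```python
# Squaring the two-tailed t critical value reproduces the F critical value
# with df = (1, df). SciPy is assumed to be installed.
from scipy import stats

df = 18
t_crit = stats.t.ppf(0.975, df)        # two-tailed t, alpha = .05
f_crit = stats.f.ppf(0.95, 1, df)      # F with df = (1, 18), alpha = .05
print(t_crit**2, f_crit)               # the two values agree (about 4.41)
```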

One way to deal with time-related factors and order effects is to...

•counterbalance the order of presentation of treatments. -That is, the participants are randomly divided into two groups, with one group receiving treatment 1 followed by treatment 2, and the other group receiving treatment 2 followed by treatment 1. The goal of counterbalancing is to distribute any outside effects evenly over the two treatments

For the repeated-measures design, the sample data are...

•difference scores and are identified by the letter D, rather than X. -The population mean that is of interest to us is the population mean difference (the mean amount of change for the entire population), and we identify this parameter with the symbol µD.

Stage 1 of the repeated-measures analysis is _________ to the independent-measures ANOVA

•identical
SStotal = ΣX^2 − G^2 / N
dftotal = N − 1
SSwithin treatments = ΣSSinside each treatment
dfwithin treatments = Σdfinside each treatment
SSbetween treatments = Σ(T^2 / n) − G^2 / N
dfbetween treatments = k − 1

