Test 3

factorial ANOVA

-2+ nominal IV, each with 2+ levels -scale DV -between-groups, repeated-measures, or mixed design

one-way repeated-measures ANOVA: SStotal

-an index of the overall variability around the grand mean -it does not tell you what caused the variability

F is also a family of distributions

-as values get farther and farther from the expected value, they are less and less likely to occur -identified by two degrees of freedom measures: 1. one df is associated with the numerator (number of samples/conditions) 2. the other df is associated with the denominator (sample sizes) -F = 1 is the expected value under the null in ANOVA (this is different than before) -there are no more one-tailed vs. two-tailed tests (F only goes from 0 up!)

two-way between-groups ANOVA:

-compare each Fstat to the critical F -if an Fstat is greater than Fcrit, that effect is significant and you reject its null hypothesis -you will reject or fail to reject each of the main effects and the interaction separately, and there can be any combination of them

assumption of homoscedasticity

-means that the samples in the different groups come from populations with similar variances

type 2 error

-a miss: failing to reject the null hypothesis when the null hypothesis is false -saying that nothing happened when it did -we don't want a miss in research, especially for important things -type 2 error rate: 1 - power

logic of Analysis of Variance (ANOVA)

-null hypothesis: ANOVA starts with the assumption that all groups (aka treatments; conditions) will yield the same outcome -alternative hypothesis: after data collection, one could end up with the claim that all groups do not produce the same outcome

F

-test statistic for ANOVA

two-way between-groups ANOVA: set up hypotheses

-there will be a null and alternative hypothesis for the 3 effects: 2 main effects and the interaction 1. main effect for IV-A: -null: the DV is not affected by IV-A: mean of one level of the IV-A = mean of the other level of the IV-A -alternative: the DV is affected by IV-A: mean of one level of the IV-A does not equal the mean of the other level 2. main effect for IV-B (same process as above) 3. interaction: -null (not easily formulated as symbols): the effect of IV-A is not dependent on IV-B -alternative (not easily formulated as symbols): the effect of IV-A depends on IV-B

JASP activity: why did we do post-hoc Tukey tests instead of standard independent-samples t-tests?

-to avoid inflating the type 1 error rate, which would happen if we simply conducted multiple t-tests

why use an ANOVA instead of a bunch of t-tests?

-used to keep your type 1 error rate low -running many t-tests on the same data set increases the probability of a type 1 error (ex. if you run 6 t-tests when there is truly no difference, about 26% of the time we would falsely claim there is an effect that isn't there, aka falsely reject the null hypothesis) -ANOVA is designed to assess all differences at once to keep the type 1 error rate down
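
A quick sketch of where that roughly 26% figure comes from, assuming the t-tests are independent and each uses alpha = .05 (a simplifying assumption):

```python
# Family-wise type 1 error rate for m independent tests, each run at alpha = .05:
# P(at least one false alarm) = 1 - (1 - alpha)^m
alpha = 0.05
for m in (1, 3, 6, 10):
    familywise = 1 - (1 - alpha) ** m
    print(f"{m:2d} tests -> family-wise error rate = {familywise:.2f}")
# 6 tests -> about 0.26, i.e. the roughly 26% false-rejection rate mentioned above
```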

t-tests

-used when comparing two sets of scores

statistics

values calculated from a sample, such as sample mean

one-way repeated-measures ANOVA: calculate F

-F is still MSbetween/MSwithin

quantitative interaction

-one IV exhibits a strengthening or weakening of its effect at one or more levels of the other IV, but the direction of the initial effect does not change

sampling distribution recap

-the standard normal z-distribution: -we can convert any set of scores into a standardized set of values -symmetrical -bell-shaped -mean = 0 -known probabilities -the t-distribution is a family of distributions: the shape of the distribution depends on the number of observations (degrees of freedom)

If a researcher reports the following finding, how many post hoc tests should they do? There was a significant effect of instruction condition on accuracy, F (3, 67) = 4.33, p < .05.

-this means there were 4 groups, since the first df in parentheses is the number of groups - 1 (4 - 1 = 3), so they should run 6 post hoc tests
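
A minimal sketch of that counting step (group labels here are purely illustrative):

```python
from itertools import combinations

# F(3, 67): the numerator df is number of groups - 1, so k = 3 + 1 = 4 groups
k = 3 + 1
pairs = list(combinations("ABCD"[:k], 2))
print(len(pairs), pairs)   # 6 pairwise post hoc comparisons, i.e. k * (k - 1) / 2
```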

A researcher designs an experiment in which the single independent variable has five levels. If the researcher performed an ANOVA and rejected the null hypothesis, how many post hoc comparisons would he make (assuming he was making all possible comparisons)?

- 10

If a researcher reports the following finding, how many post hoc tests should they do? There was no statistically significant effect of physical crowding on aggression, F (2, 29) = 1.82, p > .05.

-0 -only do post hoc tests if omnibus ANOVA is significant

If you know a researcher used a between-groups ANOVA to analyze her data, what is the minimum number of groups in her study?

-3

how many F statistics do we examine and compare to critical values in a two-way ANOVA?

-3

ANOVA nomenclature

-ANOVA is preceded by two adjectives that indicate: 1. number of independent variables: "one way" or "two way" 2. research design: "between groups" "repeated-measures" or "within-groups"

one-way between-groups ANOVA: calculating F

-F = MSbetween/MSwithin -MSbetween: SSbetween/dfbetween -MSwithin: SSwithin/dfwithin
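
A minimal Python sketch of the SS → MS → F arithmetic, using made-up scores for three groups:

```python
import numpy as np

# Hypothetical scores for three groups (made-up numbers, just to show the arithmetic)
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0]),
          np.array([5.0, 6.0, 7.0])]

grand_mean = np.mean(np.concatenate(groups))

# SSbetween: each group mean's deviation from the grand mean, weighted by group size
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SSwithin: deviations of individual scores around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1                            # number of groups - 1
df_within = sum(len(g) for g in groups) - len(groups)   # total N - number of groups

ms_between = ss_between / df_between
ms_within = ss_within / df_within
F = ms_between / ms_within
print(ms_between, ms_within, F)   # here: 7.0, 1.0, F = 7.0
```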

one-way between-groups ANOVA

-F=between groups variance (MSb) / within groups variance (MSw) -between-groups variance is aka "variance of the means" -within-groups variance is aka "mean of the variances" -MSb and MSw are two estimates of the population variance -when the null is true, MSb=MSw and thus F=1 (so F=1 under the null in ANOVA) -when the null is false, MSb will be larger than MSw and thus F>1 -Fstat gets larger with bigger group differences and/or smaller differences within groups

planned comparison test

-IF before data collection, we had reasons to expect a difference between 2-week and 8-week (and for some reason we're not much interested in the other pairwise comparisons) -run a follow up t-test using original alpha -the thing is that you can't go back after the fact and act like you only wanted to compare the two

NHST

-NHST always begins with a claim about a parameter, such as a population mean (µ0) -µ0 is the known population mean -µ1 is the mean of the new, unknown population that you are using the sample mean (X̄) to represent -two possible outcomes: H0: µ0 = µ1 and H1: µ0 ≠ µ1

type 1 error

-a false alarm -you rejected the null hypothesis when the null hypothesis was true -saying something happened when it didn't -the type 1 error rate is set by alpha (if we have an alpha of .05, then we'd be wrong 5% of the time when the null is true)

two-way mixed factorial ANOVA

-applies to an experiment with one within-groups IV and one between-groups IV.

effect size for overall ANOVA

-you can get an effect size for the overall IV in the ANOVA: there are several options, including η² (eta squared), R² (R squared), and ω² (omega squared) -they are basically all saying the same thing: how much of the variability in the DV can be accounted for by the IV (some variation of SSbetween / SStotal) -the guidelines for small, medium, and large are different than for Cohen's d

two-way between-groups ANOVA: identify/define sampling distribution

-dfRows (the left IV in the table) = Nrows - 1 = 2 - 1 = 1 -dfColumns (the top IV in the table) = Ncolumns - 1 = 2 - 1 = 1 -dfInteraction = (dfRows)(dfColumns) -dfWithin = add up the df for each of the four conditions, where each condition's df is the number of participants in that condition - 1 -dfTotal = total number of participants - 1
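
A small sketch of that df bookkeeping for a hypothetical 2 × 2 between-groups design with 10 participants per cell (the cell size is an assumption, just for illustration):

```python
# df bookkeeping for a hypothetical 2 x 2 between-groups design with 10 people per cell
n_rows, n_cols, n_per_cell = 2, 2, 10
N = n_rows * n_cols * n_per_cell

df_rows = n_rows - 1
df_cols = n_cols - 1
df_interaction = df_rows * df_cols
df_within = n_rows * n_cols * (n_per_cell - 1)   # (n - 1) summed over the four cells
df_total = N - 1

# Check: the pieces should add up to df_total
assert df_rows + df_cols + df_interaction + df_within == df_total
print(df_rows, df_cols, df_interaction, df_within, df_total)   # 1 1 1 36 39
```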

one-way repeated-measures ANOVA: identify/define sampling distribution

-dfbetween = number of groups - 1 -dfsubjects = number of participants - 1 -dfwithin = (dfbetween)(dfsubjects) -dftotal = add up all 3 of the other ones, or take the total number of observations - 1

statistically significant

-difference so large that chance is not a plausible explanation for the difference

interaction

-interaction: when the effect of one IV on the DV changes across levels of the other IV -ex: "is the effect of the top IV different with the upper level of the left IV compared to the lower level of the left IV?" -start with simple effects (do NOT use marginal means) -then compare simple effects Steps in the table: 1. compare the difference in the top row with the difference in the lower row 2. if there is any difference between the two differences, then you have an interaction (one possibility is that there's a crossover, so the top row is the opposite of the bottom row) Steps in the graph: 1. compare the difference between the two bars on the left with the difference between the two bars on the right -note: again, this depends on what variable is on the x-axis. if they put the left variable on the x-axis, then the above is correct. however, if they put the top variable on the x-axis, then you would be comparing the difference in one-colored bars to the difference in the other-colored bars

one-way between-groups ANOVA: make a conclusion

-reject the null if Fstat (calculated) > Fcrit -"there is a significant difference between hospitals in terms of wait time, F(dfbetween, dfwithin) = Fstat, p < .05"

when is the mean larger than the median: in a positively skewed or negatively skewed distribution?

-the mean is greater than the median in a positively skewed distribution

mean squares

-what we call variance in an ANOVA

interpreting the (visual) effects using a factorial design

-you can have ANY combo of main effects and interactions -when there is a significant interaction, the main effects don't tell the whole story (and main effects can even be misleading when there is a significant interaction): follow up with tests of simple effects to characterize the interaction, since ANOVA doesn't actually give us answers about each simple effect -the interaction typically overrides main effects -IF there is an interaction, we pay attention to the interaction and to the simple effects -IF there is NOT an interaction, we pay most attention to the main effects -note: in practice, to know if there really are main effects or an interaction, we need to run the ANOVA

one-way between-groups ANOVA: choose the sampling distribution

-you need to use degrees of freedom to decide which F-distribution to use 1. dfbetween = Ngroups - 1 (the number of levels of the IV - 1) 2. dfwithin = total N (number of participants) - number of groups (another way of doing it is df1 + df2 + df3, where each group's df is n - 1) 3. dftotal = dfbetween + dfwithin (a check on it is Ntotal - 1) -then use the F table to determine the critical value
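
Instead of the printed table, software can return the same cutoff; a sketch with hypothetical numbers (3 groups, 30 participants, alpha = .05):

```python
from scipy import stats

# Hypothetical one-way between-groups design: 3 groups, 30 participants total
n_groups, n_total, alpha = 3, 30, 0.05
df_between = n_groups - 1          # 2
df_within = n_total - n_groups     # 27

# Critical F cutting off the upper 5% of the F(2, 27) distribution
f_crit = stats.f.ppf(1 - alpha, df_between, df_within)
print(round(f_crit, 2))            # roughly 3.35, the same value a printed F table gives
```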

two-way between-groups ANOVA: calculate F

-you will have 3 Fstats: two main effects and one interaction

what a single person's z score indicates (percentile)

-z-score: how many standard deviations away is this value from the mean of the distribution it's drawn from?

looking at it from a different angle: does the effect of IV-B depend on the different levels of IV-A?

1. Simple effect: is there an effect of the left IV, for each level of the top IV? -table: compare the inside means of the left column for the first simple effect, and compare the inside means of the right column for the second simple effect -graph: compare the two grey bars for the first simple effect, and compare the two white bars for the second simple effect 2. Interaction: -table: compare the difference b/w the left column to the difference b/w the right column -graph: compare the difference b/w the grey bars to the difference b/w the white bars

post-hoc tests

1. between-groups design: -can use Tukey tests, Bonferroni, others 2. within-groups design: -Tukey is not designed for within-groups designs -can use Bonferroni, though -there are many versions of post-hoc tests -some are more conservative than others; in other words, corrections intended to avoid type 1 errors can make it very difficult to reject the null -remember that the whole point of ANOVA is that running a bunch of t-tests would lead to a higher type 1 error rate -each of these post-hoc tests is better or worse at correcting the problem of an inflated type 1 error rate

two-way ANOVAs answer three questions

1. is there an effect of the first IV? (main effect) 2. is there an effect of the second IV? (main effect) 3. is there an interaction between the IVs?

summary of types of tests and when to use them

1. t-test: -nominal IV, 2 levels -scale DV -independent or paired samples 2. one-way ANOVA -nominal IV, 3+ levels -scale DV -between-groups or repeated-measures 3. Factorial ANOVA -2+ nominal IV, each with 2+ levels -scale DV -between-groups, repeated-measures, or mixed design

one-way repeated-measures ANOVA (aka a within-groups ANOVA)

-a hypothesis test used when you have: -one nominal IV with at least 3 levels -a scale DV -within-subject design (same people in each group, this is the distinguishing factor) -also called a one-way within-groups ANOVA -a major advantage over one-way between-groups ANOVA: repeated-measures allows us to account for one more source of variance: variance due to participants (subjects)

ANOVA

-a hypothesis test used when you have: 1. a scale DV 2. one categorical IV with at least 3 levels (could be a between-groups design or a within-groups design) -helps to control the type 1 error rate by allowing us to do just one overall test (it also helps control the type 1 error rate in another way, by making our comparisons more conservative) -ANOVA does NOT work with a nominal or ordinal DV -it's very similar to a t-test except there are more than two groups

p-value

-a p-value is the probability of the obtained statistical test value (or one more extreme) when the null hypothesis is true (when p is lower than alpha, reject the null) -low probabilities indicate the null is probably not true -p < alpha is shorthand for whether we reject the null hypothesis -p < alpha of 0.05 means that there is less than a 5% probability of getting the results obtained (or results more extreme) if the null hypothesis is true (occurring fewer than 5 times in 100) -thus, p < .05 is synonymous with "rejected the null hypothesis" -p > .05 means that there's greater than a 5% probability of getting the results obtained (or results more extreme) if the null hypothesis is true (occurring more than 5 times in 100) -thus, p > .05 is synonymous with "retained the null hypothesis" (failed to reject the null) -with stats programs, you can get exact p-values
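
A minimal sketch of what an exact p-value is, using the F statistic reported earlier in this set (F(3, 67) = 4.33) as the example; scipy is an assumed tool here:

```python
from scipy import stats

# Probability of an F at least this large if the null hypothesis were true
f_stat, df_between, df_within = 4.33, 3, 67
p_value = stats.f.sf(f_stat, df_between, df_within)   # upper-tail (survival) probability
print(round(p_value, 4))   # comes out below .05, so we would reject the null at alpha = .05
```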

one-way repeated-measures ANOVA: SSbetween

-about the amount of variability between conditions -how much of an effect there is for the IV -measured as variability of conditions around the grand mean -same for both types of ANOVA

within-group variability (error variability)

-the amount of variability there is around each individual condition mean -if there is not a lot of variability around each individual condition mean, SSwithin is small -greater variability around each condition mean increases SSwithin

between-group variability

-the amount of variability there is around the overall mean (the grand mean) -deviation = sample mean - grand mean -the more the group means deviate from the grand mean, the larger the between-groups variance

one-way repeated-measures ANOVA: SSwithin

-an index of the variability around each condition mean (an index of the variability of individual scores within a condition around that condition's mean) -ex. how far does each person's score in the 2-week condition deviate from the average in the 2-week condition -but, in a repeated-measures design, subject variability is already accounted for, so that part is removed from this SS -rearrange the SStotal formula to solve for SSwithin

one-way repeated-measures ANOVA: SSsubjects

-an index of the variability of each subject's overall performance around the grand mean (if each person's overall average is closer to the grand mean, it would be smaller) -the new one that gives us additional leverage in repeated-measures

ANOVA key points

-are the 3(+) sample means more spread out than we'd expect due to chance (under the null)? -is there more variance between the different groups than within the groups? -F= (between-groups variance / within-groups variance) -null hypothesis: all means are equal: mean1=mean2=mean3=etc. -alternative hypothesis: at least one mean differs from the others -another way to ask the question is if there is more variance between the different groups than within the groups
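
A minimal sketch of a one-way between-groups ANOVA, using made-up wait times for three hospitals (echoing the hospital wait-time example elsewhere in this set); scipy is an assumed tool:

```python
from scipy.stats import f_oneway

# Made-up wait times (in minutes) for three hospitals
hospital_a = [12, 15, 14, 16, 13]
hospital_b = [22, 25, 24, 23, 26]
hospital_c = [18, 17, 19, 20, 18]

f_stat, p_value = f_oneway(hospital_a, hospital_b, hospital_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# If p < alpha (.05), at least one hospital mean differs; follow up with post-hoc tests
```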

rejection region

-the area of a sampling distribution that corresponds to test statistic values that lead to rejection of the null hypothesis

two-way between-groups ANOVA: fill out the source table and determine Fcrit

-because the df for between and within are the same for all three tests, we can use the same Fcrit -if we had a different dfBetween for factor A, factor B, and the interaction, we would need to calculate Fcrit for each one

two-way between-groups ANOVA

-both IVs are manipulated between-groups

two-way repeated-measures ANOVA

-both IVs are manipulated within-groups

similarity among statistical tests

-differences (measure of differences btwn conditions) /variability (error, or measure of variability within conditions) -t-stat: diff btwn means/standard error -Fstat: between-groups variance / within-groups variance -if the ratio is larger, there is a greater likelihood that there truly is an effect/difference -side note: "groups" means "treatments/conditions" -a small numerator of the F-stat means the variability between groups is not very large

independent-samples t-test

-different people measured on two levels of an IV

effect size

-effect size is a standardized measure of the magnitude of the effect the IV has on the DV -you can calculate a d value for any comparison between 2 groups (with t-tests, we used Cohen's d)

what is the additional source of variance in a repeated-measures ANOVA?

-subjects

one-way repeated-measures ANOVA: effect size

-effect size is a standardized measure of the magnitude of the effect the IV has on the DV (asks how much of the variability in the DV is due to the manipulation) -there are a variety of effect size measures for ANOVA: eta squared is often used in software programs, R squared is easy to calculate by hand, and there is also omega squared -ex. R² = .71 means about 71% of the variability in the outcome is due to our manipulation of the IV: this is a huge effect (think about all of the things that could have influenced memory) -however, it is a bit misleading in this case: eta squared gives you something similar, but omega squared is much, much lower -in this case, omega squared is a much more conservative (and arguably more accurate) measure of effect size -eta squared and R squared greatly overestimate the true effect with small sample sizes -omega squared has something in its calculation that helps adjust for sample size -with a large sample size, all 3 perform about the same
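
A sketch of those effect-size calculations from a source table; the SS and df values are made up, and the omega-squared formula shown is one common form for a one-way between-groups ANOVA (an assumption, not something stated in this set):

```python
# Effect sizes from a one-way ANOVA source table (hypothetical SS and df values)
ss_between, ss_within = 14.0, 6.0
df_between, df_within = 2, 6

ss_total = ss_between + ss_within
ms_within = ss_within / df_within

eta_squared = ss_between / ss_total   # the same quantity is often reported as R^2
omega_squared = (ss_between - df_between * ms_within) / (ss_total + ms_within)

print(round(eta_squared, 2), round(omega_squared, 2))
# omega squared comes out smaller, i.e. it is the more conservative estimate
```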

synonyms for factorial ANOVA

-factor = IV -"levels" of an IV = "conditions" of an IV

partitioning of variance

-how variance is split in different approaches -think about all the variability that could happen on a DV 1. between-groups ANOVA -variance is split between MSbetween (the effect) and MSwithin (the error/residual; a side note is that we don't always know the cause of that variability) 2. repeated-measures ANOVA -some of the MSwithin is accounted for by variability due to subjects -this decreases MSwithin (by accounting for subject variance), decreasing the denominator and increasing the Fstat, which makes it more likely we reject the null -therefore, repeated-measures ANOVA often makes it easier to reject the null/find an effect

degrees of freedom

-how we identify the F distribution

one-way repeated-measures ANOVA: make a decision

-if Fstat > Fcrit, reject the null and conclude that the three samples do not come from populations with identical characteristics -F(dfbetween, dfwithin) = Fstat, p < .05

how an increase or decrease in df affects ability to reject null hypothesis in t-tests

-increasing sample size decreases standard error -if sample size is large, the sampling distribution approaches normal (central limit theorem) -the exact shape of a t distribution depends on the sample size: with smaller samples, a greater proportion of the distribution is contained in the tails (more of the distribution is in the tails, which makes it more difficult to reject the null) -as sample size gets larger, the CI gets narrower, the critical t value gets smaller, and the standard error gets smaller -t is strongly affected by N: larger N = larger t -statistical significance shifts with sample size (larger samples allow us to find even small effects, which can be both good and bad) -effect size does not change much with sample size (use it to help interpret the NHST decision)
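
A small sketch showing how the two-tailed critical t shrinks as df grows (alpha = .05); scipy is an assumed tool:

```python
from scipy import stats

# Two-tailed critical t at alpha = .05 for increasing degrees of freedom
for df in (5, 10, 30, 100, 1000):
    t_crit = stats.t.ppf(0.975, df)
    print(f"df = {df:4d}  critical t = {t_crit:.3f}")
# Values shrink toward the z cutoff of 1.96, so larger samples make it easier to reject the null
```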

effect size reviewed

-indicates the size of a difference (between two means) -unaffected (or mildly affected) by sample size (so it can be a more stable measurement) -a starting point for effect size is just the raw difference between means, but Cohen's d does better than that -Cohen's d is a commonly-used measure of effect size -effect size d = difference b/w means / SD (i.e., the difference expressed in standard deviation units) -can be an indicator of practical importance -does not change much with sample size
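
A minimal sketch of Cohen's d using a pooled standard deviation (one common way to compute d for two independent groups; the scores are made up):

```python
import numpy as np

# Cohen's d: difference between means expressed in pooled-SD units
group1 = np.array([5.0, 6.0, 7.0, 6.0, 5.0])
group2 = np.array([8.0, 9.0, 7.0, 9.0, 8.0])

n1, n2 = len(group1), len(group2)
pooled_sd = np.sqrt(((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1))
                    / (n1 + n2 - 2))
d = (group2.mean() - group1.mean()) / pooled_sd
print(round(d, 2))
```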

the interaction is the key in factorial ANOVA

-investigating an interaction (and what it tells us) is the primary reason for running a factorial experiment, so the interaction is the primary thing to focus on -but, we also get information about main effects

two-way between-groups ANOVA formulas for the Fstat

-main effects: 1. does factor A by itself have an overall effect on the DV? -F = MSfactorA / MSwithin 2. does factor B by itself have an overall effect on the DV? -F = MSfactorB / MSwithin -interaction: 3. does the effect of one factor vary depending on the other factor? (aka do the two IVs combined have an effect on the DV that we wouldn't see if we were just looking at each factor by itself?) -F = MSfactorA×factorB / MSwithin -note that A×B does not mean "A multiplied by B"; it is just a label for the interaction term -recall that an interaction is unique to a factorial design
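
A hedged sketch of getting all three F ratios at once; statsmodels is an assumed tool, and the variable names and scores are hypothetical (a 2 × 2 design with 2 observations per cell):

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical 2 x 2 between-groups data: training (yes/no) x test format (recall/recog)
data = pd.DataFrame({
    "training": ["yes"] * 4 + ["no"] * 4,
    "format":   ["recall", "recall", "recog", "recog"] * 2,
    "score":    [8, 9, 6, 7, 5, 6, 6, 5],
})

# One F per main effect plus one for the interaction, each tested against MSwithin
model = ols("score ~ C(training) * C(format)", data=data).fit()
print(anova_lm(model, typ=2))
```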

one-way repeated-measures ANOVA: null and alternative hypothesis

-null: no differences between population means: mean1 = mean2 = mean3 -alternative: at least one population mean is different from at least one other population mean

one-way between-groups ANOVA: set the null and alternative hypothesis

-null: no differences between population means (mean1 = mean2 = mean3) -alternative: at least one population mean is different from the mean of all of them combined (the grand mean) -don't use symbols to express the alternative -the alternative basically means "at least one mean is different from the other means," so it does not mean that mean1 ≠ mean2 ≠ mean3 (b/c just one mean needs to be different from the grand mean) -tentatively assume the null is true

confidence intervals

-provide a range of plausible values for the population parameter -confidence intervals are a type of interval estimate -they include the population parameter a certain percentage of the time (90%, 95%, 99%) -compared to a point estimate, a confidence interval allows you to infer more about the mean of an unmeasured population -think about a CI as a margin of error around a point estimate (margin of error = critical value * standard error) -CIs give us a sense of the precision of our point estimate, so narrower is better! -formula: point estimate ± critical value * standard error
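
A minimal sketch of the point estimate ± critical value × standard error formula for a 95% CI on a mean (the scores are made up, and scipy is an assumed tool):

```python
import numpy as np
from scipy import stats

# 95% CI for a population mean: point estimate +/- critical value * standard error
sample = np.array([12.0, 15.0, 14.0, 16.0, 13.0, 15.0])    # hypothetical scores
mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))              # estimated standard error
t_crit = stats.t.ppf(0.975, df=len(sample) - 1)             # two-tailed, alpha = .05

margin_of_error = t_crit * se
print(mean - margin_of_error, mean + margin_of_error)
```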

review of partitioning of variance for one-way between-groups ANOVA

-recall that for one-way between-groups ANOVA, variance is split among variance-between (due to the manipulation) and variance-within (whatever variance is left due to natural differences)

Bonferroni (manual) correction

-run separate t-tests for each comparison -to get the new alpha, divide alpha by the number of tests you are running (ex. .05/3) -the p-value has to be less than the new alpha to reject the null -adjusts for otherwise inflated type 1 error rate -in this example, none of the p-values are less than the new alpha, so what happened?: the Bonferroni correction is a strict (very conservative) correction, so it will sometimes miss an effect that is truly there -in other words, it lowers power to detect an effect that is really there
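
A sketch of the manual correction with hypothetical p-values (none of them beat the adjusted alpha, mirroring the situation described above):

```python
# Bonferroni correction for three follow-up comparisons (hypothetical p-values)
alpha = 0.05
p_values = {"A vs B": 0.030, "A vs C": 0.020, "B vs C": 0.400}

adjusted_alpha = alpha / len(p_values)   # .05 / 3, roughly .0167
for comparison, p in p_values.items():
    decision = "reject null" if p < adjusted_alpha else "fail to reject"
    print(f"{comparison}: p = {p:.3f} -> {decision}")
# None of these p-values beats .0167, illustrating how conservative the correction can be
```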

paired-samples t-test

-same people measured on two levels of an IV (or natural pairs or matched pairs designs)

one-sample t-test

-sample mean compared to known population -ex. career satisfaction for sample of army nurses vs. known population of civilian nurses

main effect

-the effect of one IV on the DV -ex. the effect of mnemonic training on test performance vs. the effect of using a mnemonic on test performance -two IVs = two main effects Steps for the table: 1. find the marginal means for the variable on the left side by averaging across the rows, and then see if there is any difference between the two 2. find the marginal means for the variable on the top by averaging down the columns, and then see if there is any difference between the two Steps for the graph: 1. for the variable on the left, average the two bars associated with one level of it (the bars on the left) and average the two bars associated with the other level (the bars on the right), and then compare the means 2. for the variable on the top, average the two (white) bars representing one level of the IV and average the two (grey) bars representing the other level of the IV, and then compare the means to see if there's a difference -note: this may depend; this is if the left variable is on the x-axis.

simple effect

-the effect of one IV on the DV, at a specific level of the other IV -ex. "the effect of the top IV, at the upper level of the left IV" or "the effect of the top IV, at the lower level of the left IV" -you will be comparing the means inside the table Steps for the table: 1. for the first level of the left IV, compare the means in the top row to see if there is a difference 2. for the second level of the left IV, compare the means in the bottom row to see if there is a difference Steps for the graph: 1. for the first level of the left IV, compare the two bars on the left for any difference 2. for the second level of the left IV, compare the two bars on the right for any difference -important note: this may be flipped depending on which variable they put on the x-axis: if they put the left variable on the x-axis, then the above is correct. however, if they put the top variable on the x-axis, then you would be comparing one-colored bars to the other-colored bars

assumption of homogeneity

-the inferential tests we have covered are all called parametric tests -they work best when certain assumptions about underlying populations are met -one key assumption in ANOVA is that variances are equal in the underlying populations being sampled from -homogeneity of variances (aka homoscedasticity): samples come from populations with similar variances -general rule: if largest variance is more than twice the smallest variance, the assumption of homogeneity is violated -if the assumption is met, the ANOVA works as intended and we can have confidence in the conclusion at the end -if the assumption is violated: 1. if the sample size is large, then we can run ANOVA without major concern 2. BUT if the sample size is small, then the results of the ANOVA might be wrong

one-way between-groups ANOVA: multiple comparison tests (post-hoc tests)

-the overall (omnibus) ANOVA does not tell you where the difference is (it just says at least one mean is different from the grand mean) -to see where the difference is, you need to do follow-up tests called multiple comparison tests -you ONLY do multiple comparison tests, aka post-hoc tests, if the overall F-test is significant -here, we compare A vs. B, A vs. C, B vs. C -you will get a t-stat for each comparison -take the original alpha we chose and compare it to the p-value for each one: if the p-value < alpha, there is a statistically significant effect -there are several options for post-hoc tests
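
If the omnibus F is significant, a Tukey follow-up might look like this sketch; statsmodels' pairwise_tukeyhsd is an assumed tool (the set mentions JASP, not Python), and the scores are made up:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical scores for three conditions A, B, and C
scores = np.array([4, 5, 6, 7, 8, 9, 5, 6, 7], dtype=float)
groups = np.array(["A"] * 3 + ["B"] * 3 + ["C"] * 3)

# Tukey's HSD compares every pair (A vs B, A vs C, B vs C) while holding the
# family-wise type 1 error rate at alpha
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())
```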

why do we use N-1 when we calculate an estimate of population variance?

-dividing by n - 1 corrects the bias because we are using the sample mean, instead of the population mean, to calculate the variance

different "angles" for interactions

-there are two different ways to express an interaction pattern from any 2 by 2 factorial ANOVA 1. does the effect of IV-A depend on the different levels of IV-B? 2. does the effect of IV-B depend on the different levels of IV-A? -it depends on how you phrase the research question -this does NOT mean there are two separate interactions -rather, these are two different ways to characterize an interaction

which of the following are true of a repeated-measures ANOVA?

-this analysis is often used when participants are measured multiple times -the F ratio denominator gets smaller because of the calculations of the repeated-measures ANOVA -compared to a between-groups ANOVA, using a repeated-measures ANOVA typically increases our likelihood of rejecting the null hypothesis -each of the main effects and interactions is evaluated independently -for a two-way ANOVA, you can use R^2 to calculate effect size of the interaction -in a two-way factorial ANOVA, between-groups variance is divided into one interaction and two main effects

different patterns that could comprise an interaction

-three patterns of interactions are possible 1. an effect is larger at one level than another, but the effects are in the same direction -ex. the mnemonic training example in the first way -we called this a quantitative interaction 2. an effect is present at one level, but not the other (one of the simple effects is null) -ex. the mnemonic training example in the second way -so one simple effect is null 3. an effect is reversed at one level compared to the other (the simple effects are in opposite directions) -ex. temperature has opposite effects on comfort level depending on one's home state -also called a qualitative or cross-over interaction

practical use of power

-to determine the number of participants (subjects) required to detect an effect

two-way between-groups ANOVA: identify populations

-two IVs with 2 levels each = 4 populations -comparison distribution: will have 3 F distributions -hypothesis test: two-way between-groups ANOVA

F-table

-use the ANOVA source table to organize your calculations of df -numerator df = between -denominator df = within -light vs. dark values are for different levels of alpha -go to the nearest value (go to the one that is lower b/c that will make the estimate more conservative, so if it's 59, go to 55) -this value gives you the cutoff value

one-way repeated-measures ANOVA: multiple-comparison tests: 2 approaches

-used to figure out where the significant differences are 1. a priori (planned ahead of time) 2. post-hoc (after the fact)

parameters

-values calculated from a population, such as population mean

What does MS reflect, in ANOVA calculations?

-variance

two-way between-groups ANOVA: partitioning of variance

-we take the between-groups variance and split it out for the 3 different answers we get: so we have an MSbetween for one IV, the other IV, and the interaction -the three different MSbetween values are used for three different F ratios -we'll use the same MSwithin for all three F ratios

various facts

-when comparing three or more groups we use ANOVA because conducting multiple t-tests would result in an increased likelihood of a type 1 error, if the null hypothesis is true -post hoc testing is a statistical procedure frequently carried out after we reject the null hypothesis in an analysis of variance. It allows us to make multiple corrected comparisons among means -we look at the marginal means to examine the main effects in a factorial ANOVA

general steps of NHST with ANOVA

1. Set null and alternative hypotheses: the populations you are comparing may be exactly the same (null hypothesis, H0), or one or more of them may have a different mean (alternative hypothesis, H1). 2. Obtain data from samples to represent the populations you are interested in. 3. Tentatively assume that the null hypothesis is correct. If the populations are all the same, any differences among sample means are the result of chance. 4. Perform operations on the data using the procedures of ANOVA until you have calculated an F value. 5. Choose a sampling distribution that shows the probability of the F value when H0 is true. 6. Compare your calculated F value to the critical value. 7. Come to a conclusion about H0. 8. Tell the story of what the data show. If there are three or more treatments and you reject H0, a conclusion about the relationships among the specific groups requires further data analysis.

