Module 4 Review
Non-Parametric Tests for ANOVAs
Kruskal Wallis Test Dunn's Test Friedman Test
Why can't we do multiple t-tests instead of anova?
Multiple comparisons increase the type I error rate. Also, anova allows a single test to compare any number of groups or treatments
F statistic
Test stat for ANOVAs. Compares avg variability between groups to avg variability w/in groups. Does NOT have a normal distribution; not centered on zero. Compare test stat to the critical value found on an F distribution table, with between-groups df on top, w/in-group df on the side, and alpha.
F = variance (mean difs) between treatments / variance (mean difs) expected if there is no treatment effect
F = MSbetween/MSwithin(error) = ratio of the variance among means to the avg variance w/in groups
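A minimal sketch of the F = MSbetween/MSwithin ratio computed by hand; the group data are made-up illustrative numbers.

```python
import numpy as np

# three illustrative groups of observations
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 8.0, 9.0]),
          np.array([1.0, 2.0, 3.0])]

grand_mean = np.mean(np.concatenate(groups))
k = len(groups)                        # number of groups
N = sum(len(g) for g in groups)        # total observations

# between-groups SS: variation of group means around the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# within-groups SS: variation of observations around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)      # df between = k - 1
ms_within = ss_within / (N - k)        # df within = N - k
F = ms_between / ms_within
print(F)
```

Compare F to the critical value from the F table with (k − 1, N − k) degrees of freedom.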
Post Hoc Tests for ANOVA
Tukey Kramer Test Dunnett's Test Bonferroni Correction Holm Test Scheffe Test Duncan's Test Used after ANOVA H0 is rejected- we know there is a difference, but don't know where and post hocs can tell us All about power vs error All rely on normal distributions and parametric assumptions
Types of Two Way ANOVA
Two Way ANOVA, independent (both nominal variables independent) Two Way ANOVA, dependent (both nominal variables dependent) Two Way ANOVA, mixed (one nominal variable independent, one dependent)
Treatment Sum of Squares
compares each treatment/factor mean to the overall mean
T test test stat vs anova test stat
An independent samples t test computes its test stat by dividing the difference between the sample means by the standard error of the difference; the standard error of the difference is an estimate of the variability within each group (assumed to be the same)-- so the difference/variability between samples is compared to the variability within samples. Anova is the same, except that variance is used to measure variability instead of standard deviations
Interaction vs Main effect
main effect- the overall effect of one nominal variable, averaged across the levels of the other (the factor that caused the biggest difference on its own) interaction- when the two groups are affected differently by the other nominal variable (the effect of one factor depends on the level of the other)
Contrasts
planned/a priori comparisons: specific comparisons between sets of means suggested before data collection (opp of post hoc). Orthogonal contrasts may be used to partition the treatment SS into separate components according to # of dfs. H0 = the mean for group 1 equals the avg of all the other groups' means (each mean gets a contrast coefficient; each contrast has 1 df)- calculation of contrast SS is complex. Polynomial contrasts- if treatment levels have a natural order and are equally spaced, can test for a trend in treatment means w orthogonal contrasts.
Assumes: observations are independent and randomly selected from normal pops w equal variances (does not have to have equal sample sizes)
Between group variation
reflects the difference in means of the groups- on graph, is distance between peaks of the groups
Total Sum of Squares
sum of squared deviations of every observation from the grand mean, across all treatments/factors
Variance
sum of squares divided by the number of observations (sample variance divides by n - 1 instead)- computes the average squared deviation from the mean. Square root of variance = standard deviation. Tells how much each observation differs from the mean; observations more spread out = farther from mean = larger variance
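A minimal worked example of the card above (population form, dividing by n); the observations are made up.

```python
import math

obs = [2.0, 4.0, 6.0, 8.0]
mean = sum(obs) / len(obs)                  # 5.0
ss = sum((x - mean) ** 2 for x in obs)      # sum of squared deviations
variance = ss / len(obs)                    # average squared deviation
std_dev = math.sqrt(variance)               # square root of variance
print(variance, std_dev)
```

Sample variance would use `ss / (len(obs) - 1)` instead.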
Total variation
the total variation across all the data sets- on graph, the entire range from the lowest end of any group to the highest (x axis)
Degrees of freedom
total: # of observations - 1 among/between groups: # of treatments/groups - 1 within groups: # of observations - # of treatments/groups
Factorial ANOVA- 2 independent variables
2 factors w at least 2 levels each; levels are independent. Do 7 steps of anova as usual. 3 H0's (see 2 way anova).
dfa = a - 1; dfb = b - 1; dfaxb = (a - 1)(b - 1); dferror = N - ab; dftotal = N - 1
Table rows: variable a, variable b, interaction, error, total (columns: SS, df, MS, F)
Should get 3 F stats (one for each hypothesis)
Factorial ANOVA- 2 dependent variables
2 factors with at least 2 levels each; both factors dependent. Like an extension of repeated measures ANOVA. 3 H0's (see 2 way anova). Do 7 steps of anova as usual.
dfs- have 4 different error terms because all factors are dependent, so have 8 dfs:
dfa = a - 1; dfaerror = (a - 1)(n - 1); dfb = b - 1; dfberror = (b - 1)(n - 1); dfaxb = (a - 1)(b - 1); dfaxberror = (a - 1)(b - 1)(n - 1); dferror = n - 1 (consistent subject variability, overall error); dftotal = N - 1
Table rows: variable a, error a, variable b, error b, interaction axb, error axb, error overall, total (columns: SS, df, MS, F)
Will calculate 3 F stats- one for each H0
Factorial ANOVA- 2 mixed factors
2 factors, at least 2 levels each; one factor independent, one dependent. Like a combo of one way and repeated measures anovas. Do 7 steps of anova as usual. 3 H0s (see 2 way anova).
dfa = a - 1; dfaerror = a(n - 1); dfb = b - 1; dfaxb = (a - 1)(b - 1); dferror bxs/a = a(b - 1)(n - 1); dftotal = N - 1
Table rows: variable a, error a, variable b, interaction axb, error bxs/a, total (columns: SS, df, MS, F)
*independent terms have error terms *dependent error term accounts for constant variability between subjects not accounted for w independent terms
Calculate 3 F stats, one for each H0
ANOVA
Analysis of Variance tests. Allows us to evaluate multiple layers and hierarchies of data sets; lets us analyze differences between group means as well as deeper relationships, such as variation among and within groups. Observed variance of a variable is partitioned into components attributable to different sources of variation. Very well suited for biomedical research because it can evaluate interaction between variables.
Tests if means of groups are significantly different; behaves as a t test on 3+ means (groups/variables). Compares means by detecting differences in variance between the 3+ groups-- does not compare means directly; uses variance as a tool to evaluate structure and similarity between population samples.
Assumes samples are randomly selected from populations, independent from each other, and the response variable in each group is normally distributed. Parametric!
Looks at between group variation, within group variation, and total variation. Uses the F statistic. Tells us if there is a difference, but not where the difference is; need post hoc testing for this!
Many types of anovas- the model can be adapted to a multitude of dif hierarchies or data set architectures.
*be cautious using data from multiple observations (time series etc)- may violate the independence assumption, though models do exist for this
*means of distributions can vary, but they must have the same variances; there are non parametric methods if not
*sum of squared deviations from the mean = major player
Steps: 1- define H0 and Ha 2- state alpha 3- calculate dfs 4- state decision rule 5- calculate test stat 6- state results 7- state conclusion
Each level of an anova "explains" a portion of the variance, but this does not imply a cause-effect relationship
Two Way ANOVA
Factorial ANOVA. Tests means between different nominal variables; compares mean differences between groups that have been split by 2 independent variables/factors. Use if each value of one nominal variable is found in combination with each value of the other nominal variable.
Can be used to determine if there is interaction between the two independent variables on the dependent variable (the interaction term tells if the effect of one independent variable on the dependent variable is the same for all values of your other independent variable).
Often done w replication (more than one observation for each combo of nominal variables). Can do without replication, but you lose info and must assume there is no interaction.
1 measurement variable; 2 nominal variables (factors/main effects- found in all possible combos). Variables have levels; compare multiple levels of each factor. Partition variance to find out which factor is contributing most to the variance.
Does 3 distinct tests: main effect of factor a, main effect of factor b, and interaction of a and b (each test has a separate F stat and H0; results of each test are independent of each other).
Assumes: independence, normality, homoscedasticity (within each group)
H0's (3): 1 = the means of the observations grouped by one factor are the same (the means of the measurement variable are equal for dif values of the 1st nominal variable) 2 = the means of observations grouped by the other factor are the same (the means are equal for dif values of the 2nd nominal variable) 3 = there are no interactions between the two factors/the effects of one nominal variable do not depend on the value of the other nominal variable (cannot test this without replication)
How: with replication- if sample sizes are equal, calculate MS for each factor, for the interaction, and for the variation w/in each combo of factors, then calculate the F stats (*if unbalanced design, much more complex). Without replication- calculate MS for each factor and total MS (by considering all observations as a single group), then calculate remainder/error/discrepancy MS by subtracting the 2 main effect MSs from the total MS; then calculate F stat (MSmain effect/MSremainder).
*if there IS interaction, look at each factor separately using one way ANOVA to see where the interaction is coming from
*each grouping extends across the other grouping- if not, use nested anova
Parametric. Graph on a 3D graph w measurement on Y, one nominal on X and one nominal on Z, OR do a bar graph with bars clustered by one of the nominal variables and the other identified using color/pattern (group by the less interesting factor)
*if have only 2 values of the interesting nominal variable and no replication, can use a paired t test (mathematically identical)
*if have one measurement variable and 2 nominals w one nominal nested under the other, use nested anova
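A minimal sketch of the two way (factorial) partition for a balanced 2x2 design with replication, producing the 3 F stats (factor A, factor B, interaction); the cell data are made-up numbers.

```python
import numpy as np

# cells[i][j] holds the replicate observations for level i of factor A,
# level j of factor B (balanced design: n replicates per cell)
cells = np.array([[[3.0, 5.0], [9.0, 11.0]],
                  [[4.0, 6.0], [2.0, 4.0]]])
a, b, n = cells.shape
N = a * b * n
grand = cells.mean()

ss_total = ((cells - grand) ** 2).sum()
# main effects: deviations of factor-level means from the grand mean
ss_a = b * n * ((cells.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_b = a * n * ((cells.mean(axis=(0, 2)) - grand) ** 2).sum()
# cell (combo) means capture main effects plus interaction
ss_cells = n * ((cells.mean(axis=2) - grand) ** 2).sum()
ss_ab = ss_cells - ss_a - ss_b          # interaction SS
ss_error = ss_total - ss_cells          # within-cell (error) SS

ms_error = ss_error / (N - a * b)       # dferror = N - ab
F_a = (ss_a / (a - 1)) / ms_error       # dfa = a - 1
F_b = (ss_b / (b - 1)) / ms_error       # dfb = b - 1
F_ab = (ss_ab / ((a - 1) * (b - 1))) / ms_error
print(F_a, F_b, F_ab)
```

Each F is compared to its own critical value, so each of the 3 H0s gets its own decision.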
Model I ANOVA
Fixed effect model Type of one way anova Different groups are interesting and you want to know which are different from each other ex) comparing males and females
Under heterogeneity of means
For one way anova, rejected H0: the within group mean square is still an estimate of the within group variance, but the among group mean square estimates the within group variance plus the group sample size times the added variance among groups. Therefore, subtracting the within group mean square from the among group mean square and dividing this dif by the effective sample size gives an estimate of the added variance component among groups:
among group variance = (MSamong - MSwithin)/N0
N0 = (1/(a - 1)) x (sum(ni) - sum(ni^2)/sum(ni))
ni = sample size of each group; a = number of groups (for a balanced design, N0 equals the common sample size n)
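A minimal sketch of the N0 and added-variance-component formulas above; the mean squares and sample sizes are illustrative, made-up values.

```python
import numpy as np

ms_among, ms_within = 27.0, 1.0      # illustrative mean squares from an ANOVA table
ni = np.array([3, 3, 3])             # sample size of each group
a = len(ni)                          # number of groups

# effective sample size N0 (equals n when the design is balanced)
n0 = (1.0 / (a - 1)) * (ni.sum() - (ni ** 2).sum() / ni.sum())
# added variance component among groups
var_among = (ms_among - ms_within) / n0
print(n0, var_among)
```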
Under homogeneity of means
For one way anova under H0 the among group mean square and within group mean square are both estimates of the within group parametric variance
Nested ANOVA
Hierarchical ANOVA. Tests if there is significant variation in means among groups, among subgroups within groups, etc (depending on # of levels).
1 measurement variable; 2+ nominal variables. Nominal variables are nested: each value of one nominal variable (subgroup) is found in combination with only ONE value of the higher level nominal variable (group). Allows us to account for replication in data.
*all lower level groupings must be random effects (model II) variables- they are random samples of a larger set of possible subgroups
An extension of one way anova where each group is divided into subgroups.
*if subgroups are distinctions with some interest, do NOT use nested anova
*in nested anova, often only interested in the H0 about group means; may not care if subgroups are significantly dif, and therefore may want to try ignoring subgroups and doing one way anova- but DO NOT, because this violates the assumption of independence and gives pseudoreplication. Instead, take the avg of each subgroup and analyze the averages using one way anova (if you do this, you cannot compare variation among subgroups to variation within subgroups- among-subgroup variation is usually not biologically interesting anyways, but it is useful for optimal allocation of resources)
Assumes: independence, normality, homoscedasticity (within each group)
H0 = the means of the groups/subgroups are the same (repeated for each level!)
Calculate SS's and MS's; use these to calculate an F stat at each level (F = MSamong/MSwithin)
*Helpful test for designing future experiments due to the partition of variance into levels (tells if more variation is among or within subjects- so tells if you should do more measurements per subject, or more subjects and fewer measurements, in the next experiment). Often used because higher level groups are more expensive, so want an optimal balance of expensive and cheap- but if higher level groups are cheap, don't do nested; can just do two way ANOVA on higher level groups
Kruskal Wallis Test
Non parametric alt for One Way ANOVA. Tests if mean ranks are the same in all groups. Substitute ranks in the overall data set for each measurement value (smallest value = 1, etc; ties get the avg rank). Calculate the sum of ranks for each group, then calculate the test stat. The test stat essentially represents the variance of ranks among groups.
1 measurement variable (ordinal or made to be ordinal); 1 nominal variable
Assumes: independence (within each sample AND mutual independence among the various samples), homoscedasticity (within each group)- dif groups have the same shape of distribution, measurement variable is at least ordinal (data can be ranked)
H0 = the mean ranks of the groups are the same/the samples come from pops with the same distributions (some say samples come from pops w equal medians). Dif H0 than anova!!
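A minimal sketch using scipy's implementation; the three groups are made-up measurements.

```python
from scipy import stats

# three illustrative groups (unequal sizes are fine)
g1 = [2.9, 3.0, 2.5, 2.6, 3.2]
g2 = [3.8, 2.7, 4.0, 2.4]
g3 = [2.8, 3.4, 3.7, 2.2, 2.0]

# kruskal ranks the pooled data internally, then compares rank sums
H, p = stats.kruskal(g1, g2, g3)
print(H, p)   # reject H0 only if p < alpha
```

If H0 is rejected, follow up with Dunn's test to see which groups differ.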
Friedman Test
Non parametric alt for Repeated Measures ANOVA. Used for analyzing randomized complete block designs. Extension of the sign test for when there are more than two treatments. Rank scores within each row, sum the ranks within each column, and use these sums to calculate the test stat.
1 measurement variable; 1 nominal variable with at least two values that are dependent
Assumes: at least 2 experimental treatments, rows are mutually independent (results w/in one block do not affect the results w/in other blocks), data can be meaningfully ranked
H0 = the treatments have identical effects
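A minimal sketch using scipy; rows are subjects/blocks, each argument is one treatment's scores across those blocks, and all the numbers are made up.

```python
from scipy import stats

# scores for 4 subjects (blocks) under each of 3 treatments;
# ranking happens within each subject's row internally
t1 = [7.0, 9.0, 5.0, 6.0]
t2 = [8.0, 10.0, 7.0, 9.0]
t3 = [6.0, 7.0, 4.0, 5.0]

stat, p = stats.friedmanchisquare(t1, t2, t3)
print(stat, p)   # reject H0 (identical treatment effects) if p < alpha
```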
Duncan's Test
Post Hoc test based on comparison of the range of a subset of sample means with a calculated least significant range; if the range of the subset exceeds the least significant range, the pop means are considered sig dif.
Sequential test: the subset with the largest range is compared first, and so on- once a range is not significant, stop comparing subsets. Therefore, to conduct pairwise comparisons, sample means must be ordered by size; the subset with all groups has the largest range (compares smallest w largest: range = dif of sample means), then sequentially compare the subsets of all other pairs until one is not significant.
Least significant range depends on the error degrees of freedom and the # of means in the subset:
Rp = rp x sqrt(s^2/n)
s^2 = error mean square; n = sample size for each treatment (if sample sizes are not equal, replace w the harmonic mean of the sample sizes)
Scheffe Test
Post Hoc test that compares different sets of groups. Allows more elaborate comparisons (complex contrasts) than other tests. Uses the F distribution; very conservative, so it has low power for simple pairwise comparisons
Dunnett's Test
Post Hoc test used in studies where control is run along with test groups to see which groups differ from the control group control is identified and multiple comparisons are made to it- can simultaneously compare all active treatments w a control, rather than doing multiple pairwise comparisons designed to hold family wise error rate at or below alpha more powerful than Bonferroni great for repeated measures anova
Tukey Kramer Test
Post hoc test for ANOVA; most commonly used after One Way ANOVA. Allows you to look at where the difference actually is after you reject the H0 that all means are equal. Compares dif pairs of means to see which are significantly dif from each other; considered a pairwise comparison.
A minimum significant difference is calculated for each pair of means- depends on the sample size in each group, the avg variation w/in groups, and the total # of groups. *for a balanced design, the minimum sig difs are all the same; unbalanced- smaller sample = bigger minimum sig dif. If the dif between a pair of means is greater than the minimum sig dif, they are sig dif.
Often displayed in a table with symbols (groups w the same symbol are not sig dif) or in a graph with lines for means (lines that overlap are not sig dif)
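A minimal sketch of the "minimum significant difference" for a balanced design, built from the studentized range distribution; the group count, sample size, and error mean square are made-up inputs you would take from your own ANOVA table.

```python
import numpy as np
from scipy.stats import studentized_range

k, n = 3, 10            # number of groups, observations per group (illustrative)
ms_within = 4.0         # error mean square from the ANOVA table (illustrative)
alpha = 0.05
df_error = k * (n - 1)  # within-groups df for a balanced design

# critical value of the studentized range for k means and df_error
q_crit = studentized_range.ppf(1 - alpha, k, df_error)
# pairs of means farther apart than this are significantly different
min_sig_dif = q_crit * np.sqrt(ms_within / n)
print(min_sig_dif)
```

For unbalanced designs the Tukey-Kramer adjustment replaces `n` with a per-pair term, giving each pair its own minimum significant difference.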
Bonferroni Correction
Post hoc test that can be used after ANOVA to determine where the difference actually is- do pairwise comparisons with t tests, then divide alpha by the number of tests (modifies the alpha value to account for multiple comparisons/control the family wise error rate). Good if you have a fairly small number of multiple comparisons and are looking for one or two significant ones- with a large number of comparisons it may lead to a high rate of false negatives (too conservative, not powerful)
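A minimal sketch of the correction itself; the p values stand in for pairwise t-test results and are made up.

```python
alpha = 0.05
p_values = [0.001, 0.02, 0.04]      # illustrative pairwise t-test p values
m = len(p_values)                   # number of comparisons

# Bonferroni: divide alpha by the number of tests
alpha_corrected = alpha / m
significant = [p < alpha_corrected for p in p_values]
print(alpha_corrected, significant)
```

Note how 0.02 and 0.04 would pass at the raw alpha but fail the corrected threshold, illustrating the conservativeness.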
Holm Test
Post hoc test that can be used for ANOVA or other tests Powerful and versatile Can be used to compare all pairs of means, each mean to a control mean, or pre-selected pairs of means Can only be used to decide which comparisons are significant or not- cannot compute confidence intervals *Holm-Sidak Test is slightly more powerful modified version of this test
Dunn's Test
Post hoc test. Non parametric multiple comparison test. Used if Kruskal Wallis H0 is rejected to further analyze data and see which groups differ
Model II ANOVA
Random effect Different groups are random samples from a larger set of groups; you're not interested in which groups are different from each other- only interested in how variation among groups compares to variation within groups Type of one way anova
ANOVA Table
Source of Variation | Sum of Squares | Degrees of Freedom | Mean Squares | F
Between | SS = sum nJ(xbarJ - xbar)^2 | df = k - 1 | MS = SS/df | F = MSb/MSw
Within | SS = sum (x - xbarJ)^2 | df = n - k | MS = SS/df |
Total | SS = sum (x - xbar)^2 | df = n - 1 | |
(k = # of groups, n = total # of observations, xbarJ = group mean, xbar = grand mean)
Stages of partitioning variance
Stage 1: identical to a one way ANOVA; examines the variance between groups and within groups Stage 2: identical to a two way ANOVA; examines the variability in each factor, and in the interaction between factors
One Way ANOVA
Tests if means of the measurement variable are the same for different groups. Used to determine if there are any statistically significant difs between the means of 3+ independent groups (set up from the independent samples t test).
Calculate the mean of the observations within each group, then compare the variance among these means to the average variance within each group. Calculate SS between, SS within, SS total, MS between, MS within. MS between/MS within = F stat; compare the F stat to the critical value found in the table using df among, df within, and alpha, and decide whether or not to reject H0. Comparing among group variance to within group variance.
1 measurement variable; 1 nominal variable (divides the measurements into two or more groups). One factor with at least 2 levels (levels are independent). You make multiple observations of the measurement variable for every value of the nominal variable. (the anova table for one way indicates among group and within group variance components, and the %s add to 100)
Assumes: independence, normality (w/in each group), homoscedasticity (within each group)- not too sensitive if the design is balanced (if not balanced, will give too many false +; should use Welch's ANOVA instead)
H0 = the means of the measurement variable are the same for the different categories of data/the among group variance is the same as the within group variance. Under H0, the weighted among group variance will be the same as the w/in group variance- as means get farther apart, variance among means increases.
df for among/between group variance: # of groups - 1
df for within group variance: total # of observations - # of groups
df total: # of observations - 1
*df between + df within = df total
Usually displayed w a bar graph- heights indicate means, usually with 95% confidence intervals or standard error bars
**If only have two groups, use a 2 sample t test instead (will get the same p value)
**If have 2+ nominal variables, use 2 way anova or nested anova
**If data SEVERELY violates assumptions, use Welch's anova or Kruskal Wallis
Power analysis: effect size- if mainly interested in the overall significance test, sample size is a function of the std dev of the group means; if mainly interested in comparisons of means, effect size can be the dif between the smallest and largest means you want to be significant
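The steps above can be sketched in one call with scipy; the three groups are illustrative numbers.

```python
from scipy import stats

# three illustrative independent groups
g1 = [1.0, 2.0, 3.0]
g2 = [2.0, 3.0, 4.0]
g3 = [7.0, 8.0, 9.0]

# f_oneway computes MSbetween/MSwithin and its p value directly
F, p = stats.f_oneway(g1, g2, g3)
print(F, p)   # reject H0 (all means equal) if p < alpha
```

A significant result only says some means differ; a post hoc test (e.g. Tukey-Kramer) is still needed to say which.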
Repeated Measures ANOVA
Tests if means of the measurement variable are different across values of the nominal variable. Often done w time series or treatment series data (as the independent variable)- an experimental design where an observation has been made on the same individual more than one time (usually at dif times or places, and done without replication). Almost identical to one way anova, but with one additional calculation to account for shared variability.
1 measurement variable; 1 nominal variable w at least two values that are dependent (dependent = share variability in some way) (often a hidden nominal variable as well)
Assumes: normality, homoscedasticity (within each group)
H0 = the means of the measurements are equal across values of the nominal variable/all repeats will have the same mean
Extension of the paired t test to account for more than a before-and-after event.
5 degrees of freedom: dfbetween = a - 1; dfwithin = N - a; dfsubjects = S - 1; dferror = dfwithin - dfsubjects; dftotal = N - 1 (N = total # of measurements, a = # of levels, S = # of subjects per level)
Table rows: between, within, subjects, error, total (columns: SS, df, MS, F)
**like the one way anova table, but within is split into subjects and error, because some variability is consistent within a subject and some is just due to error
Calculate values to fill in the table. MS between/MS error = F stat; compare the F stat to the critical value found in the table using the dfs and alpha, and decide whether or not to reject H0.
*usually one of the main effects isn't interesting and its H0 is not even reported
*dif from two way ANOVA, dependent, because it only has one nominal variable
*advantageous because of how variability is partitioned out- takes individual error out (within group variability expresses error variability in an independent (between subjects) anova; a repeated measures anova further partitions the error term, reducing its magnitude by moving consistent between-subject variability out of the error term, making the test more powerful)
Partitioning the Variance
The goal of ANOVA (especially two factor). Shows which factor is contributing most to the variance being observed: within or between groups? Factor A or factor B?
Done in two stages. First stage is identical to an independent samples ANOVA: calculate SStotal, SSbetween, and SSwithin. Second stage: partition SSbetween into separate components attributable to factor a, factor b, and the interaction of a and b.
Total variability splits into between-treatments and within-treatments variability; between-treatments variability splits further into factor a variability, factor b variability, and interaction axb variability.
Each component of variance is expressed as a percentage of the total variance (the anova table indicates among group and within group variance components, and the %s add to 100).
Very helpful in quantitative genetics, where the within family component may reflect environmental variation and the among family component reflects genetic variation. Very useful in designing experiments- test if more variance is among or within groups, and decide where to focus observations.
**expressing variance components this way ONLY applies to model II one way anova (NOT model I)
Welch's ANOVA
Use in place of ANOVA (or the Kruskal Wallis test) if data is heteroscedastic. Less powerful. Good for designs that are small and not balanced. Use the Games-Howell post hoc
AMOVA
analysis of molecular variance- a method for estimating pop differentiation directly from molecular data and testing hypotheses about such differentiation. Provides comparisons of gene frequencies and mutational differences between different genes
How to check normality
calculate residuals (dif between each observation and the mean of its group) and plot them on a histogram; if severely non-normal, try a data transformation- if that still doesn't work, use Kruskal Wallis. Residuals = differences between observed and fitted values (fitted value = group mean)
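A minimal sketch of the residual check above, pooling residuals across groups and adding a formal Shapiro-Wilk test alongside the histogram; the group data are made up.

```python
import numpy as np
from scipy import stats

# illustrative groups of observations
groups = [np.array([4.1, 5.0, 5.8, 4.6]),
          np.array([6.9, 8.2, 7.4, 7.7]),
          np.array([2.3, 1.8, 3.0, 2.6])]

# residual = observation minus its own group mean (the fitted value)
residuals = np.concatenate([g - g.mean() for g in groups])

# Shapiro-Wilk on the pooled residuals; a histogram of `residuals`
# would show the same thing visually
W, p = stats.shapiro(residuals)
print(W, p)   # small p (e.g. < 0.05) suggests the normality assumption is violated
```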
Optimal Allocation of Resources
can use an equation to estimate best number of observations per subgroup if have estimated relative cost of different parts of the experiment (in time or money) generally used for nested anova, but can be used for one way anova if groups are random effect (model II)
Error Sum of Squares
compares the observations within each treatment/factor to the mean of that treatment/factor
Randomized Block Design
experimental design analyzed by two way ANOVA usually done without replication, but can be done with one group within another group? often happens in agriculture: test dif treatments on small plots within larger blocks of land- because larger blocks may differ in some way that may affect measurement variable, data are analyzed w a two way ANOVA w block as one nominal variable and treatment as other treatments assigned at random
Mean Squares
sum of squares divided by degrees of freedom versions for between groups and within group MS within groups = the "error" mean square - error is just a result of real, biological variation among individuals If variation (MS) among groups is high relative to variation (MS) within groups, test stat will be large and therefore unlikely to occur by chance (MSamong/MSwithin = F)
Interaction Term
tells you if the effects of one factor depend on the other factor. If it is significant- do NOT test the effects of the individual factors (w the anova) because the effect is only true for some groups- you CAN look at the effects of each factor separately using a one way anova
Within group variation
the variance within each group- on graph, is the width of each group's distribution. Called the error term because it reflects variation among individuals within a group that is not explained by the treatment
Post hoc testing table
Purpose:
- Post hoc: exploratory- test the dif between all potential combos of groups
- Planned comparison: confirmation of a theory or hypothesis- you only test the groups you expect to be dif, in a specific direction
Risk of type I error:
- Post hoc: very low; unlikely to generate a false +
- Planned comparison: low; not quite as conservative as a post hoc, but conservative enough
Risk of type II error:
- Post hoc: high; not unlikely to generate a false -; lacks power
- Planned comparison: lower risk than a post hoc test; more sensitive/likely to uncover difs; has more power