statistical analysis of differences
nonparametric tests
1. a test that does not require the population's distribution to be characterized by certain parameters 2. requires rankings and/or frequencies
what is ANOVA?
1. analysis of variance 2. total variance = between-group variance + within-group variance 3. it is a test (F-test) that only tells you that at least TWO groups differ, but not which ones differ
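As an illustration (not from the slides), a minimal sketch of the omnibus F-test using scipy.stats on made-up scores for three groups:

```python
from scipy import stats

# made-up outcome scores for three independent groups (illustrative only)
group_a = [23, 25, 28, 30, 27]
group_b = [31, 35, 33, 36, 32]
group_c = [24, 26, 29, 25, 27]

# one-way ANOVA: F-test of the null hypothesis that all group means are equal
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# a significant result only says at least two groups differ, not which ones;
# post hoc pairwise comparisons are needed to locate the difference
```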
parametric tests
1. assumptions are made about the distribution of the population values 2. use means, SDs, etc 3. assume samples are from a normally-distributed population e.g., t-test, ANOVA
one-way ANOVA with repeated measures
1. assumptions are checked on the difference scores 2. if significant, run paired t tests as post hoc comparisons (with Bonferroni adjustment)
critical values of t-score
1. con: losing degrees of freedom raises the critical t score value, making significance harder to reach 2. pro: the smaller SD of the difference scores makes a big t score easier to get; the pros overpower the cons when using a paired t test (dependent)
interpretation of statistical results in factorial design?
1. is there a significant interaction? 2. if there is no interaction, check whether the main effect of each IV is significant - if yes, perform the proper post hoc pairwise comparisons for the significant IV by combining the data across all levels of the other IV 3. if there is a significant interaction, the main effect of any IV is not meaningful on its own - perform proper post hoc analyses to examine the "simple main effects" (the effect of IV1 at each level of IV2 separately), as in the sketch below
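A hedged sketch of how a 2-way (factorial) ANOVA with an interaction term could be run with statsmodels; the data frame, column names, and values are made up for illustration:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# made-up factorial data: one outcome, two IVs (clinic and intervention)
df = pd.DataFrame({
    "outcome":      [12, 14, 11, 15, 20, 22, 19, 21, 13, 12, 25, 24],
    "clinic":       ["A", "A", "A", "A", "B", "B", "B", "B", "A", "A", "B", "B"],
    "intervention": ["x", "x", "y", "y", "x", "x", "y", "y", "y", "x", "x", "y"],
})

# C(...) * C(...) fits both main effects and their interaction
model = ols("outcome ~ C(clinic) * C(intervention)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
# check the interaction row first; if it is significant,
# analyze simple main effects instead of the main-effect rows
```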
why are parametric tests preferred if possible?
1. more powerful if the normality assumption holds 2. results can be interpreted by referring back to the collected data (means, SDs)
nonparametric tests
1. no assumptions about the distribution of the population 2. use ranks/frequencies e.g., Mann-Whitney, Kruskal-Wallis, etc
2-way clinic-by-intervention
1. no interaction 2. IV1 main effect 3. IV2 main effect
requiring rankings and frequencies
1. nominal and ordinal 2. interval and ratio can be converted (via ranking)
two types of statistical tests
1. parametric 2. nonparametric
differences between two independent groups
1. parametric - independent t test 2. nonparametric - Mann-Whitney test and chi-square
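For illustration only, a small scipy.stats sketch contrasting the two options on made-up scores for two independent groups:

```python
from scipy import stats

# made-up outcome scores for two independent groups
clinic_1 = [45, 52, 48, 50, 47, 53]
clinic_2 = [58, 55, 60, 57, 54, 59]

# parametric: independent (two-sample) t test
t_stat, p_t = stats.ttest_ind(clinic_1, clinic_2)

# nonparametric: Mann-Whitney U test on the ranks
u_stat, p_u = stats.mannwhitneyu(clinic_1, clinic_2, alternative="two-sided")

print(f"independent t test: t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"Mann-Whitney:       U = {u_stat:.1f}, p = {p_u:.4f}")
```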
differences among 3 or more dependent groups
1. parametric - one-way ANOVA with repeated measures 2. nonparametric - Friedman's ANOVA or McNemar test
differences between two dependent groups
1. parametric - paired t test 2. nonparametric - Wilcoxon signed rank test and McNemar test
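Again for illustration, a scipy.stats sketch of both options on made-up before/after scores from the same subjects:

```python
from scipy import stats

# made-up scores for the same subjects before and after an intervention
before = [62, 70, 65, 72, 68, 74, 66]
after  = [67, 74, 66, 78, 70, 79, 69]

# parametric: paired (dependent) t test on the difference scores
t_stat, p_t = stats.ttest_rel(after, before)

# nonparametric: Wilcoxon signed rank test on the ranks of the differences
w_stat, p_w = stats.wilcoxon(after, before)

print(f"paired t test: t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"Wilcoxon:      W = {w_stat:.1f}, p = {p_w:.4f}")
```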
differences between 3 or more independent groups
1. parametric tests - one-way ANOVA 2. nonparametric tests - Kruskal-Wallis test (if significant, run Mann-Whitney post hoc with Bonferroni adjustment) and chi-square
level of measurement
1. parametric tests require data for which means and SD can be calculated 2. interval and ratio data (maybe ordinal but not recommended) - no nominal data
parametric test assumptions
1. sample data are normally-distributed 2. homogeneity of variance 3. level of measurement
friedman's ANOVA
1. the statistic is run on the ranks of the scores within each subject 2. if significant, run Wilcoxon post hoc tests (with Bonferroni adjustment)
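A rough sketch of this workflow with scipy.stats, using made-up repeated-measures data (three time points per subject):

```python
from itertools import combinations
from scipy import stats

# made-up scores for the same subjects under three repeated conditions
week_3  = [55, 60, 58, 62, 57, 61, 59]
week_6  = [60, 66, 61, 68, 62, 67, 63]
month_6 = [64, 70, 66, 73, 65, 71, 68]

# omnibus test: Friedman's ANOVA on the within-subject ranks
chi2, p = stats.friedmanchisquare(week_3, week_6, month_6)
print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.4f}")

# if significant, run Wilcoxon signed rank tests for each pair and
# compare each p-value to alpha / (number of comparisons) (Bonferroni)
conditions = {"week_3": week_3, "week_6": week_6, "month_6": month_6}
pairs = list(combinations(conditions, 2))
alpha_adj = 0.05 / len(pairs)
for a, b in pairs:
    _, p_pair = stats.wilcoxon(conditions[a], conditions[b])
    print(f"{a} vs {b}: p = {p_pair:.4f} (significant if < {alpha_adj:.4f})")
```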
if more than 1 IV....
2-way, 3-way ANOVA etc
mann-whitney test
for 2 independent samples
wilcoxon
for 2 paired samples
chi-square test of association
for nominal data: to compare the difference in # of subjects achieving a goal
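A short sketch, with a made-up 2x2 table of counts (subjects achieving the goal or not in two groups), of how the chi-square test of association could be run with scipy.stats:

```python
from scipy.stats import chi2_contingency

# made-up counts: rows = group 1 / group 2, columns = achieved goal / did not
table = [[30, 10],
         [18, 22]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```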
types of dependent data
identical twins, left vs right limb of same person, intervention 1 vs intervention 2 by the same person
a significant ANOVA?
indicates there are differences among the groups
shape in nonparametric tests
the distribution can be skewed, have unequal variances, etc
disadvantage of dependent samples
loss of degrees of freedom (DF), which leads to a higher statistical threshold (critical) score
slides 32-34
main effect and interaction
commonly used nonparametric tests
Mann-Whitney test, Kruskal-Wallis, Wilcoxon
kruskal-wallis
for more than two independent groups
if normality does not hold or for ordinal data...
nonparametric tests are more appropriate
one-way ANOVA
one-way means only 1 independent variable (IV); the IV can have many levels though
paired t test
or dependent t test 1. CHECK the difference score (one score minus the other), not the raw scores 2. the alternative hypothesis is that the mean difference score is not zero
example of dependent samples
outcome of the same patients at 3 weeks, 6 weeks, and 6 months after surgery
example of independent samples
patient outcome in clinic 1 vs clinic 3
advantage of dependent samples
reduces data variability (the SD of the difference scores is used instead of the SD of all the raw data)
what to do when the data are not normal?
1. run a nonparametric test 2. or transform the data to make them more normally distributed - try square root, square, etc
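A quick sketch of the transform-and-recheck idea on made-up right-skewed data, using a Shapiro-Wilk normality check from scipy.stats (the data and the choice of transform are illustrative only):

```python
import numpy as np
from scipy import stats

# made-up right-skewed data (e.g., reaction times in seconds)
raw = np.array([1.2, 1.5, 1.3, 2.0, 1.4, 3.8, 1.6, 5.2, 1.8, 2.2])

# Shapiro-Wilk test of normality before and after a square-root transform
_, p_raw = stats.shapiro(raw)
_, p_sqrt = stats.shapiro(np.sqrt(raw))
print(f"raw: Shapiro p = {p_raw:.3f}; sqrt-transformed: Shapiro p = {p_sqrt:.3f}")
# if no transform makes the data acceptably normal, fall back to a nonparametric test
```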
what to do if homogeneity of variance is violated?
run nonparametric tests
factorial design
N-way ANOVA (repeated measures or not)
independent t test
the alternative hypothesis is that the means of the two independent groups are not equal
what if ANOVA is significant?
determine which groups differ by running post hoc pairwise comparisons; if no specific comparisons of interest were prespecified, compare all pairs - Tukey's test (most common)
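One way this could look in code, using statsmodels' pairwise_tukeyhsd on made-up scores and group labels (illustrative only):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# made-up scores and group labels for three groups
scores = np.array([23, 25, 28, 30, 27, 31, 35, 33, 36, 32, 24, 26, 29, 25, 27])
groups = np.array(["A"] * 5 + ["B"] * 5 + ["C"] * 5)

# Tukey's HSD: all pairwise comparisons after a significant ANOVA
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())
```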
wilcoxon signed rank test
the statistic is run on the ranks of the difference scores
parametric vs nonparametric tests
there is not a large loss of power in using nonparametric tests compared to parametric tests even when the normality assumption holds
homogeneity of variance
this means that the population variances of the groups being tested are equal
mcnemar test
to compare the difference in # of subjects achieving a goal
chi-square test
to compare the difference in # of subjects achieving a goal (nominal)
mcnemar test
to compare the difference in # of subjects achieving a goal among multiple repeated testing conditions (nominal)
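A minimal sketch of a McNemar test on a made-up paired 2x2 table (goal achieved or not under two repeated conditions), using statsmodels:

```python
from statsmodels.stats.contingency_tables import mcnemar

# made-up paired counts:
# rows = achieved / not achieved at time 1, columns = achieved / not achieved at time 2
table = [[20,  5],
         [12, 13]]

# exact=True uses the binomial distribution on the discordant pairs
result = mcnemar(table, exact=True)
print(f"McNemar statistic = {result.statistic}, p = {result.pvalue:.4f}")
```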
sample size for nonparametric tests
when the sample is too small for asymptotic distributions to apply (cannot rely on the central limit theorem)
mann-whitney test
when assumptions of parametric tests are violated
kruskal-wallis test
when assumptions of parametric tests are violated; if significant, run Mann-Whitney as a post hoc test to determine which groups are different (need to apply Bonferroni adjustment)
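A sketch of that two-step procedure with scipy.stats, on made-up scores for three independent groups:

```python
from itertools import combinations
from scipy import stats

# made-up scores for three independent groups
groups = {
    "clinic_1": [10, 12, 11, 14, 13, 16],
    "clinic_2": [18, 20, 17, 21, 19, 22],
    "clinic_3": [11, 13, 12, 15, 14, 17],
}

# omnibus test: Kruskal-Wallis across all groups
h_stat, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p:.4f}")

# if significant, run Mann-Whitney tests for each pair of groups and
# compare each p-value to alpha / (number of comparisons) (Bonferroni)
pairs = list(combinations(groups, 2))
alpha_adj = 0.05 / len(pairs)
for a, b in pairs:
    _, p_pair = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: p = {p_pair:.4f} (significant if < {alpha_adj:.4f})")
```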
dependent samples
when values in one set are related (dependent) to their corresponding values in another set
independent samples
when values in one set have no relation (independent) to values in the other set