Statistical Power
power is the opposite of ________
beta (Type II error; false negative). beta is the probability that you will NOT get a statistically significant result when the research hypothesis is true, so power is the exact opposite of this
-thus, beta + power = 1 (i.e., 100%)
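a minimal sketch of the complement relationship, assuming Python with scipy and a made-up one-tailed z-test (the research-hypothesis mean of 2.5 is hypothetical): beta is the area of the research-hypothesis distribution below the significance cutoff, power is the area above it, and the two always sum to 1
    from scipy.stats import norm

    alpha = 0.05                                 # one-tailed significance level
    cutoff = norm.ppf(1 - alpha)                 # z cutoff under the null (about 1.645)

    research_mean = 2.5                          # hypothetical research-hypothesis distribution of means
    beta = norm.cdf(cutoff, loc=research_mean)   # P(miss a true effect)
    power = 1 - beta                             # P(detect a true effect)
    print(beta, power, beta + power)             # beta + power = 1.0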
factors affecting power
effect size: a larger expected difference between populations leads to larger power
-effect size is affected by the difference between the means and the variance of the populations
sample size: a larger sample size also increases power
-decreases the SD of the distribution of means (DOM), so the null and research distributions overlap less, which boosts power the same way a bigger effect size would (see the sketch after this list)
-often matters more in practice than effect size, and the researcher has more control over sample size than over effect size
also affected by the significance level chosen (a more lenient p level means more power), whether a 1- or 2-tailed test is used (2-tailed has less power because of stricter cutoffs), and the kind of hypothesis-testing procedure used
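a hedged sketch (Python/scipy, hypothetical population values) of why a larger N raises power: the SD of the distribution of means is sigma/sqrt(N), so the distributions overlap less as N grows
    import numpy as np
    from scipy.stats import norm

    mu0, mu1, sigma, alpha = 100, 105, 15, 0.05          # hypothetical populations
    for n in (10, 25, 50, 100):
        se = sigma / np.sqrt(n)                          # SD of the distribution of means
        cutoff = norm.ppf(1 - alpha, loc=mu0, scale=se)  # one-tailed cutoff under the null
        power = norm.sf(cutoff, loc=mu1, scale=se)       # area of research distribution above cutoff
        print(f"N={n:3d}  SE={se:5.2f}  power={power:.2f}")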
football concussions study
found that these players were healthier on average but at significantly higher risk of neurodegenerative diseases
non-linemen were at higher risk than linemen because they suffered hits at much higher impact velocities, increasing concussion risk
overall player mortality was actually reduced (healthy guys), but neurodegenerative disease mortality was increased
significant relationship, but not causal
most impressive result of a study
having a very large effect size
-having very high power shouldn't be impressive; it should be the norm
-having a very small p value (and thus a very small likelihood of Type I error / false positive) is nice, but could be found with just a large enough sample size
interpreting insignificant results based on power level
high power: an insignificant result means the research hypothesis is likely not true
low power: an insignificant result is really just inconclusive (very possible there is an effect but it just wasn't found)
Smartphone use and social interactions
multi-tasking by using a phone is a major source of distraction, leaving people unable to concentrate fully on their primary activity
may decrease boredom, make time pass more quickly, and give a greater sense of control, but there is a clear negative relationship between the presence of phones and the quality of social interactions
statistical power
probability that the study will give a significant result if the research hypothesis is true; the probability of finding an effect when it actually exists
power is the opposite of beta (which is the probability of NOT getting a significant result when the research hypothesis is true)
-thus, power + beta = 1 (i.e., 100%)
low power means a study has a small chance of being significant even when the research hypothesis is true
-generally, researchers want at least 80% power to declare a study worth conducting and worth the cost (see the sample-size sketch below)
use a power table to determine the power of a potential study based on various effect sizes and sample sizes
power depends mainly on the effect size predicted by the research hypothesis and the sample size used
-such that a larger expected difference between populations (larger effect size) gives larger power, and a larger sample size gives larger power
-also affected by the significance level chosen, whether a 1- or 2-tailed test is used, and the kind of hypothesis-testing procedure used
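a sketch of the standard closed-form sample-size calculation for a one-tailed one-sample z-test (the effect size of 0.5 is an assumed, illustrative value): N = ((z_alpha + z_beta) / d)^2
    import math
    from scipy.stats import norm

    alpha, target_power = 0.05, 0.80
    d = 0.5                              # predicted effect size (hypothetical)
    z_alpha = norm.ppf(1 - alpha)        # about 1.645 (one-tailed)
    z_beta = norm.ppf(target_power)      # about 0.842
    n = math.ceil(((z_alpha + z_beta) / d) ** 2)
    print(n)                             # about 25 participants for 80% power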
Statistical vs. Clinical Significance
a result can be statistically significant in that it was likely not due to chance, but at the same time not be clinically significant if the effect size is so small that it has no practical consequences when applied to real life
statistical significance asks whether there is a real effect, while clinical significance asks whether there is a useful effect
*small samples that are statistically significant are usually also clinically significant because they need a large effect to reach statistical significance
*large samples that are statistically significant: must check effect size
*even the most minuscule effect can achieve statistical significance if a large enough sample size is used (see the sketch below)
don't confuse the p value with the chance that a result is clinically significant
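a sketch (hypothetical numbers) of why statistical and clinical significance diverge: with a big enough N, even a minuscule effect size produces a tiny p value
    import math
    from scipy.stats import norm

    d = 0.01                        # minuscule effect size: no practical consequence
    n = 1_000_000                   # enormous sample
    z = d * math.sqrt(n)            # one-sample z statistic = d * sqrt(N) = 10
    p = norm.sf(z)                  # one-tailed p value
    print(f"z={z:.1f}, p={p:.2e}")  # statistically significant, clinically trivial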
minor factors affecting power
significance (p) level: a more lenient (larger) p level gives higher power because a significant result is more likely, but a Type I error (false positive) is also more likely
2-tailed tests give less power than 1-tailed tests because they have stricter cutoff z-scores, so a significant result is less likely
the kind of hypothesis-testing procedure used
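a sketch comparing one- and two-tailed power for the same hypothetical study (the research-hypothesis mean of 2.0 is assumed): the two-tailed cutoff is stricter (about 1.96 vs 1.645 at p = .05), so power is lower
    from scipy.stats import norm

    alpha, research_mean = 0.05, 2.0   # hypothetical mean z under the research hypothesis

    cut1 = norm.ppf(1 - alpha)         # one-tailed cutoff, about 1.645
    power1 = norm.sf(cut1, loc=research_mean)

    cut2 = norm.ppf(1 - alpha / 2)     # two-tailed cutoff, about 1.96
    # two-tailed: significant in either tail (the lower tail is negligible here)
    power2 = norm.sf(cut2, loc=research_mean) + norm.cdf(-cut2, loc=research_mean)

    print(f"one-tailed power={power1:.2f}, two-tailed power={power2:.2f}")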
power table
table for a hypothesis-testing procedure showing the statistical power of a study for various effect sizes and sample sizes
power depends on the effect size predicted by the research hypothesis and the sample size used
-such that a larger expected difference between populations (larger effect size) gives larger power, and a larger sample size gives larger power
-also affected by the significance level chosen (a more lenient p level means more power), whether a 1- or 2-tailed test is used (2-tailed has less power because of stricter cutoffs), and the kind of hypothesis-testing procedure used
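a sketch that generates a miniature power table for a one-tailed one-sample z-test at p = .05 (the effect sizes follow Cohen's small/medium/large conventions; the sample sizes are just illustrative)
    import numpy as np
    from scipy.stats import norm

    alpha = 0.05
    effect_sizes = (0.2, 0.5, 0.8)    # small, medium, large
    sample_sizes = (10, 20, 40, 100)

    print("d \\ N", *[f"{n:>6}" for n in sample_sizes])
    for d in effect_sizes:
        # power = P(z statistic exceeds the cutoff) = sf(z_alpha - d * sqrt(N))
        row = [norm.sf(norm.ppf(1 - alpha) - d * np.sqrt(n)) for n in sample_sizes]
        print(f"{d:5.1f}", *[f"{p:6.2f}" for p in row])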
graphical representation of power
take the z-score cutoff from the general population (null) distribution and apply it to the distribution of means for the experimental population
-the percentage of that curve above the cutoff is power
power is the part of the curve above the cutoff score on the research-hypothesis distribution
-if it were the null-hypothesis distribution, that area would be alpha
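the same picture in code, a sketch with hypothetical distribution-of-means values: take the cutoff from the null curve, then measure the area above it on each curve; on the null curve that area is alpha, on the research curve it is power
    from scipy.stats import norm

    mu0, mu1, se, alpha = 0.0, 2.8, 1.0, 0.05        # hypothetical means and SD of the DOM
    cutoff = norm.ppf(1 - alpha, loc=mu0, scale=se)  # one-tailed cutoff taken from the NULL curve

    area_null = norm.sf(cutoff, loc=mu0, scale=se)       # area above cutoff on null curve = alpha
    area_research = norm.sf(cutoff, loc=mu1, scale=se)   # area above cutoff on research curve = power
    print(f"alpha={area_null:.2f}, power={area_research:.2f}")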