Statistical Power

What is power expressed as?

- A probability or percentage

Statistical power (of a research study)

- The probability that the study will produce a statistically significant result if the research hypothesis is true. When the research hypothesis is false, you do not want a significant result (getting one would be a Type I error). However, even when the research hypothesis is true, you may not get a significant result, because the sample you select may not turn out to be extreme enough to reject the null hypothesis.

How do you figure out statistical power?

- Use a power software package or an Internet power calculator: the researcher enters the values for the various aspects of the research study (such as the known population mean, the predicted population mean, the population standard deviation, the sample size, the significance level, and whether the test is one- or two-tailed). Alternatively, use power tables, because figuring power by hand is very complicated.
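
As an illustration of what those calculators do, here is a minimal sketch of the power calculation for a one-sample Z test (the case where the population standard deviation is known). The function name and the example numbers (known mean 200, predicted mean 208, SD 48, N = 100) are assumptions made up for illustration, not values from the text:

```python
from math import sqrt
from scipy.stats import norm

def power_one_sample_z(mu_known, mu_predicted, sigma, n, alpha=0.05, tails=1):
    """Power of a one-sample Z test when the population SD is known.

    For the one-tailed case, assumes the predicted mean is above the known mean."""
    se = sigma / sqrt(n)  # standard deviation of the distribution of means
    if tails == 1:
        cutoff = mu_known + norm.ppf(1 - alpha) * se
        return norm.sf(cutoff, loc=mu_predicted, scale=se)
    upper = mu_known + norm.ppf(1 - alpha / 2) * se
    lower = mu_known - norm.ppf(1 - alpha / 2) * se
    return norm.sf(upper, loc=mu_predicted, scale=se) + norm.cdf(lower, loc=mu_predicted, scale=se)

# hypothetical study: known mean 200, predicted mean 208, SD 48, 100 participants
print(power_one_sample_z(200, 208, 48, 100, alpha=0.05, tails=1))  # roughly 0.51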

What is a non-significant result a fairly strong argument for when there is high power?

- It is a fairly strong argument against the research hypothesis (although that does not mean every version of the research hypothesis is false), or it suggests there is less of an effect than was predicted when figuring power.

What are some of the 'less important' influences on power (other than effect and sample size)?

1. Significance level (alpha): Less extreme significance levels (p < .10 or p < .20) mean more power, because the shaded rejection area on the lower curve is bigger, so more of the area in the upper curve is shaded. More extreme significance levels (p < .01 or p < .001) mean less power. However, a less extreme significance level increases the chance of making a Type I error, and a more extreme significance level increases the chance of making a Type II error.
2. One-tailed versus two-tailed tests: Using a two-tailed test makes it harder to get significance on any one tail, so power is less with a two-tailed test.
3. Type of hypothesis-testing procedure: Sometimes the researcher has a choice of more than one hypothesis-testing procedure to use for a particular study.
(A quick numerical illustration of the first two influences follows below.)
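
A quick numerical check of the significance-level and one- versus two-tailed influences, using a simplified one-sample Z test power formula; the effect size of 0.4 and sample size of 50 are made-up values for illustration:

```python
from math import sqrt
from scipy.stats import norm

def power_z(d, n, alpha, tails):
    """Approximate power for a positive predicted effect size d (one-sample Z test)."""
    a = alpha / 2 if tails == 2 else alpha  # a two-tailed test splits alpha across both tails
    # chance the sample's Z score clears the significance cutoff (ignoring the tiny far tail)
    return norm.sf(norm.ppf(1 - a) - d * sqrt(n))

for alpha in (0.01, 0.05, 0.10):
    for tails in (1, 2):
        print(f"alpha={alpha}, {tails}-tailed: power = {power_z(0.4, 50, alpha, tails):.2f}")
# less extreme significance levels and one-tailed tests give more power
```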

What can you conclude about effect size if you have a very small sample size?

If the result is statistically significant, the effect size is likely large (with a small sample, only a fairly large effect will reach significance using standard procedures).

See Table 4 on page 22 in ACA for a summary of influences on power.

Effects of:
- Effect size
- Predicted difference between population means
- Population standard deviation
- Sample size
- Significance level
- One-tailed versus two-tailed test
- Type of hypothesis-testing procedure used

What is an acceptable level of power for going ahead with a study?

- 80% (an 80% chance that the study will produce a statistically significant result if the research hypothesis is true). More power is better, but the costs of greater power (for example, studying more people) often put even 80% beyond reach.

What can we assume if the sample of a study was small?

- That a statistically significant result is also practically important (with a small sample size, the only way to get a statistically significant result using standard procedures is if the effect size is fairly large). But if the sample size is very large, you must consider the effect size directly.

Low power study

- A study that has only a small chance of being significant even if the research hypothesis is true

Why is determining the power very important when planning a study?

- Because if the power is low, then even if the research hypothesis is true, the study will probably not give statistically significant results. Thus, the time and expense of carrying out the study would probably not be worthwhile.

How does a researcher figure out the number of participants needed to get a result with power?

- Begin with the level of power wanted and figure out how many participants are needed to reach that level of power. In practice, researchers use power software packages, Internet power calculators, or special power tables that tell them how many participants they need in a study to have a high level of power, given a certain predicted effect size.
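
A minimal sketch of that calculation for a one-sample Z test, using the standard closed-form solution; the function name and the d = 0.5 example are assumptions for illustration:

```python
from math import ceil
from scipy.stats import norm

def n_needed_one_sample_z(effect_size, power=0.80, alpha=0.05, tails=2):
    """Smallest sample size that reaches the target power (one-sample Z test)."""
    a = alpha / 2 if tails == 2 else alpha
    z_alpha = norm.ppf(1 - a)   # how far out the significance cutoff sits
    z_power = norm.ppf(power)   # how far past the cutoff the predicted mean must sit
    return ceil(((z_alpha + z_power) / effect_size) ** 2)

# medium predicted effect size (d = 0.5), 80% power, p < .05 two-tailed
print(n_needed_one_sample_z(0.5))  # about 32 participants
```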

What is the difference between statistical significance and practical/clinical significance?

- Statistical significance tells you that you can be confident there is some real (non-zero) effect; clinical significance refers to a result that is big enough to make a difference that matters in treating people.

What is rarely mentioned in research articles?

- Decision errors

How do you figure power in advance of doing a study?

- Find the difference between the means of the two populations: the difference between the known population mean (Population 2) and the researcher's prediction of the mean for the population given the experimental manipulation (Population 1). The population standard deviation is known in advance. The prediction is based on a precise theory, on previous experience with research of this kind, or on the smallest difference that would be useful.

Why is statistical power important?

- It helps you decide how many participants you need.
- It is important when reading a research article, for making sense of results that are not significant or that are statistically significant but not practically significant.

How do studies which fail to support the research hypothesis differ?

- If the study has low power, it is inconclusive. If it has high power, then either the research hypothesis is false or there is less of an effect than was predicted when figuring power.

When is effect size most commonly reported?

- In meta-analyses -> when results from different articles are being combined and compared

What does significance tell us?

- It means that you can be pretty confident that there is some real effect, but it does not tell you much about whether that real effect is significant in a practical sense.

What are the two ways that significance tests are seriously misused?

- Nonsignificant results are interpreted as showing there is in fact no effect, when nonsignificant results could be due either to little or no true effect or to the low power of the study.
- Significant results (which only show the strength of the evidence for a non-zero effect) are interpreted as "important" results (significance is confused with effect size).

What is not true about p levels?

- That the more extreme the p level, the larger the effect size (although a more extreme p level is stronger evidence for a non-zero effect). A small p level could be due to a large effect size OR to a large sample size.

What can you conclude about effect size if you have a very large sample size?

- Even if the result is statistically significant, the effect size could be large or small (a large sample can produce a significant result even for a very small effect).

How do you calculate the predicted mean of Population 1 (the population to which the experimental procedure will be applied)?

- The known population mean plus the result of multiplying the predicted effect size by the known population standard deviation
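
In symbols (a sketch; the 200, 48, and 0.5 are hypothetical numbers, not from the text):

$$ \mu_{1,\text{predicted}} = \mu_2 + d \cdot \sigma \qquad \text{e.g. } 200 + (0.5)(48) = 224 $$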

When do you mainly think about power?

- When planning research: power is a major topic in grant proposals requesting funding and in thesis proposals, and an estimate of the power of a planned study is usually required in an application to a university's human subjects review board for permission to conduct the study.
- In the final section of a research article, when the author discusses the meaning of the results or the results of other studies: the emphasis tends to be on the meaning of nonsignificant results, usually explained in some detail because psychologists have been slow to become knowledgeable about power.

How do you increase the power of a planned study?

1. Increase the effect size by increasing the predicted difference between population means: use a more intense experimental procedure, or make the instructions more elaborate, spend more time explaining them, perhaps allow time for practice, and so forth. However, this may lead you to use an experimental procedure that is not like the one to which you want the results of your study to apply (not practical, or it may distort the study's meaning). It can also be difficult or costly.
2. Increase the effect size by decreasing the population standard deviation: study a population that has less variance/diversity within it than the one originally planned (but such a population may not be available, and then your results will only apply to that more limited population, which decreases generalizability), or use conditions of testing that are more standardized and measures that are more precise (not always practical), for example testing in a controlled laboratory setting or using measures and tests with very clear wording.
3. Increase the sample size (the main way to raise power in a research study). But sometimes there is a limit to how many participants are available (for example, studying billionaires), and a larger sample size adds to the time and cost of conducting the study.
4. Use a less extreme significance level (p < .10 or p < .20), the least extreme that reasonably protects against Type I errors (normally p < .05 in psychology). This increases alpha (the probability of a Type I error), and since most psychology research journals consider p < .05 the standard cutoff for statistical significance, you might not get published.
5. Use a one-tailed test. Whether the test is one- or two-tailed depends on the logic of the hypothesis being studied (and in many areas of psychology one-tailed tests are frowned upon under any circumstance). If the result comes out opposite to the prediction, it would have to be considered nonsignificant. Researchers have little choice about this.
6. Use a more sensitive hypothesis-testing procedure. This is fine if alternatives are available, but usually the researcher begins with the most sensitive method available.
(The sketch below illustrates how the first three levers change power.)
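
A rough sketch of how the first three levers show up numerically, using a simplified one-sample Z test power formula; the specific effect sizes and sample sizes are made up for illustration:

```python
from math import sqrt
from scipy.stats import norm

def power_z(d, n, alpha=0.05, tails=2):
    """Approximate power for a positive predicted effect size d (one-sample Z test)."""
    a = alpha / 2 if tails == 2 else alpha
    return norm.sf(norm.ppf(1 - a) - d * sqrt(n))  # chance of clearing the cutoff

print(power_z(d=0.2, n=50))    # small effect size, N = 50                  -> about 0.29
print(power_z(d=0.5, n=50))    # bigger mean difference or smaller SD       -> about 0.94
print(power_z(d=0.2, n=200))   # same small effect, four times the N        -> about 0.81
```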

What are two questions to ask when judging a study's results?

1. Is the result statistically significant? If yes, you can be confident there is a real effect.
2. Is the effect size large enough for the result to be useful or interesting? If yes, the result is useful or interesting.
However, when testing purely theoretical issues, it may sometimes be enough just to be confident that there is an effect at all in a particular direction.

What is "power" the opposite of?

Beta, the probability of a Type II error (failing to get a significant result even though the research hypothesis is true). Beta + power = 100%, so power = 100% - beta.

What does the statistical power of a study depend on?

Main factors:
(1) How big an effect (the effect size) the research hypothesis predicts, that is, the difference between the means of Population 1 and Population 2 (the research and null populations). Effect size is influenced by the population standard deviation (a smaller SD means a greater effect size, because the two distributions overlap less) and by the difference between the means (a larger difference means a greater effect size, again because the two distributions overlap less).
(2) How many participants are in the study (the sample size). A larger sample size means more power, because the standard deviation of the distribution of means becomes smaller. In most cases, sample size matters even more than effect size (and is a factor over which the researcher usually has more control). Sample size has nothing to do with effect size, but both effect size and sample size affect power.
Power is ALSO affected by the significance level chosen, whether the test is one- or two-tailed, and the kind of hypothesis-testing procedure used. (The formulas below make the two main factors concrete.)
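
In symbols, for the one-sample case with a known population SD (a sketch of the two main factors):

$$ d = \frac{\mu_1 - \mu_2}{\sigma}, \qquad \sigma_M = \frac{\sigma}{\sqrt{N}}, \qquad Z = \frac{M - \mu_2}{\sigma_M} $$

A bigger mean difference or a smaller sigma makes d larger (less overlap between the two distributions); a larger N shrinks sigma_M, so the same raw difference yields a larger Z and the result is more likely to clear the significance cutoff.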

What does effect size mean in studies?

One population is a certain number of standard deviations (for example, 0.5 SD) higher or lower than the other on whatever is being measured.

Should psychologists report significance or effect size?

Proponents of effect size:
- Effect size provides information that can be compared across studies and used in accumulating information over independent studies as research in a field progresses; effect sizes are crucial in meta-analysis.
- Significance tests often miss the point: in psychology we rarely care only about whether a result is non-zero.
Proponents of significance testing, with effect size:
- When a sample size is small, it is still possible for a study to come out with a large effect size just by chance (with few scores, sampling error alone can produce a big apparent difference).
- Also, when an effect size is small, you need to know whether the result can be trusted, that is, whether it would be very unlikely if the null hypothesis were true.
Proponents of significance testing, without effect size:
- For applied research, reporting effect size is a good idea, because you want to know the actual amount of effect a particular program has or how big the actual difference between two groups is.
- For theoretical research, effect size can be irrelevant and misleading: what matters is (a) whether the prediction of a difference between populations was based on theory, (b) whether the results were consistent with what was predicted (as shown by statistical significance), and (c) whether the theory was therefore supported. However, if many studies are done using similar procedures but with meaningfully different aspects, it may be valuable to compare effect sizes.
Prevailing view among statistics experts in psychology:
- Even in theoretically oriented research, the potential cost of including effect size is offset by its usefulness to future researchers in figuring power when planning their own studies and to future meta-analysts.

Power table

Table for a hypothesis testing procedure showing the statistical power of a study for various effect sizes and sample sizes

Alpha

The probability of getting a significant result if the research hypothesis is false (the probability of a Type I error)

When a result is not significant, what can you conclude about the truth of the research hypothesis if the study had a very large sample size?

The research hypothesis is probably not true (or has a much smaller effect size than predicted)

What is usually not a good idea?

To compare the significance levels of two studies to see which has the more important result, because a study with a small number of participants that is significant at the .05 level is likely to have a large effect size, while a study with a large number of participants that is significant at the .001 level might have a small effect size (see the sketch below).
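
A rough numerical sketch of why this comparison misleads, using the effect size implied by a just-significant one-sample Z test; the sample sizes of 10 and 1000 are hypothetical:

```python
from math import sqrt
from scipy.stats import norm

def implied_effect_size(p_two_tailed, n):
    """Effect size d implied by a just-significant two-tailed p value (one-sample Z test)."""
    z = norm.ppf(1 - p_two_tailed / 2)
    return z / sqrt(n)

print(implied_effect_size(0.05, 10))      # ~0.62: small study, large effect
print(implied_effect_size(0.001, 1000))   # ~0.10: huge study, small effect
```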

When a result is not significant, what can you conclude about the truth of the research hypothesis if the study had a very small sample size?

- You can conclude very little about the truth of the research hypothesis (the study is inconclusive)

