Errors and Statistical Significance

A common value used for alpha is

5% or 0.05. A smaller alpha value, such as 1% or 0.1%, sets a more stringent criterion for rejecting the null hypothesis.

β (Beta)

is the probability of committing a Type II error.

Results that are not statistically significant

may still be important

If p-value <= alpha =

reject the null hypothesis (i.e. significant result).

If p < 0.05 then the observed change/effect is

statistically significant (if you reject the null, you accept the alternative).

Results may be

statistically significant but be clinically unimportant

Data are significant when

the likelihood of a difference being due to chance is less than 5 times out of 100.

Size of the p-value

does not indicate the importance of the results

The probability of making a type I error is

represented by your alpha level (α), which is the p-value below which you reject the null hypothesis. A p-value of 0.05 indicates that you are willing to accept a 5% chance that you are wrong when you reject the null hypothesis.

However, using a lower value for alpha means

that you will be less likely to detect a true difference if one really exists (thus risking a type II error).
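To make this trade-off concrete, here is a minimal simulation sketch (the effect size, sample size, and number of trials are assumed values for illustration, not from the source): lowering alpha from 0.05 to 0.01 raises the Type II error rate when a true effect exists.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_effect, trials = 30, 0.5, 5000

misses = {0.05: 0, 0.01: 0}
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)   # a real difference exists
    p = stats.ttest_ind(control, treated).pvalue
    for alpha in misses:
        if p > alpha:                            # fail to reject a false null
            misses[alpha] += 1

for alpha, count in misses.items():
    print(f"alpha = {alpha}: Type II error rate ~ {count / trials:.2f}")
```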

Type II errors typically lead to

the preservation of the status quo (i.e. interventions remain the same) when change is needed.

A test statistic to assess "statistical significance" is

computed to measure the degree to which the data are compatible with the null hypothesis of no association.
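As a hedged illustration, a chi-square test of association is one such test statistic; the 2x2 counts below are hypothetical, not from the source.

```python
from scipy import stats

# Hypothetical 2x2 table: rows = exposed/unexposed, columns = cases/non-cases
table = [[30, 70],
         [20, 80]]

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square statistic = {chi2:.2f}, p = {p:.3f}")
```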

Example of type 2 error

Again, our null hypothesis is that there is "no wolf present." A type II error (or false negative) would be doing nothing (not "crying wolf") when there is actually a wolf present. That is, the actual situation was that there was a wolf present; however, the shepherd wrongly indicated there was no wolf present and continued to play Candy Crush on his iPhone. This is a type II error or false negative error.

Interpreting the p-value

To determine whether the difference observed between the sample groups (experimental, control) is due to chance, look at the p-value: it tells you whether or not a finding is statistically significant.

So... What does p <0.05 mean?

It is unlikely that the magnitude of the observed effect (e.g. an odds ratio) is due to chance alone. Essentially, p = 0.05 means that one test result out of twenty would be expected to occur by chance alone.
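A small simulation sketch of that one-in-twenty intuition (the group size, trial count, and seed are arbitrary assumptions): when the null hypothesis is true, roughly 5% of tests still come out "significant" at p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trials, n = 10_000, 25

false_positives = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)   # both groups drawn from the same
    b = rng.normal(0.0, 1.0, n)   # distribution, so the null is true
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(f"false-positive rate ~ {false_positives / trials:.3f} (expected ~ 0.05)")
```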

P-value

This is a quantity that we can use to interpret or quantify the result of the test and either reject or fail to reject the null hypothesis. This is done by comparing the p-value to a threshold value chosen beforehand, called the significance level.

Alpha =

a probability: the pre-chosen significance level, i.e. the threshold below which a result is considered statistically significant.

Example of type 1 error

Let's use a shepherd and wolf example. Let's say that our null hypothesis is that there is "no wolf present." A type I error (or false positive) would be "crying wolf" when there is no wolf present. That is, the actual condition was that there was no wolf present; however, the shepherd wrongly indicated there was a wolf present by calling "Wolf! Wolf!" This is a type I error or false positive error.

In other words, there is

a 95% (or greater) chance that any difference seen is due to the independent variable (IV) rather than to chance.

The p-value reflects

both the magnitude of the difference between the study groups AND the sample size.
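A quick sketch of this point (the means, standard deviation, and sample sizes are assumed for illustration): the same true difference yields a smaller p-value as the sample grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
for n in (10, 100, 1000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.3, 1.0, n)   # the same true difference each time
    print(f"n = {n:4d}: p = {stats.ttest_ind(a, b).pvalue:.4f}")
```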

The consequences of making a type I error mean that

changes or interventions are made which are unnecessary, and thus waste time, resources, etc.

The p-value is

compared to the pre-chosen alpha value. A result is statistically significant when the p-value is less than alpha. This signifies that an effect was detected: the null hypothesis can be rejected.

If p-value > alpha =

fail to reject the null hypothesis (i.e. not a significant result).
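Taken together with the rule above (p-value ≤ alpha → reject), this is the whole decision procedure. A minimal sketch in code (the function name and example p-values are placeholders):

```python
def interpret(p_value: float, alpha: float = 0.05) -> str:
    """Apply the decision rule: compare the p-value to the pre-chosen alpha."""
    if p_value <= alpha:
        return "reject the null hypothesis (significant result)"
    return "fail to reject the null hypothesis (not a significant result)"

print(interpret(0.03))  # reject the null hypothesis (significant result)
print(interpret(0.20))  # fail to reject the null hypothesis (not a significant result)
```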

Type II

fail to reject the null when the null is false. False negative.

A type II error

is also known as a false negative and occurs when a researcher fails to reject a null hypothesis which is really false. Here a researcher concludes there is not a significant effect, when actually there really is.

A type 1 error

is also known as a false positive and occurs when a researcher incorrectly rejects a true null hypothesis. This means that you report that your findings are significant when in fact they have occurred by chance.

The probability of making a type II error

is called Beta (β), and this is related to the power of the statistical test (power = 1- β). You can decrease your risk of committing a type II error by ensuring your test has enough power.

α (Alpha)

is the probability of committing a Type I error.

You can do this by ensuring your sample size is

large enough to detect a practical difference when one truly exists.
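A hedged example using statsmodels' power tools (the 0.5 effect size, alpha of 0.05, and 80% target power are assumed values for illustration): since power = 1 − β, you can solve for the power a given sample size provides, or for the sample size needed to reach a target power.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power achieved with 64 subjects per group at alpha = 0.05 (power = 1 - beta):
power = analysis.solve_power(effect_size=0.5, nobs1=64, alpha=0.05)
print(f"power = {power:.2f}, so beta = {1 - power:.2f}")

# Sample size per group needed to reach 80% power for the same effect size:
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group ~ {n:.0f}")
```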

P-value =

the probability that an effect at least as extreme as that observed could have occurred by chance alone, given that there is no true relationship between exposure and disease (H0, the null hypothesis). Under H0, the sample estimates of association differ only because of sampling variability. The p-value indicates how extreme the data are; we compare p to alpha to determine whether the observed data are statistically significant.
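A permutation-test sketch that implements this definition directly (the two small samples are made-up numbers): the p-value is the fraction of label-shuffled differences at least as extreme as the observed one.

```python
import numpy as np

rng = np.random.default_rng(2)
exposed   = np.array([4.1, 5.0, 5.6, 6.2, 4.8])   # hypothetical measurements
unexposed = np.array([3.2, 4.0, 3.8, 4.5, 3.9])

observed = exposed.mean() - unexposed.mean()
pooled = np.concatenate([exposed, unexposed])

reps, count = 10_000, 0
for _ in range(reps):
    rng.shuffle(pooled)                         # reassign labels at random (H0)
    diff = pooled[:5].mean() - pooled[5:].mean()
    if abs(diff) >= abs(observed):              # at least as extreme (two-sided)
        count += 1

print(f"permutation p-value ~ {count / reps:.3f}")
```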

Alpha Levels

The significance level is often referred to by the Greek lowercase letter alpha (α).

If p is greater than alpha (.05)

then we fail to reject the null, and the result is statistically nonsignificant

You can reduce your risk of committing a type I error by

using a lower value for alpha. For example, an alpha of 0.01 would mean there is a 1% chance of committing a Type I error.

Type I

you reject the null when the null is true. False positive.

