Research8

Statistical significance equation:

Statistical significance = Effect size x Sample size. Increasing N increases statistical significance. Because the p-value is influenced by sample size, it is not a good indicator of effect size: a significant p-value tells us a relationship exists, but not how strong it is. Effect size is an index of the strength of the relationship that is NOT influenced by sample size.
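A minimal Python sketch (all numbers assumed for illustration): holding the effect size fixed at r = .30, the p-value shrinks as N grows, even though the strength of the relationship never changes.

```python
import numpy as np
from scipy import stats

r = 0.30  # assumed effect size (correlation between IV and DV)
for n in (20, 50, 100, 500):
    # t statistic for testing r against 0, and the two-sided p-value
    t = r * np.sqrt((n - 2) / (1 - r**2))
    p = 2 * stats.t.sf(t, df=n - 2)
    print(f"N = {n:4d}  r = {r:.2f}  p = {p:.4f}")
```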

The Trade-off between type 1 and type 2 errors:

When alpha is set lower, beta will always be higher. Although setting a low alpha protects against Type 1 errors, it may lead us to miss weak relationships. Increasing the sample size increases our ability to detect small relationships.
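A sketch of the trade-off using statsmodels (the effect size and sample sizes are assumed): for a fixed effect and N, lowering alpha from .05 to .01 lowers power (raises beta); raising N buys the power back.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.01):
    for n_per_group in (50, 200):
        power = analysis.solve_power(effect_size=0.3, nobs1=n_per_group,
                                     alpha=alpha, alternative='two-sided')
        print(f"alpha = {alpha:.2f}  N per group = {n_per_group:3d}  "
              f"power = {power:.2f}  beta = {1 - power:.2f}")
```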

Two-sided p-values

p-values that consider the likelihood that a relationship could occur in either the expected or the unexpected direction. A two-sided p-value is always twice as large as the corresponding one-sided p-value.
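A quick sketch with simulated data: for the same sample, the two-sided p-value is double the one-sided p-value (when the observed effect is in the expected direction).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.4, scale=1.0, size=30)  # hypothetical sample

one_sided = stats.ttest_1samp(sample, popmean=0, alternative='greater').pvalue
two_sided = stats.ttest_1samp(sample, popmean=0, alternative='two-sided').pvalue
print(one_sided, two_sided, two_sided / one_sided)  # ratio is 2.0
```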

How can the power of a statistical test be increased?

...

What are the implications of using a smaller, rather than larger, alpha in a research design?

...

What is the likelihood of a type 1 error if null is true? if it is false?

...

Effect size

A statistic that indexes the size of a relationship (e.g., beta). An effect size of 0 indicates no relationship between the variables; larger values indicate stronger relationships. Because the researcher does not know the effect size ahead of time, they can only make an educated guess, usually using Cohen's conventions for the correlation coefficient: .1 = small, .3 = medium, .5 = large. Effect size reflects the strength of the relationship between the IV and the DV, independent of N.
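A minimal sketch with made-up data (a hypothetical "hours studied" IV and "exam score" DV): the effect size here is the correlation r, which can then be judged against the small/medium/large conventions above.

```python
import numpy as np
from scipy import stats

hours  = np.array([1, 2, 2, 3, 4, 5, 5, 6, 7, 8])             # hypothetical IV
scores = np.array([55, 60, 58, 63, 65, 70, 68, 72, 75, 80])   # hypothetical DV

r, p = stats.pearsonr(hours, scores)
print(f"effect size r = {r:.2f}, p-value = {p:.4f}")
```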

Hypothesis Flow-Chart:

Develop the research hypothesis. Set alpha (.05). Calculate power to determine the sample size that is needed. Collect the data. Calculate the statistic and the p-value. Compare the p-value to alpha (.05): if p < .05, reject the null; if p > .05, do not reject the null. If the results are significant, then an examination of the direction of the observed relationship will indicate whether or not the research hypothesis was supported.
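A sketch of the whole flow under assumed numbers (expected effect size d = .5, alpha = .05, desired power = .80); the "data" here are simulated stand-ins for a real experiment.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

alpha = 0.05
# power analysis: participants needed per group for d = .5 at power = .80
n = int(np.ceil(TTestIndPower().solve_power(effect_size=0.5,
                                            alpha=alpha, power=0.80)))

rng = np.random.default_rng(1)                # "collect data" (simulated here)
treatment = rng.normal(0.5, 1.0, n)
control = rng.normal(0.0, 1.0, n)

t, p = stats.ttest_ind(treatment, control)    # calculate statistic and p-value
if p < alpha:                                 # compare p-value to alpha
    direction = "supported" if t > 0 else "opposite to the prediction"
    print(f"significant (p = {p:.3f}); research hypothesis {direction}")
else:
    print(f"not significant (p = {p:.3f}); do not reject the null")
```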

Type 2 errors:

Failure to reject the null hypothesis when the null hypothesis is really false. Type 2 errors occur with probability equal to beta.

Inferential statistics

Numbers, such as a p-value, that are used to specify the characteristics of a population on the basis of the data in a sample.

Type 1 errors:

Rejection of the null hypothesis when it is really true. Type 1 errors occur with probability equal to alpha (alpha = .05): if the null hypothesis is true, we will make a Type 1 error no more than five times out of one hundred.
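A simulation sketch of this rate (assumed normal populations): when the null hypothesis is really true, roughly 5% of tests will still come out significant at alpha = .05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000
rejections = 0
for _ in range(n_experiments):
    a = rng.normal(0, 1, 30)   # both groups come from the same population,
    b = rng.normal(0, 1, 30)   # so any significant result is a Type 1 error
    if stats.ttest_ind(a, b).pvalue < 0.05:
        rejections += 1
print(rejections / n_experiments)  # close to .05
```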

Proportion of explained variability

The proportion of the variability in the dependent variable that is accounted for by the independent variable, calculated by squaring the effect size.
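For example, an effect size of r = .50 gives r² = .25, meaning the IV accounts for 25% of the variability in the DV.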

Null Hypothesis

The assumption that the observed data reflect only what would be expected from the sampling distribution. In a correlational design, the null is r = 0; in an experimental design, the null is that the mean score on the DV is the same in all of the experimental groups (levels of the IV). It is not possible to directly test whether the research hypothesis is correct, because we cannot specify ahead of time what the observed data would look like if it were true. BUT we can specify what the observed data would look like if the null hypothesis were true.

statistically nonsignificant

The conclusion to not reject the null hypothesis, made when the p-value is larger than alpha (p > .05).

statistically significant

The conclusion to reject the null hypothesis, made when the p-value is smaller than alpha (p < .05).

Sampling distribution

The distribution of all the possible values of a statistic. E.g. - binomial distribution, correlation coefficient distribution, standard deviation distribution.

Beta

The probability of making a type 2 error.

Power

The probability that the researcher will, on the basis of the observed data, be able to reject the null hypothesis given that the null hypothesis is actually false and thus should be rejected. Power is equal to 1 - beta.

Binomial distribution

The sampling distribution of events that have two equally likely outcomes. As sample size gets bigger, distributions become narrower (indicating extreme values of the statistic are less likely to be observed).
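A sketch of that narrowing (a fair coin is assumed): as N grows, the sampling distribution of the observed proportion tightens around .50, so extreme proportions become less likely.

```python
from scipy import stats

p = 0.5  # two equally likely outcomes
for n in (10, 100, 1000):
    sd_of_proportion = stats.binom(n, p).std() / n
    print(f"N = {n:4d}  SD of the observed proportion = {sd_of_proportion:.3f}")
```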

Alpha or significance level

Alpha is the standard that the observed data must meet; by convention it is set at .05, meaning that data this extreme could occur by chance only 5% of the time if the null hypothesis were true. Alpha is also the probability of making a Type 1 error.

P-value or probability value

The statistical likelihood of an observed pattern of data, calculated on the basis of the sampling distribution of the statistic. The p-value is compared to alpha: if p < .05, we reject the null (significant); if p > .05, we do not reject the null (not significant).
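A one-line sketch of reading a p-value off a sampling distribution (an assumed example): the probability of 16 or more heads in 20 flips of a fair coin, compared to alpha = .05.

```python
from scipy import stats

result = stats.binomtest(k=16, n=20, p=0.5, alternative='greater')
print(result.pvalue)  # if this p-value is below .05, reject the null of a fair coin
```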

Sample Size compromise:

To reduce the large number of participants that would otherwise be needed, we accept a Type 2 error rate of beta = .20. This corresponds to power = .80, that is, an 80% chance of rejecting a false null hypothesis, and it decreases the number of participants (N) needed.
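A sketch of the compromise (an effect size of d = .5 is assumed): dropping the desired power from .95 to .80 noticeably reduces the required N per group.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for power in (0.95, 0.80):
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=power)
    print(f"power = {power:.2f} (beta = {1 - power:.2f})  N per group = {n:.0f}")
```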

Reduction of Inferential Errors

If the null hypothesis is true: a Type 1 error occurs with probability alpha; a correct decision occurs with probability 1 - alpha. If the null hypothesis is false: a Type 2 error occurs with probability beta; a correct decision (power) occurs with probability 1 - beta.

