Research Methods

Probability (P) Value

A p-value is the probability of obtaining data at least as extreme as those observed, computed under the assumption that the null hypothesis (H0) is true. By convention, if p < .05 the results are statistically significant; if not, the results are statistically nonsignificant.
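A minimal sketch of how a p-value is obtained and compared to alpha in practice, using SciPy's independent-samples t-test on simulated data (the group means and sizes are illustrative assumptions, not from any real study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=100, scale=15, size=30)  # hypothetical control group
group_b = rng.normal(loc=108, scale=15, size=30)  # hypothetical treatment group

# p-value: probability of data at least this extreme if H0 (equal means) is true
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

alpha = 0.05  # the conventional significance level
print("statistically significant" if p_value < alpha else "statistically nonsignificant")
```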

Alpha

Also known as the significance level. Normally set to .05, which means that we may reject the null hypothesis only if the observed data are so unusual that they would have occurred by chance at most 5 percent of the time. The smaller the alpha, the more stringent the standard.

Type II Error

Failing to reject the null hypothesis when it is really false. A false negative. Ex. A woman is pregnant, but the pregnancy test says she's not. Power = 1 − β (where β = probability of a Type II error); when β is low, power is high. How do we lower β? Narrow our sampling distribution. How do we do that? Increase our N. How to avoid:
1. Get more subjects (increase your N).
2. Design your measures well (i.e., make them sensitive).
Note the trade-off with alpha: when alpha is set lower, beta will be higher, because a small alpha makes it more difficult to find data strong enough to reject the null hypothesis, and makes it more likely that weak relationships will be missed. (So, unlike Type I error, Type II error is not controlled by shrinking alpha; shrinking alpha makes it worse.) See the sketch below for how power rises, and beta falls, as N grows.
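A sketch using statsmodels' power calculator, assuming a fixed medium effect size (d = 0.5), of how increasing N raises power and therefore lowers β:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (20, 50, 100, 200):
    # power of a two-sample t-test with n subjects per group at alpha = .05
    power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n per group = {n:3d}   power = {power:.2f}   beta = {1 - power:.2f}")
```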

Type I Error

A false positive: falsely rejecting the null hypothesis when it is really true. Ex. A woman is not pregnant, but the pregnancy test says she is. The probability of a Type I error = alpha (0.05). How to avoid:
1. Set a small alpha.
2. Don't run lots of little tests when one big one will do: the more tests you run, the greater the probability that one of them comes out significant by chance (see the sketch below).
3. Use planned comparisons (run specific t-tests if that's what you're interested in).
A large sample size also helps, since there is less chance of any individual observation being an outlier. Usually we're more worried about Type I errors; that is why we set alpha so low.
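A short worked example of point 2: with alpha = .05 per test, the chance of at least one false positive across k independent tests is 1 − (1 − alpha)^k. The Bonferroni correction (alpha/k per test) shown here is one standard fix, though the card itself doesn't name it:

```python
alpha = 0.05
for k in (1, 5, 10, 20):
    familywise = 1 - (1 - alpha) ** k  # P(at least one Type I error in k tests)
    print(f"{k:2d} tests: P(>=1 false positive) = {familywise:.2f}, "
          f"Bonferroni-corrected alpha = {alpha / k:.4f}")
```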

Proportion of Explained Variability

The proportion of variability in the dependent variable that is accounted for by the independent variable; it is indicated by the square of the effect-size statistic (e.g., r² for a correlation).
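A one-line worked example of the squaring (the benchmarks are Cohen's conventions for r, from the Effect Size card below):

```python
for r in (0.10, 0.30, 0.50):  # small, medium, large effect sizes
    print(f"r = {r:.2f} -> r^2 = {r**2:.2f} ({r**2:.0%} of variance explained)")
```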

One vs. Two Tailed Tests

One-tailed: when you predict a single, directional outcome, all of alpha (.05) sits in one tail. E.g., boys will ascend stairs more rapidly than girls.
Two-tailed: when you test for both possible outcomes, alpha is split (.025 in each tail). E.g., boys and girls will differ in their rates of ascent.
(A sketch contrasting the two appears below.)
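A sketch contrasting the two tests on the same simulated data; SciPy's `alternative` argument (available in SciPy 1.6+) selects the tails, and the stair-climbing speeds are made-up numbers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
boys = rng.normal(loc=1.2, scale=0.3, size=25)   # hypothetical stairs/second
girls = rng.normal(loc=1.0, scale=0.3, size=25)

_, p_two = stats.ttest_ind(boys, girls, alternative="two-sided")  # .025 per tail
_, p_one = stats.ttest_ind(boys, girls, alternative="greater")    # all .05 in one tail
print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
```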

Statistical Significance

Statistical significance = effect size × sample size. This makes it clear that:
1. Increasing the sample size (N) will increase the statistical significance of the relationship whenever the effect size is greater than 0.
2. Because the p-value is influenced by the sample size, the p-value is not itself a good indicator of the size of a relationship.
3. The effect size is an index of the strength of a relationship that is not influenced by sample size.
(Points 2 and 3 are demonstrated in the sketch below.)
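A simulation sketch of points 2 and 3: holding the true effect size constant (here about d = 0.3, an assumption of the simulation), the p-value tends to shrink as N grows while the sample effect size does not:

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    # effect size: mean difference in pooled-standard-deviation units
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / pooled_sd

rng = np.random.default_rng(2)
for n in (20, 80, 320):
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(0.3, 1.0, size=n)  # true effect held at ~0.3 throughout
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:3d}   d = {cohens_d(a, b):.2f}   p = {p:.4f}")
```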

The Null Hypothesis

The assumption that the observed data reflect only what would be expected under the sampling distribution (H0)

Sampling Distribution

The distribution of all the possible values of a statistic.
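A simulation sketch of the idea: draw many samples of the same size from a population, compute the mean of each, and the collection of those means is the sampling distribution of the mean (the population values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
# 10,000 samples of n = 25 from a population with mean 50, SD 10
sample_means = [rng.normal(loc=50, scale=10, size=25).mean() for _ in range(10_000)]

print(f"mean of the sampling distribution = {np.mean(sample_means):.2f}")
print(f"SD of the sampling distribution (standard error) = {np.std(sample_means):.2f}")
# theory predicts standard error = sigma / sqrt(n) = 10 / 5 = 2
```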

Power

The probability that the researcher will, on the basis of the observed data, be able to reject the null hypothesis given that the null hypothesis is actually false and thus should be rejected. Conventionally we want power of at least 0.80. A power analysis is often used to determine the sample size needed (see the sketch below).
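A sketch of using power to choose N, via statsmodels: solve for the per-group sample size that yields power = .80 at alpha = .05, assuming a medium effect size (d = 0.5):

```python
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group = {n:.1f}")  # about 64 per group
```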

Effect Size

The size of a relationship. Indicates the magnitude of a relationship: 0 means no relationship between the variables, and larger positive effect sizes mean a stronger relationship. Conventional benchmarks (Cohen's, for the correlation coefficient r): Small = 0.10, Medium = 0.30, Large = 0.50. (A sketch of computing one appears below.)
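A minimal sketch of computing an effect size as a Pearson r on simulated data (the strength of the relationship is baked in as an assumption), to be read against the benchmarks above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(size=100)
y = 0.3 * x + rng.normal(size=100)  # moderate built-in relationship

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}")  # compare to the .10 / .30 / .50 benchmarks
```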

Inferential Statistics

The use of sample data to draw inferences about the true state of affairs.

