Confidence Intervals, Effect Size and Statistical Power


Calculating Confidence Intervals 95% and 90%

Given μ = 20.4, σ = 3.2, N = 30, and a sample mean M = 17.5:

σM = σ/√N = 3.2/√30 = 0.584

M lower = -z(σM) + M sample = -1.96(0.584) + 17.5 = 16.36
M upper = z(σM) + M sample = 1.96(0.584) + 17.5 = 18.64

The 95% confidence interval around the mean of 17.5 is [16.36, 18.64].

To calculate the 90% CI for the same data, find the z values that mark off the most extreme 0.05 in each tail, which are -1.65 and 1.65, then calculate the lower and upper bounds:

M lower = -1.65(0.584) + 17.5 = 16.54
M upper = 1.65(0.584) + 17.5 = 18.46

The 90% confidence interval around the mean of 17.5 is [16.54, 18.46].
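The worked example above can be sketched as a small Python function (a minimal illustration of the same arithmetic; small differences from the hand calculation come from rounding σM to 0.584 in the notes):

```python
from math import sqrt

def confidence_interval(m_sample, sigma, n, z):
    """Return (lower, upper) bounds of a CI centered on a sample mean."""
    sem = sigma / sqrt(n)  # standard error of the mean, sigma_M
    return (m_sample - z * sem, m_sample + z * sem)

# 95% CI uses z = 1.96; 90% CI uses z = 1.65 (0.05 in each tail)
lo95, hi95 = confidence_interval(17.5, 3.2, 30, 1.96)  # ≈ [16.36, 18.64]
lo90, hi90 = confidence_interval(17.5, 3.2, 30, 1.65)  # ≈ [16.54, 18.46]
print(round(lo95, 2), round(hi95, 2))
print(round(lo90, 2), round(hi90, 2))
```

Note that the sample mean sits exactly halfway between each pair of bounds, which is the Step 5 sanity check described later in these notes.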

Effect Size definition

In everyday language, we use the word effect to refer to the outcome of some event. Statisticians use the word in a similar way: an effect is the change in a dependent variable produced by some event (the independent variable). When statisticians calculate an effect size, they are calculating the size of that outcome.

Statistically significant

Increasing sample size always increases the test statistic if all else remains the same. Because of this, a small difference might not be statistically significant with a small sample but might be statistically significant with a larger sample. A real difference does not mean a large or meaningful difference. Therefore, when you hear "statistical significance," do not interpret this as an indication of practical importance. There may be a statistically significant difference between group means, yet the difference might not be meaningful or have real-life significance. A statistically significant difference indicates only that the difference between the means is unlikely to be due to chance. It does not tell us there is no overlap in the distributions of the two populations being compared. Drawings of such distributions will vary, but the two curves typically overlap even when their means differ.

Statistical Significance and Practical Importance

Statistical significance means that the observation meets the standard for rare events, typically something that occurs less than 5% of the time. Practical importance means that the outcome really matters in real life.

Five factors that affect statistical power

Researchers often use a computerized statistical power calculator to determine the sample size needed to achieve 0.80 statistical power. It is important to know these effects when reading research, and always to ask whether there was sufficient statistical power to detect real findings. Statistical power is affected most directly by sample size, but it is also affected by other factors:

1. Increase the alpha. This is like changing the rules by widening the goalposts. The side effect is an increased probability of a Type I error, say from 5% to 10%, so researchers rarely choose to increase statistical power this way.

2. Turn a two-tailed hypothesis into a one-tailed hypothesis. A one-tailed test provides more statistical power, but although it is more powerful, it is usually best to use the more conservative two-tailed test.

3. Increase N. Increasing the sample size increases the test statistic and is an easy way to increase statistical power.

4. Exaggerate the mean difference between levels of the independent variable. For example, a longer program leads to a larger change in means than a shorter program.

5. Decrease the standard deviation, either by using reliable measures from the beginning of the study to reduce error, or by sampling from a more homogeneous group in which participants' responses are more likely to be similar to begin with.
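Factor 3 (increasing N) can be sketched numerically. This is a hypothetical illustration reusing the session-study numbers from the power calculation later in these notes (comparison mean 4.6, hypothesized mean 6.2, σ = 3.12), for a one-tailed z test at α = 0.05:

```python
from math import sqrt
from statistics import NormalDist

def power_one_tailed_z(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a one-tailed z test for detecting mu1 > mu0."""
    sem = sigma / sqrt(n)                    # standard error shrinks as N grows
    z_crit = NormalDist().inv_cdf(1 - alpha) # ≈ 1.645
    m_crit = mu0 + z_crit * sem              # raw-score cutoff
    return 1 - NormalDist().cdf((m_crit - mu1) / sem)

# Same mean difference and sigma; only N changes:
for n in (9, 36, 81):
    print(n, round(power_one_tailed_z(4.6, 6.2, 3.12, n), 2))
```

With everything else held constant, power climbs from well under the 0.80 convention at N = 9 toward near-certainty at N = 81, which is exactly why sample size is the factor researchers usually adjust.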

Statistical Power Calculation

Step 1. Determine the information needed to calculate statistical power: the hypothesized mean for the sample, the sample size, the population mean, the population standard deviation, and the standard error based on this sample size. For a z test with a planned sample of 9 students, a hypothesized mean of 6.2 sessions, a population mean of 4.6 sessions, and a population standard deviation of 3.12 (a mean difference of 1.6, so Cohen's d ≈ 0.5):

Mean of population 1 (hypothesized sample mean): M = 6.2
Planned sample size: N = 9
Mean of population 2: μM = μ = 4.6
Standard deviation of the population: σ = 3.12
Standard error with the planned sample size: σM = σ/√N = 3.12/√9 = 1.04

Step 2. Determine a critical value in terms of the z distribution and the raw mean so that statistical power can be calculated: M = 1.65(1.04) + 4.6 = 6.316

Step 3. Calculate the statistical power: the percentage of the distribution of means for population 1 (the distribution centered around the hypothesized sample mean) that falls above the critical value. z = (6.316 - 6.2)/1.04 = 0.112. Look up a z statistic of 0.112 in the z table to determine the percentage above it; that percentage is the statistical power.
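The three steps above can be sketched directly, using `statistics.NormalDist` in place of the z table lookup (a minimal sketch of the same calculation):

```python
from math import sqrt
from statistics import NormalDist

# Step 1: the ingredients
mu_comparison = 4.6    # mean of population 2 (mu)
mu_hypothesized = 6.2  # mean of population 1 (hypothesized sample mean)
sigma, n = 3.12, 9
sem = sigma / sqrt(n)  # standard error = 1.04

# Step 2: critical value as a raw mean (one-tailed, p = 0.05, z = 1.65)
m_critical = 1.65 * sem + mu_comparison   # 6.316

# Step 3: percentage of population 1's distribution of means above the cutoff
z = (m_critical - mu_hypothesized) / sem  # ≈ 0.112
power = 1 - NormalDist().cdf(z)           # area above z, i.e. the z-table lookup
print(round(power, 3))
```

The area above z = 0.112 is roughly 0.456, i.e. about 46% power, well below the conventional 0.80 minimum, suggesting the planned N of 9 is too small.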

Meta-Analysis Calculation

Step 1. Select the topic of interest, and decide exactly how to proceed before beginning to track down studies. Step 2. Locate every study that has been conducted and meets the criteria. Step 3. Calculate an effect size, often Cohen's d, for every study. Step 4. Calculate statistics: ideally summary statistics, a hypothesis test, a confidence interval, and a visual display of the effect sizes. The goal of the meta-analysis is a mean effect size across all the studies, applying all of the usual statistical tools: means, medians, standard deviations, confidence intervals, hypothesis tests, and graphs. The goal of hypothesis testing with meta-analysis is to reject the null hypothesis that the mean effect size is 0.
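Steps 3 and 4 can be sketched with made-up effect sizes (the `effect_sizes` values below are purely illustrative, and a real meta-analysis would typically weight each d by its study's sample size; this unweighted version just shows the summary-statistics idea):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical Cohen's d values gathered from several located studies (Step 3)
effect_sizes = [0.11, 0.33, 0.41, 0.54, 0.22]

# Step 4: mean effect size and a 95% CI around it
d_mean = mean(effect_sizes)
se = stdev(effect_sizes) / sqrt(len(effect_sizes))
ci = (d_mean - 1.96 * se, d_mean + 1.96 * se)
print(round(d_mean, 2), [round(b, 2) for b in ci])
```

If the resulting confidence interval excludes 0 (as it does for these illustrative values), we reject the null hypothesis that the mean effect size is 0.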

Confidence Intervals Calculation Steps

The symmetry of the z distribution makes it easy to calculate confidence intervals.

Step 1. Draw a picture of a distribution that will include the confidence interval. Draw the normal curve with the sample mean at the center, INSTEAD of the population mean.

Step 2. Indicate the bounds (percentages) on the drawing, within and beyond the confidence interval. Draw a vertical line from the mean to the top of the curve, with two small vertical lines to indicate the middle 95% of the curve (2.5% in each tail, for a total of 5%).

Step 3. Determine the z statistics that fall at the lines marking the middle 95%. Turn back to the z TABLE. The percentage between the mean and each z score is 47.5%; looking up this percentage in the z table gives a z statistic of 1.96 (identical to the cutoffs for the z test, because a p level of 0.05 corresponds to a confidence level of 95%). ADD the z statistics of -1.96 and 1.96 to the curve.

Step 4. Turn the z statistics back into raw means. Identify the appropriate mean and standard deviation. THESE ARE VERY IMPORTANT! First, center the interval around the sample mean (not the population mean). Second, because this is a sample mean, use a distribution of means, calculating the standard error as the measure of spread: σM = σ/√N. NEXT: using this mean and standard error, calculate the raw mean at each end of the confidence interval and add them to the curve: M lower = -z(σM) + M sample = -1.96(σM) + M sample and M upper = z(σM) + M sample = 1.96(σM) + M sample.

Step 5. Check that the confidence interval makes sense: the sample mean should fall exactly in the middle of the two ends of the interval. If it does, you have a match.

Intervals and Overlap

When two intervals do not overlap (p. 188), we conclude that the population means are likely different. However, when two intervals do overlap, it is plausible that the two population means are equal. ALERT: the terms margin of error, interval estimate, and confidence interval all represent the same idea.

Effect Size and Standard Deviation

When two population distributions have smaller spread (smaller standard deviations), the overlap of the distributions is less and the effect size is bigger.

Effect Size and Mean Differences

When two population means are farther apart, the overlap of the distributions is less and the effect size is bigger.

Effect Size

indicates the size of a difference and is unaffected by sample size. Effect size tells us how much two populations do not overlap: the less overlap, the bigger the effect size. The amount of overlap between two distributions can be decreased in two ways. FIRST: overlap decreases and effect size increases when the means are farther apart. SECOND: overlap decreases and effect size increases when the variability within each distribution of scores is smaller. We use distributions of scores, rather than distributions of means, to calculate effect size. If two distributions overlap a lot, you would probably find a small effect size and not be willing to conclude that the distributions are necessarily different. If the distributions do not overlap much, this is evidence for a larger effect, a meaningful difference between them. Effect size is a standardized value that indicates the size of a difference with respect to a measure of spread but is not affected by the sample size.

Cohen's d

is a measure of effect size that assesses the difference between two means in terms of standard deviation, not standard error. Cohen's d allows us to measure the difference between means using standard deviations, much like the z statistic. We accomplish this by using the standard deviation in the DENOMINATOR rather than the standard error. Compare the z statistic, z = (M - μM)/σM where σM = σ/√N, with Cohen's d: d = (M - μ)/σ. Once you have the effect size, it is written like this: d = 0.07, meaning the two means are 0.07 standard deviation apart. Here are Cohen's conventions for interpreting effect size:

Effect size    Convention (d)    Overlap
Small          0.2               85%
Medium         0.5               67%
Large          0.8               53%

Sometimes even a small effect can be meaningful.
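The key contrast, standard error in the z statistic versus standard deviation in Cohen's d, can be sketched with the session-study numbers from the power example (M = 6.2, μ = 4.6, σ = 3.12):

```python
from math import sqrt

def cohens_d(m_sample, mu, sigma):
    """Effect size: mean difference in standard-deviation units (no N)."""
    return (m_sample - mu) / sigma

def z_statistic(m_sample, mu, sigma, n):
    """Test statistic: mean difference in standard-error units (depends on N)."""
    return (m_sample - mu) / (sigma / sqrt(n))

print(round(cohens_d(6.2, 4.6, 3.12), 2))         # d ≈ 0.51 regardless of N
print(round(z_statistic(6.2, 4.6, 3.12, 9), 2))   # z with N = 9
print(round(z_statistic(6.2, 4.6, 3.12, 36), 2))  # same difference, larger z
```

Because N never enters the d formula, the effect size stays a "medium" 0.5 no matter how many participants are collected, while the z statistic grows with N, which is exactly why effect size is the better gauge of practical importance.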

Statistical Power

is a measure of the likelihood that we will reject the null hypothesis, given that the null hypothesis is false. Statistical power is the probability that we will REJECT the null hypothesis when we should reject it; equivalently, it is the probability that we will not make a Type II error. In everyday language, we use the word power to mean either an ability to get something done or an ability to make others do things. Statisticians use the word power to refer to the ability to detect an effect, given that one exists. Statistical power ranges from a probability of 0.00 to a probability of 1.00 (0% to 100%). Statisticians have historically used a probability of 0.80 as the minimum for conducting a study: if there is an 80% chance of correctly rejecting the null hypothesis, it is appropriate to conduct the study. Understanding statistical power requires considering two populations of interest: 1. the population the sample is believed to represent, and 2. the population to which the sample is being compared. These two populations can be represented visually as two overlapping curves. On a practical level, statistical power calculations tell researchers how many participants are needed to conduct a study whose findings we can trust, remembering that this is just an estimate.

Meta-Analysis

is a study that involves the calculation of a mean effect size from the individual effect sizes of many studies. Meta-analysis provides added statistical power by considering many studies simultaneously and helps to resolve debates fueled by contradictory research findings. The goal of a meta-analysis is to find the mean of the effect sizes from many different studies that all manipulated the same independent variable and measured the same dependent variable.

Point Estimate

is a summary statistic from a sample that is just one number used as an estimate of a population parameter. So a mean taken from a sample is a point estimate: a single number used as an estimate for a population parameter. Point estimates are rarely exactly accurate. You increase accuracy by using an INTERVAL ESTIMATE when possible.

Confidence Intervals

is an interval estimate based on a sample statistic; it includes the population mean a certain percentage of the time, were we to sample from the population repeatedly. The confidence interval is centered around the mean of the sample. A 95% confidence level is most commonly used, indicating the 95% of the curve that falls between the two tails (i.e., 100% - 5% = 95%). Confidence intervals add details to the hypothesis test. Specifically, they tell us a range within which the population mean would fall 95% of the time if we were to conduct repeated hypothesis tests using samples of the same size from the same population. NOTE: the confidence level is 95%, but the confidence interval is the range between the two values that surround the sample mean. 1. Draw a normal curve with the sample mean in the center. 2. Indicate the bounds of the confidence interval on either end, and write the percentages under each segment of the curve. 3. Look up the z statistics for the LOWER and UPPER ends of the confidence interval in the z table; these are always -1.96 and 1.96 for a 95% confidence interval around a z statistic. 4. Convert the z statistics to raw means for each end of the confidence interval. 5. Check your answer: each end of the confidence interval should be exactly the same distance from the sample mean.

Interval Estimate

is based on a sample statistic and provides a range of plausible values for the population parameter. Interval estimates are frequently used by the media when reporting political polls (rarely accurate and mostly skewed with bias, I might add) and are constructed by adding and subtracting a margin of error from a point estimate. Interval estimates provide a range of plausible values, not just one statistic: a range of scores in which we have some confidence the population parameter will fall, whereas point estimates use just a single value to describe the population.

