AP Stats- Chapter 9: Testing a Claim
Type II Error and the Power of a Test
A significance test makes a Type II error when it fails to reject a null hypothesis H0 that really is false. There are many values of the parameter that make the alternative hypothesis Ha true, so we concentrate on one value. The probability of making a Type II error depends on several factors, including the actual value of the parameter. A high probability of Type II error for a specific alternative parameter value means that the test is not sensitive enough to reliably detect that alternative. The significance level of a test is the probability of reaching the wrong conclusion when the null hypothesis is true. The power of a test to detect a specific alternative is the probability of reaching the right conclusion when that alternative is true. We can just as easily describe the test by giving the probability of making a Type II error (sometimes called β).
Carrying Out a Significance Test
A significance test uses sample data to measure the strength of evidence against H0. Here are some principles that apply to most tests:
• The test compares a statistic calculated from sample data with the value of the parameter stated by the null hypothesis.
• Values of the statistic far from the null parameter value in the direction specified by the alternative hypothesis give evidence against H0.
Using Tests Wisely (1)
Carrying out a significance test is often quite simple, especially if you use a calculator or computer. Using tests wisely is not so simple. Here are some points to keep in mind when using or interpreting significance tests.
How Large a Sample Do I Need?
A smaller significance level requires stronger evidence to reject the null hypothesis. Higher power gives a better chance of detecting a difference when it really exists. At any significance level and desired power, detecting a small difference between the null and alternative parameter values requires a larger sample than detecting a large difference.
One Sample z-Test for a Proportion
Choose an SRS of size n from a large population that contains an unknown proportion p of successes. To test the hypothesis H0: p = p0, compute the z statistic
z = (phat − p0) / √(p0(1 − p0)/n)
Find the P-value by calculating the probability of getting a z statistic this large or larger in the direction specified by the alternative hypothesis Ha.
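As a concrete sketch with made-up numbers (56 successes in an SRS of n = 100, testing H0: p = 0.5 against Ha: p > 0.5), the z statistic and one-sided P-value can be computed with Python's standard library:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical data: 56 successes in an SRS of n = 100,
# testing H0: p = 0.5 against Ha: p > 0.5.
n, p0, successes = 100, 0.5, 56
p_hat = successes / n

# One-sample z statistic: standardize p_hat using the H0 value p0.
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# One-sided P-value: area to the right of z under the standard Normal curve.
p_value = 1 - NormalDist().cdf(z)

print(round(z, 2))        # 1.2
print(round(p_value, 4))  # 0.1151
```

A P-value near 0.12 is not small, so with these numbers we would fail to reject H0 at the usual α = 0.05 level.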
Inference for Means: Paired Data
Comparative studies are more convincing than single-sample investigations. For that reason, one-sample inference is less common than comparative inference. Study designs that involve making two observations on the same individual, or one observation on each of two similar individuals, result in paired data. When paired data result from measuring the same quantitative variable twice, as in the job satisfaction study, we can make comparisons by analyzing the differences in each pair. If the conditions for inference are met, we can use one-sample t procedures to perform inference about the mean difference µd. These methods are sometimes called paired t procedures.
Statistically significant at level α
If the P-value is smaller than α, we say that the data are statistically significant at level α. In that case, we reject the null hypothesis H0 and conclude that there is convincing evidence in favor of the alternative hypothesis Ha. When we use a fixed level of significance to draw a conclusion in a significance test:
P-value < α → reject H0 → convincing evidence for Ha
P-value ≥ α → fail to reject H0 → not convincing evidence for Ha
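The fixed-level decision rule is mechanical; this tiny sketch simply encodes the two branches:

```python
def decide(p_value, alpha=0.05):
    """Fixed-level decision rule: reject H0 when P-value < alpha."""
    if p_value < alpha:
        return "reject H0: convincing evidence for Ha"
    return "fail to reject H0: not convincing evidence for Ha"

# Hypothetical P-values, just to show both branches.
print(decide(0.012))  # reject H0
print(decide(0.231))  # fail to reject H0
```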
Type II error
If we fail to reject H0 when Ha is true, we have committed a Type II error.
Type I error
If we reject H0 when H0 is true, we have committed a Type I error.
Conditions For Performing A Significance Test About A Mean
In Chapter 8, we introduced conditions that should be met before we construct a confidence interval for a population mean: Random, 10% when sampling without replacement, and Normal/Large Sample. These same three conditions must be verified before performing a significance test about a population mean.
Random: The data come from a well-designed random sample or randomized experiment.
10%: When sampling without replacement, check that n ≤ (1/10)N.
Normal/Large Sample: The population has a Normal distribution or the sample size is large (n ≥ 30). If the population distribution has unknown shape and n < 30, use a graph of the sample data to assess the Normality of the population. Do not use t procedures if the graph shows strong skewness or outliers.
Carrying Out a Significance Test for µ
In an earlier example, a company claimed to have developed a new AAA battery that lasts longer than its regular AAA batteries. Based on years of experience, the company knows that its regular AAA batteries last for 30 hours of continuous use, on average. An SRS of 15 new batteries lasted an average of 33.9 hours with a standard deviation of 9.8 hours. Do these data give convincing evidence that the new batteries last longer on average? To find out, we must perform a significance test of
H0: µ = 30 hours
Ha: µ > 30 hours
where µ = the true mean lifetime of the new AAA batteries.
Stating a hypothesis
In any significance test, the null hypothesis has the form
H0: parameter = value
The alternative hypothesis has one of the forms
Ha: parameter < value
Ha: parameter > value
Ha: parameter ≠ value
To determine the correct form of Ha, read the problem carefully.
Two-sided alternative hypothesis
It is two-sided if it states that the parameter is different from the null hypothesis value (it could be either larger or smaller).
Conditions For Performing A Significance Test About A Proportion
Random: The data come from a well-designed random sample or randomized experiment.
10%: When sampling without replacement, check that n ≤ (1/10)N.
Large Counts: Both np0 and n(1 − p0) are at least 10.
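A quick sketch of checking the two numerical conditions (the Random condition is a judgment about the study design, not a calculation). The population size N = 5000 and the other numbers are hypothetical:

```python
def check_conditions(n, p0, N):
    """Check the 10% and Large Counts conditions for a one-sample
    z test for a proportion (Random must be judged from the design)."""
    ten_percent = n <= N / 10
    large_counts = n * p0 >= 10 and n * (1 - p0) >= 10
    return ten_percent, large_counts

# Hypothetical: SRS of 100 from a population of 5,000, testing p0 = 0.5.
print(check_conditions(100, 0.5, 5000))   # (True, True)

# With p0 = 0.05, np0 = 5 < 10, so Large Counts fails.
print(check_conditions(100, 0.05, 5000))  # (True, False)
```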
Significance Tests: A Four-Step Process
State: What hypotheses do you want to test, and at what significance level? Define any parameters you use.
Plan: Choose the appropriate inference method. Check conditions.
Do: If the conditions are met, perform calculations. Compute the test statistic. Find the P-value.
Conclude: Make a decision about the hypotheses in the context of the problem.
Using Tests Wisely (2)
Statistical Significance and Practical Importance
When a null hypothesis ("no effect" or "no difference") can be rejected at the usual levels (α = 0.05 or α = 0.01), there is good evidence of a difference. But that difference may be very small. When large samples are available, even tiny deviations from the null hypothesis will be significant.
Beware of Multiple Analyses
Statistical significance ought to mean that you have found a difference that you were looking for. The reasoning behind statistical significance works well if you decide what difference you are seeking, design a study to search for it, and use a significance test to weigh the evidence you get. In other settings, significance may have little meaning.
Using Table B Wisely
Table B gives a range of possible P-values for a significance test. We can still draw a conclusion from the test in much the same way as if we had a single probability by comparing the range of possible P-values to our desired significance level. Table B includes probabilities only for t distributions with degrees of freedom from 1 to 30 and then skips to df = 40, 50, 60, 80, 100, and 1000. (The bottom row gives probabilities for df = ∞, which corresponds to the standard Normal curve.) Note: If the df you need isn't provided in Table B, use the next lower df that is available. Table B shows probabilities only for positive values of t. To find a P-value for a negative value of t, we use the symmetry of the t distributions.
Two-Sided Tests
The P-value in a one-sided test is the area in one tail of a standard Normal distribution, the tail specified by Ha. In a two-sided test, the alternative hypothesis has the form Ha: p ≠ p0. The P-value in such a test is the probability of getting a sample proportion as far as or farther from p0 in either direction than the observed value of phat. As a result, you have to find the area in both tails of a standard Normal distribution to get the P-value.
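As a sketch with made-up numbers (phat = 0.56, n = 100, testing H0: p = 0.5 against Ha: p ≠ 0.5), a two-sided P-value doubles the area in one tail:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical data: phat = 0.56 from an SRS of n = 100,
# two-sided test of H0: p = 0.5 vs Ha: p ≠ 0.5.
n, p0, p_hat = 100, 0.5, 0.56
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# Two-sided P-value: area in BOTH tails beyond |z|, i.e. double one tail.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(round(p_value, 4))  # 0.2301
```

Note that the two-sided P-value here is exactly twice the one-sided value for the same data, which is why a two-sided test demands stronger sample evidence to reach significance.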
One-sided alternative hypothesis
The alternative hypothesis is one-sided if it states that a parameter is larger than the null hypothesis value or if it states that the parameter is smaller than the null value.
Alternative hypothesis
The claim about the population that we are trying to find evidence for is the alternative hypothesis Ha.
Null hypothesis
The claim we weigh evidence against in a statistical test is called the null hypothesis H0. Often the null hypothesis is a statement of "no difference."
Two-Sided Tests and Confidence Intervals
The connection between two-sided tests and confidence intervals is even stronger for means than it was for proportions. That's because both inference methods for means use the standard error of the sample mean in the calculations.
Test statistic: t = (xbar − µ0)/(sx/√n)
Confidence interval: xbar ± t* · sx/√n
A two-sided test at significance level α (say, α = 0.05) and a 100(1 − α)% confidence interval (a 95% confidence interval if α = 0.05) give similar information about the population parameter. When the two-sided significance test at level α rejects H0: µ = µ0, the 100(1 − α)% confidence interval for µ will not contain the hypothesized value µ0. When the two-sided significance test at level α fails to reject the null hypothesis, the confidence interval for µ will contain µ0.
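A sketch of this duality using the battery data from the earlier example (n = 15, xbar = 33.9, sx = 9.8) and the Table B critical value t* = 2.145 for df = 14 and 95% confidence:

```python
from math import sqrt

# Battery data: n = 15, xbar = 33.9, sx = 9.8, testing H0: µ = 30 (two-sided).
n, xbar, sx, mu0 = 15, 33.9, 9.8, 30
se = sx / sqrt(n)             # standard error of the sample mean

t = (xbar - mu0) / se         # one-sample t statistic
t_star = 2.145                # Table B critical value, df = 14, 95% confidence

ci = (xbar - t_star * se, xbar + t_star * se)
rejected = abs(t) > t_star    # two-sided test at alpha = 0.05

print(round(t, 3), rejected)  # 1.541 False
print(ci[0] < mu0 < ci[1])    # True: the interval contains µ0 = 30,
                              # matching the "fail to reject" decision
```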
Statistical significance
The final step in performing a significance test is to draw a conclusion about the competing claims you were testing. We make one of two decisions based on the strength of the evidence against the null hypothesis (and in favor of the alternative hypothesis): reject H0 or fail to reject H0.
Note: A fail-to-reject H0 decision in a significance test doesn't mean that H0 is true. For that reason, you should never "accept H0" or use language implying that you believe H0 is true.
In a nutshell, our conclusion in a significance test comes down to
P-value small → reject H0 → convincing evidence for Ha
P-value large → fail to reject H0 → not convincing evidence for Ha
Stating hypotheses
The hypotheses should express the hopes or suspicions we have before we see the data. It is cheating to look at the data first and then frame hypotheses to fit what the data show. Hypotheses always refer to a population, not to a sample. Be sure to state H0 and Ha in terms of population parameters. It is never correct to write a hypothesis about a sample statistic, such as phat = 0.64 or xbar = 85.
P-value
The null hypothesis H0 states the claim that we are seeking evidence against. The probability that measures the strength of the evidence against a null hypothesis is called a P-value. The probability, computed assuming H0 is true, that the statistic would take a value as extreme as or more extreme than the one actually observed is called the P-value of the test. Small P-values are evidence against H0 because they say that the observed result is unlikely to occur when H0 is true. Large P-values fail to give convincing evidence against H0 because they say that the observed result is likely to occur by chance when H0 is true.
Power of a test
The power of a test against a specific alternative is the probability that the test will reject H0 at a chosen significance level α when the specified alternative value of the parameter is true.
Power and Type II Error
The power of a test against any alternative is 1 minus the probability of a Type II error for that alternative; that is, power = 1 − β.
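A sketch of computing power and β for a hypothetical one-sided z test for a proportion (H0: p = 0.5 vs Ha: p > 0.5, n = 100, α = 0.05, against the specific alternative p = 0.6):

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()

# Hypothetical one-sided test: H0: p = 0.5 vs Ha: p > 0.5,
# n = 100, alpha = 0.05, specific alternative p = 0.6.
n, p0, p_alt, alpha = 100, 0.5, 0.6, 0.05

# Rejection region: reject H0 when phat exceeds this cutoff.
cutoff = p0 + Z.inv_cdf(1 - alpha) * sqrt(p0 * (1 - p0) / n)

# Power: probability of landing in the rejection region when p = p_alt.
power = 1 - Z.cdf((cutoff - p_alt) / sqrt(p_alt * (1 - p_alt) / n))
beta = 1 - power   # probability of a Type II error against this alternative

print(round(power, 2), round(beta, 2))  # 0.64 0.36
```

Rerunning the sketch with a larger n or an alternative farther from p0 raises the power, which matches the sample-size discussion above.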
Why Confidence Intervals Give More Information
The result of a significance test is basically a decision to reject H0 or fail to reject H0. When we reject H0, we're left wondering what the actual proportion p might be. A confidence interval might shed some light on this issue. There is a link between confidence intervals and two-sided tests. The 95% confidence interval gives an approximate range of p0's that would not be rejected by a two-sided test at the α = 0.05 significance level. A two-sided test at significance level α (say, α = 0.05) and a 100(1 -α)% confidence interval (a 95% confidence interval if α = 0.05) give similar information about the population parameter.
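As a sketch with hypothetical data (phat = 0.56, n = 100), the 95% confidence interval contains p0 = 0.5, which agrees with a two-sided test that fails to reject H0 at α = 0.05:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical data: phat = 0.56 from an SRS of n = 100; H0: p = 0.5.
n, p_hat, p0 = 100, 0.56, 0.5

# 95% confidence interval for p (its SE is based on phat).
z_star = NormalDist().inv_cdf(0.975)     # ~1.96
se = sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - z_star * se, p_hat + z_star * se)

# The link is only approximate: the test's standard deviation uses p0,
# while the interval's standard error uses phat.
print(round(ci[0], 3), round(ci[1], 3))  # 0.463 0.657
print(ci[0] < p0 < ci[1])                # True
```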
Type I and Type II errors
The significance level α of any fixed-level test is the probability of a Type I error. That is, α is the probability that the test will reject the null hypothesis H0 when H0 is actually true. Consider the consequences of a Type I error before choosing a significance level.
Significance level
There is no rule for how small a P-value we should require in order to reject H0. But we can compare the P-value with a fixed value that we regard as decisive, called the significance level. We write it as α, the Greek letter alpha.
Carrying Out a Significance Test for µ slide 3
When performing a significance test, we do calculations assuming that the null hypothesis H0 is true. The test statistic measures how far the sample result diverges from the parameter value specified by H0, in standardized units:
test statistic = (statistic − parameter)/(standard deviation of statistic)
For a test of H0: µ = µ0, our statistic is the sample mean. Its standard deviation is
σ_xbar = σ/√n
Because the population standard deviation σ is usually unknown, we use the sample standard deviation sx in its place. The resulting test statistic has the standard error of the sample mean in the denominator:
t = (xbar − µ0)/(sx/√n)
When the Normal condition is met, this statistic has a t distribution with n − 1 degrees of freedom.
The One-Sample t-Test
When the conditions are met, we can test a claim about a population mean µ using a one-sample t test.
One-Sample t Test for a Mean
Choose an SRS of size n from a large population with unknown mean µ. To test the hypothesis H0: µ = µ0, compute the one-sample t statistic
t = (xbar − µ0)/(sx/√n)
Find the P-value by calculating the probability of getting a t statistic this large or larger in the direction specified by the alternative hypothesis Ha in a t distribution with df = n − 1.
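Applied to the battery example (n = 15, xbar = 33.9, sx = 9.8, H0: µ = 30 vs Ha: µ > 30), with the P-value bracketed by Table B critical values for df = 14:

```python
from math import sqrt

# Battery example: n = 15, xbar = 33.9, sx = 9.8,
# testing H0: µ = 30 vs Ha: µ > 30.
n, xbar, sx, mu0 = 15, 33.9, 9.8, 30

t = (xbar - mu0) / (sx / sqrt(n))
print(round(t, 3))  # 1.541

# Bracket the P-value with Table B critical values for df = 14:
# upper-tail probability 0.10 <-> t* = 1.345, and 0.05 <-> t* = 1.761.
assert 1.345 < t < 1.761
print("0.05 < P-value < 0.10 -> fail to reject H0 at alpha = 0.05")
```

Since the P-value exceeds α = 0.05, these data do not give convincing evidence that the new batteries last longer on average.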
The One-Sample z-Test for a Proportion
When the conditions are met (Random, 10%, and Large Counts), the sampling distribution of phat is approximately Normal with mean µ_phat = p and standard deviation σ_phat = √(p(1 − p)/n). The z statistic has approximately the standard Normal distribution when H0 is true. P-values therefore come from the standard Normal distribution.
Significance test
a formal procedure for comparing observed data with a claim (also called a hypothesis) whose truth we want to assess. The claim is a statement about a parameter, like the population proportion p or the population mean µ. We express the results of a significance test in terms of a probability that measures how well the data and the claim agree.
Test statistic
measures how far a sample statistic diverges from what we would expect if the null hypothesis H0 were true, in standardized units. That is,
test statistic = (statistic − parameter)/(standard deviation of statistic)