MA 180: Statistics, Chapters 7-9


Finding the Point Estimate and E from a Confidence Interval

Point estimate of p: p̂ = [(upper confidence interval limit) + (lower confidence interval limit)] / 2. Margin of error: E = [(upper confidence interval limit) − (lower confidence interval limit)] / 2.
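A minimal sketch of these two computations; the interval limits 0.72 and 0.78 are made-up values used only for illustration.

```python
# Sketch: recover the point estimate p-hat and margin of error E
# from the two limits of a reported confidence interval.
lower, upper = 0.72, 0.78   # made-up confidence interval limits

p_hat = (upper + lower) / 2   # point estimate of p
E = (upper - lower) / 2       # margin of error

print(p_hat, E)   # 0.75 and 0.03 (up to floating-point rounding)
```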

test statistic

is a value used in making a decision about the null hypothesis. It is found by converting the sample statistic (such as the sample proportion p̂, the sample mean x̄, or the sample standard deviation s) to a score (such as z, t, or χ²) with the assumption that the null hypothesis is true.

Common critical values z_{α/2}

Confidence level 90%: 1.645; 95%: 1.96; 99%: 2.575

Corresponding values of α for a one-tailed test: α = 0.10, α = 0.05, α = 0.01

Confidence level to use for the one-tailed test (1 − 2α): 80% (0.80), 90% (0.90), 98% (0.98)
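As a quick check, the two-tailed critical values above can be reproduced numerically. This is a minimal sketch assuming SciPy is available; note that SciPy reports 2.576 for 99%, while the table rounds to 2.575.

```python
# Sketch: reproduce the common two-tailed critical values z_{alpha/2}
# with the inverse standard normal CDF (assumes SciPy is installed).
from scipy.stats import norm

for conf in (0.90, 0.95, 0.99):
    alpha = 1 - conf
    z = norm.ppf(1 - alpha / 2)   # z score with area alpha/2 to its right
    print(f"{conf:.0%}: z = {z:.3f}")
# Expected: 90%: 1.645, 95%: 1.960, 99%: 2.576
```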

critical value

A critical value is the number on the borderline separating sample statistics that are likely to occur from those that are unlikely. The number z_{α/2} is a critical value that is a z score with the property that it separates an area of α/2 in the right tail of the standard normal distribution (as in Figure 7-2).

point estimate

A point estimate is a single value (or point) used to approximate a population parameter.

Most common confidence levels (two-tailed test): 90% (0.90), 95% (0.95), 99% (0.99)

Corresponding values of α: α = 0.10, α = 0.05, α = 0.01

Rationale for the Test Statistic and Confidence Interval

If the given assumptions are satisfied, the sampling distribution of x̄₁ − x̄₂ can be approximated by a t distribution with mean equal to μ₁ − μ₂ and standard deviation equal to √(s₁²/n₁ + s₂²/n₂).
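A minimal sketch of that standard deviation (standard error) computation, using made-up summary statistics.

```python
# Sketch: standard error of (x-bar1 - x-bar2) for independent samples,
# sqrt(s1^2/n1 + s2^2/n2), with made-up sample summaries.
from math import sqrt

s1, n1 = 10.2, 35   # sample 1 standard deviation and size (hypothetical)
s2, n2 = 12.5, 40   # sample 2 standard deviation and size (hypothetical)

se = sqrt(s1**2 / n1 + s2**2 / n2)
print(se)
```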

Why Not Eliminate the Method of Pooling Sample Variances?

If we use randomness to assign subjects to treatment and placebo groups, we know that the samples are drawn from the same population. So if we conduct a hypothesis test assuming that two population means are equal, it is not unreasonable to also assume that the samples are from populations with the same standard deviations (but we should still check that assumption). The advantage of this alternative method of pooling sample variances is that the number of degrees of freedom is a little higher, so hypothesis tests have more power and confidence intervals are a little narrower. Consequently, statisticians sometimes use this method of pooling, and that is why we include it in this subsection.

hypothesis test (or test of significance)

In statistics, a hypothesis is a claim or statement about a property of a population. A hypothesis test (or test of significance) is a procedure for testing a claim about a property of a population.

χ² distribution Requirements: Parameter: standard deviation σ or variance σ²

Strict requirement: normally distributed population

example of dependent samples

Students of the author collected data consisting of the heights (cm) of husbands and the heights (cm) of their wives; five such pairs of heights were used as the sample data. These two samples are dependent, because the height of each husband is matched with the height of his wife.

A newspaper provided a "snapshot" illustrating poll results from 1910 professionals who interview job applicants. The illustration showed that 26% of them said the biggest interview turnoff is that the applicant did not make an effort to learn about the job or the company. The margin of error was given as plus or minus 3 percentage points. What important feature of the poll was omitted?

The confidence level

confidence level

The confidence level is the probability 1 − α (such as 0.95, or 95%) that the confidence interval actually does contain the population parameter, assuming that the estimation process is repeated a large number of times. (The confidence level is also called the degree of confidence, or the confidence coefficient.)

Count Five

The count five method is a relatively simple alternative to the F test, and it does not require normally distributed populations. (See "A Quick, Compact, Two-Sample Dispersion Test: Count Five," by McGrath and Yeh, American Statistician, Vol. 59, No. 1.) If the two sample sizes are equal, and if one sample has at least five of the largest mean absolute deviations (MAD), then we conclude that its population has a larger variance. See Exercise 19 for the specific procedure.

Type I error

The mistake of rejecting the null hypothesis when it is actually true. The symbol α (alpha) is used to represent the probability of a type I error: α = probability of a type I error (the probability of rejecting the null hypothesis when it is true).

Finding the Sample Size Required to Estimate a Population Mean Requirement and Notation

The sample must be a simple random sample. Notation: μ = population mean; σ = population standard deviation; x̄ = sample mean; E = desired margin of error; z_{α/2} = z score separating an area of α/2 in the right tail of the standard normal distribution.
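The notes list the notation but not the formula itself; assuming the standard sample size formula n = (z_{α/2}·σ/E)², rounded up to a whole number, a sketch with made-up values of σ and E looks like this.

```python
# Sketch: required sample size for estimating a population mean,
# n = (z_{alpha/2} * sigma / E)^2, always rounded UP.
# sigma = 15 and E = 2 are made-up values; assumes SciPy.
import math
from scipy.stats import norm

sigma = 15      # assumed population standard deviation
E = 2           # desired margin of error
conf = 0.95
z = norm.ppf(1 - (1 - conf) / 2)   # about 1.96 for 95% confidence

n = math.ceil((z * sigma / E) ** 2)
print(n)   # 217 with these illustrative numbers
```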

Sample size n > 30 :

This is a common guideline, but sample sizes of 15 to 30 are adequate if the population appears to have a distribution that is not far from being normal and there are no outliers. For some population distributions that are extremely far from normal, the sample size might need to be larger than 30.

Requirement of Normality or n > 30

This t test is robust against a departure from normality, meaning that the test works reasonably well if the departure from normality is not too extreme. Verify that there are no outliers and that the histogram or dotplot has a shape that is not very far from a normal distribution.

critical value

With the critical value method (or traditional method), we find the critical value(s), which separate the critical region (where we reject the null hypothesis) from the values of the test statistic that do not lead to rejection of the null hypothesis. Critical values depend on the nature of the null hypothesis, the sampling distribution, and the significance level α. Example: for a right-tailed test with α = 0.05, the critical value is z = 1.645.

critical region

corresponds to the values of the test statistic that cause us to reject the null hypothesis. Depending on the claim being tested, the critical region could be in the two extreme tails, it could be in the left tail, or it could be in the right tail.

Two samples independent

if the sample values from one population are not related to or somehow naturally paired or matched with the sample values from the other population.

Normal (Z) distribution Requirements: Parameter: Proportion p

np ≥ 5 and nq ≥ 5. Example: The claim p > 0.5 is a claim about the population proportion p, so use the normal distribution provided that the requirements are satisfied. (With n = 100, p = 0.5, and q = 0.5 as in Example 1, np ≥ 5 and nq ≥ 5 are both true.)
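A sketch of the corresponding test statistic, z = (p̂ − p)/√(pq/n). The claim p > 0.5 and n = 100 come from the example above; the count x = 58 is a made-up sample result, chosen so the output (z = 1.60, P-value ≈ 0.0548) matches the decision example used later in these notes. Assumes SciPy.

```python
# Sketch: one-proportion z test statistic and right-tailed P-value
# for the claim p > 0.5.
from math import sqrt
from scipy.stats import norm

n, x = 100, 58             # sample size and a made-up number of successes
p0 = 0.5                   # proportion claimed in H0
q0 = 1 - p0
p_hat = x / n

z = (p_hat - p0) / sqrt(p0 * q0 / n)
p_value = norm.sf(z)       # right-tailed test for the claim p > 0.5
print(z, p_value)          # about 1.60 and 0.0548
```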

The power of a hypothesis test is

the probability 1 − β of rejecting a false null hypothesis. The value of the power is computed by using a particular significance level α and a particular value of the population parameter that is an alternative to the value assumed true in the null hypothesis.

Normal (Z) distribution Requirements: Parameter: Mean μ

σ known and normally distributed population or σ known and n > 30

T (sampling distribution) Requirements: Parameter: Mean μ

σ not known and normally distributed population or σ not known and n > 30

Make a Decision: Reject H₀ or Fail to Reject H₀ — P-value Method / Critical Value Method

P-value method: • If P-value ≤ α, reject H₀. • If P-value > α, fail to reject H₀. Example: With significance level α = 0.05 and P-value = 0.0548, we have P-value > α, so fail to reject H₀. Critical value method: • If the test statistic is in the critical region, reject H₀. • If the test statistic is not in the critical region, fail to reject H₀. Example: With test statistic z = 1.60 and the critical region from z = 1.645 to infinity, the test statistic does not fall within the critical region, so fail to reject H₀.

Types of Hypothesis Tests: Two-Tailed, Left-Tailed, Right-Tailed

• Two-tailed test: The critical region is in the two extreme regions (tails) under the curve • Left-tailed test: The critical region is in the extreme left region (tail) under the curve. • Right-tailed test: The critical region is in the extreme right region (tail) under the curve

Type II error

The mistake of failing to reject the null hypothesis when it is actually false. The symbol β (beta) is used to represent the probability of a type II error: β = probability of a type II error (the probability of failing to reject the null hypothesis when it is false).

Properties of the Chi-Square Distribution

1. All values of χ² are nonnegative, and the distribution is not symmetric (see Figure 8-9). 2. There is a different χ² distribution for each number of degrees of freedom (see Figure 8-10). 3. The critical values are found in Table A-4 using degrees of freedom = n − 1.

Try to avoid these two common errors when calculating sample size:

1. Don't make the mistake of using E = 3 as the margin of error corresponding to "three percentage points." If the margin of error is three percentage points, use E = 0.03. 2. Be sure to substitute the critical z score for z_{α/2}. For example, if you are working with 95% confidence, be sure to replace z_{α/2} with 1.96. Don't make the mistake of replacing z_{α/2} with 0.95 or 0.05.
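Putting both cautions together, here is a sketch of the standard sample size calculation for a proportion, n = p̂·q̂·(z_{α/2}/E)², using E = 0.03 and z_{α/2} = 1.96 as above. The formula itself is the textbook-standard one (not quoted in these notes), and p̂ = q̂ = 0.5 is the usual conservative choice when no prior estimate of p is available.

```python
# Sketch: sample size for estimating a proportion with a
# three-percentage-point margin of error at 95% confidence.
import math

z = 1.96                   # critical value for 95% confidence (NOT 0.95 or 0.05)
E = 0.03                   # three percentage points (NOT E = 3)
p_hat = q_hat = 0.5        # conservative values when p is unknown

n = math.ceil(p_hat * q_hat * z**2 / E**2)
print(n)   # 1068 -> poll about 1068 people
```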

Important Properties of the Student t Distribution

1. The Student t distribution is different for different sample sizes (see Figure 7-5 in Section 7-3). 2. The Student t distribution has the same general bell shape as the standard normal distribution; its wider shape reflects the greater variability that is expected when s is used to estimate σ . 3. The Student t distribution has a mean of t = 0 (just as the standard normal distribution has a mean of z = 0 ). 4. The standard deviation of the Student t distribution varies with the sample size and is greater than 1 (unlike the standard normal distribution, which has σ = 1 ). 5. As the sample size n gets larger, the Student t distribution gets closer to the standard normal distribution.

Properties of the Chi-Square Distribution

1. The chi-square distribution is not symmetric, unlike the normal and Student t distributions (see Figure 7-7). (As the number of degrees of freedom increases, the distribution becomes more symmetric, as Figure 7-8 illustrates.) 2. The values of chi-square can be zero or positive, but they cannot be negative (as shown in Figure 7-7). 3. The chi-square distribution is different for each number of degrees of freedom (as illustrated in Figure 7-8). As the number of degrees of freedom increases, the chi-square distribution approaches a normal distribution.

Confidence Interval for Estimating a Population Mean with σ Known Requirements

1. The sample is a simple random sample. 2. Either or both of these conditions is satisfied: The population is normally distributed or n > 30 .

Confidence Interval for Estimating a Population Mean with σ Not Known Requirements & notations

1. The sample is a simple random sample. 2. Either or both of these conditions is satisfied: The population is normally distributed or n > 30. Notation: μ = population mean; x̄ = sample mean; n = number of sample values; E = margin of error.
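A minimal sketch of the resulting interval, x̄ ± t_{α/2}·s/√n with df = n − 1. The summary statistics are illustrative, and SciPy is assumed for the t critical value.

```python
# Sketch: confidence interval for mu when sigma is not known.
from math import sqrt
from scipy.stats import t

x_bar, s, n = 98.2, 0.62, 106    # illustrative sample summary
conf = 0.95
t_crit = t.ppf(1 - (1 - conf) / 2, df=n - 1)   # t_{alpha/2} critical value

E = t_crit * s / sqrt(n)          # margin of error
print(x_bar - E, x_bar + E)       # confidence interval limits
```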

Confidence Interval for Estimating a Population Standard Deviation or Variance Requirements and notation

1. The sample is a simple random sample. 2. The population must have normally distributed values (even if the sample is large). The requirement of a normal distribution is much stricter here than in earlier sections, so departures from normal distributions can result in large errors. Notation: σ = population standard deviation; σ² = population variance; s = sample standard deviation; s² = sample variance; n = number of sample values; E = margin of error; χ²_L = left-tailed critical value of χ²; χ²_R = right-tailed critical value of χ².
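A sketch of the resulting interval for σ, √((n−1)s²/χ²_R) < σ < √((n−1)s²/χ²_L). The interval formula is the standard one (not spelled out in these notes), s and n are made up, and SciPy is assumed for the chi-square critical values.

```python
# Sketch: confidence interval for a population standard deviation sigma.
from math import sqrt
from scipy.stats import chi2

s, n = 14.3, 22          # made-up sample standard deviation and size
alpha = 0.05
df = n - 1

chi2_L = chi2.ppf(alpha / 2, df)       # left-tailed critical value
chi2_R = chi2.ppf(1 - alpha / 2, df)   # right-tailed critical value

lower = sqrt(df * s**2 / chi2_R)
upper = sqrt(df * s**2 / chi2_L)
print(lower, upper)
```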

Confidence Interval for Estimating a Population Proportion p Requirements

1. The sample is a simple random sample. (Caution: If the sample data have been obtained in a way that is not suitable, the estimate of the population proportion may be very wrong.) 2. The conditions for the binomial distribution are satisfied. That is, there is a fixed number of trials, the trials are independent, there are two categories of outcomes, and the probabilities remain constant for each trial. (See Section 5-3.) 3. There are at least 5 successes and at least 5 failures. (With the population proportions p and q unknown, we estimate their values using the sample proportion, so this requirement is a way of verifying that n p ≥ 5 and n q ≥ 5 are both satisfied, so the normal distribution is a suitable approximation to the binomial distribution. There are procedures for dealing with situations in which the normal distribution is not a suitable approximation, as in Exercise 40.)

Pooled Sample Proportion: The pooled sample proportion is denoted by p̄ and is given by p̄ = (x₁ + x₂) / (n₁ + n₂), with q̄ = 1 − p̄. Requirements:

1. The sample proportions are from two simple random samples that are independent. (Samples are independent if the sample values selected from one population are not related to or somehow naturally paired or matched with the sample values selected from the other population.) 2. For each of the two samples, there are at least 5 successes and at least 5 failures. (That is, np̂ ≥ 5 and nq̂ ≥ 5 for each of the two samples.)
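A sketch of the pooled two-proportion z test built from p̄ and q̄ as defined above. The test statistic z = (p̂₁ − p̂₂)/√(p̄q̄(1/n₁ + 1/n₂)) is the standard form, the counts are made up, and SciPy is assumed for the P-value.

```python
# Sketch: pooled two-proportion z test of H0: p1 = p2.
from math import sqrt
from scipy.stats import norm

x1, n1 = 60, 200     # successes and sample size, group 1 (made up)
x2, n2 = 45, 210     # successes and sample size, group 2 (made up)

p1, p2 = x1 / n1, x2 / n2
p_bar = (x1 + x2) / (n1 + n2)        # pooled sample proportion
q_bar = 1 - p_bar

z = (p1 - p2) / sqrt(p_bar * q_bar * (1 / n1 + 1 / n2))
p_value = 2 * norm.sf(abs(z))        # two-tailed P-value
print(z, p_value)
```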

Inferences About Means of Two Independent Populations, Assuming That σ 1 = σ 2 Requirements

1. The two population standard deviations are not known, but they are assumed to be equal. That is, σ 1 = σ 2 . 2. The two samples are independent. 3. Both samples are simple random samples. 4. Either or both of these conditions is satisfied: The two sample sizes are both large (with n 1 > 30 and n 2 > 30 ) or both samples come from populations having normal distributions. (For small samples, the normality requirement is loose in the sense that the procedures perform well as long as there are no outliers and departures from normality are not too extreme.)

Inferences About Means of Two Independent Populations, with σ 1 and σ 2 Known Requirements:

1. The two population standard deviations σ 1 and σ 2 are both known. 2. The two samples are independent. 3. Both samples are simple random samples. 4. Either or both of these conditions is satisfied: The two sample sizes are both large (with n 1 > 30 and n 2 > 30 ) or both samples come from populations having normal distributions. (For small samples, the normality requirement is loose in the sense that the procedures perform well as long as there are no outliers and departures from normality are not too extreme.)

Two variances or standard deviations Requirements:

1. The two populations are independent. 2. The two samples are simple random samples. 3. Each of the two populations must be normally distributed, regardless of their sample sizes. This F test is not robust against departures from normality, so it performs poorly if one or both of the populations has a distribution that is not normal. The requirement of normal distributions is quite strict for this F test.

major activities of inferential statistics

1. Use sample data to estimate values of population parameters (such as a population proportion or population mean). 2. Test hypotheses (or claims) made about population parameters.

Procedures for Inferences with Dependent Samples

1. Verify that the sample data consist of dependent samples (or matched pairs), and verify that the preceding requirements are satisfied. 2. Find the difference d for each pair of sample values. (Caution: Be sure to subtract in a consistent manner.) 3. Find the mean of the differences (denoted by d ¯ ), and find the standard deviation of the differences (denoted by s d ). 4. For hypothesis tests and confidence intervals, use the same t test procedures described in Part 1 of Section 8-5
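A minimal sketch of steps 2-4 with five made-up matched pairs; SciPy is assumed for the P-value.

```python
# Sketch: matched-pairs (dependent samples) t test on the differences d.
import numpy as np
from scipy.stats import t

before = np.array([183, 175, 170, 180, 178])   # made-up first values of each pair
after  = np.array([165, 160, 158, 172, 163])   # made-up matched second values

d = before - after                  # step 2: subtract in a consistent manner
d_bar = d.mean()                    # step 3: mean of the differences
s_d = d.std(ddof=1)                 # step 3: standard deviation of the differences
n = len(d)

t_stat = d_bar / (s_d / np.sqrt(n))            # tests H0: mu_d = 0
p_value = 2 * t.sf(abs(t_stat), df=n - 1)      # two-tailed P-value
print(d_bar, s_d, t_stat, p_value)
```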

Cautions for testing a claim about two population proportions:

1. When testing a claim about two population proportions, the P-value method and the critical value method are equivalent, but they are not equivalent to the confidence interval method. If you want to test a claim about two population proportions, use the P-value method or critical value method; if you want to estimate the difference between two population proportions, use a confidence interval. 2. Don't test for equality of two population proportions by determining whether there is an overlap between two individual confidence interval estimates of the two individual population proportions. When compared to the confidence interval estimate of p₁ − p₂, the analysis of overlap between two individual confidence intervals is more conservative (by rejecting equality less often), and it has less power (because it is less likely to reject p₁ = p₂ when in reality p₁ ≠ p₂). (See "On Judging the Significance of Differences by Examining the Overlap Between Confidence Intervals," by Schenker and Gentleman, American Statistician, Vol. 55, No. 3.) See Exercise 19.

Confidence Interval Method for Hypothesis Testing

A confidence interval estimate of a population parameter contains the likely values of that parameter, so we reject a claim that the population parameter has a value that is not included in the confidence interval. For a two-tailed hypothesis test with significance level α, construct a confidence interval with a confidence level of 1 − α; for a one-tailed hypothesis test with significance level α, construct a confidence interval with a confidence level of 1 − 2α.

alternative hypothesis

Statement that the parameter has a value that somehow differs from the null hypothesis. For the methods of this chapter, the symbolic form of the alternative hypothesis must use one of these symbols: < , > , ≠ .

Condition: (1) Original claim does not include equality, and you reject H₀. (2) Original claim does not include equality, and you fail to reject H₀. (3) Original claim includes equality, and you reject H₀. (4) Original claim includes equality, and you fail to reject H₀.

Conclusion: (1) "There is sufficient evidence to support the claim that ... (original claim)." (2) "There is not sufficient evidence to support the claim that ... (original claim)." (3) "There is sufficient evidence to warrant rejection of the claim that ... (original claim)." (4) "There is not sufficient evidence to warrant rejection of the claim that ... (original claim)."

Alternative Method: Assume That σ 1 = σ 2 and Pool the Sample Variances

Even when the specific values of σ₁ and σ₂ are not known, if it can be assumed that they have the same value, the sample variances s₁² and s₂² can be pooled to obtain an estimate of the common population variance σ². The pooled estimate of σ² is denoted by s_p² and is a weighted average of s₁² and s₂², which is described in the following box.
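The referenced box is not reproduced in these notes. Assuming the textbook-standard weighted average s_p² = ((n₁−1)s₁² + (n₂−1)s₂²)/(n₁ + n₂ − 2), with n₁ + n₂ − 2 degrees of freedom, a sketch with made-up values looks like this.

```python
# Sketch: pooled variance as a weighted average of the two sample variances.
s1, n1 = 4.2, 15     # made-up sample 1 standard deviation and size
s2, n2 = 3.9, 12     # made-up sample 2 standard deviation and size

sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
df = n1 + n2 - 2     # degrees of freedom for the pooled procedure
print(sp2, df)
```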

P-Value (or p-Value or Probability Value) Method

Find the P-value, which is the probability of getting a value of the test statistic that is at least as extreme as the one representing the sample data, assuming that the null hypothesis is true. To find the P-value, first find the area beyond the test statistic, then use the procedure given in Figure 8-4. That procedure can be summarized as follows: Critical region in the left tail: P-value = area to the left of the test statistic. Critical region in the right tail: P-value = area to the right of the test statistic. Critical region in two tails: P-value = twice the area in the tail beyond the test statistic.
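A sketch of those three cases for a z test statistic, assuming SciPy; z = 1.60 echoes the decision example used earlier in these notes.

```python
# Sketch: turning a z test statistic into a P-value for each type of test.
from scipy.stats import norm

z = 1.60
p_left  = norm.cdf(z)               # left-tailed: area to the left of z
p_right = norm.sf(z)                # right-tailed: area to the right of z
p_two   = 2 * norm.sf(abs(z))       # two-tailed: twice the tail area beyond z
print(p_left, p_right, p_two)       # right-tailed value is about 0.0548
```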

Why do We Need Confidence Intervals?

In Example 1 we saw that 0.85 was our best point estimate of the population proportion p, but a point estimate is a single value that gives us no indication of how good that best estimate is. Statisticians have cleverly developed the confidence interval or interval estimate, which consists of a range (or an interval) of values instead of just a single value. A confidence interval gives us a much better sense of how good an estimate is.

chi-square distribution

In a normally distributed population with variance σ², if we randomly select independent samples of size n and, for each sample, compute the sample variance s² (which is the square of the sample standard deviation s), the sample statistic χ² = (n − 1)s²/σ² has a sampling distribution called the chi-square distribution; Formula 7-5 shows the format of this sample statistic.

2 examples of independent samples:

1. Proctored tests and nonproctored tests in online courses: Researchers Michael Flesch and Elliot Ostler investigated the reliability of test assessment. One group consisted of 30 students who took proctored tests. A second group consisted of 32 students who took tests online without a proctor. The two samples are independent, because the subjects were not paired or matched in any way. 2. Weights of M&Ms: Data Set 20 in Appendix B includes the weights (grams) of a sample of yellow M&Ms and a sample of brown M&Ms. The yellow and brown weights might appear to be paired because of the way that they are listed, but they are not matched according to some inherent relationship. They are actually two independent samples that just happen to be listed in a way that might cause us to incorrectly think that they are matched.

Testing Claims About σ or σ 2

Notation: n = sample size; s = sample standard deviation; s² = sample variance; σ = claimed value of the population standard deviation; σ² = claimed value of the population variance. Requirements: 1. The sample is a simple random sample. 2. The population has a normal distribution. (Instead of being a loose requirement, this test has a fairly strict requirement of a normal distribution.) Test statistic for testing a claim about σ or σ²: χ² = (n − 1)s²/σ² (round to three decimal places, as in Table A-4).
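A sketch of this test statistic with made-up values; SciPy is assumed for the two-tailed critical values at α = 0.05.

```python
# Sketch: chi-square test statistic for a claim about sigma,
# chi^2 = (n - 1) * s^2 / sigma^2.
from scipy.stats import chi2

n, s = 25, 0.87          # made-up sample size and sample standard deviation
sigma0 = 1.00            # claimed population standard deviation (H0 value)
df = n - 1

chi2_stat = df * s**2 / sigma0**2
chi2_L = chi2.ppf(0.025, df)     # left critical value, two-tailed, alpha = 0.05
chi2_R = chi2.ppf(0.975, df)     # right critical value
print(round(chi2_stat, 3), chi2_L, chi2_R)
```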

Testing Claims About a Population Mean (with σ Not Known)

Notation: n = sample size; x̄ = sample mean; μ_x̄ = population mean (this value is taken from the claim and is used in the statement of the null hypothesis). Requirements: 1. The sample is a simple random sample. 2. Either or both of these conditions is satisfied: The population is normally distributed or n > 30.
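The test statistic itself, t = (x̄ − μ_x̄)/(s/√n) with df = n − 1, is the standard one. A sketch with made-up summary values; SciPy is assumed for the P-value.

```python
# Sketch: one-sample t test of a claim about a mean with sigma not known.
from math import sqrt
from scipy.stats import t

x_bar, s, n = 0.8565, 0.0518, 40   # made-up sample summary
mu0 = 0.8535                        # mean claimed in H0
df = n - 1

t_stat = (x_bar - mu0) / (s / sqrt(n))
p_value = 2 * t.sf(abs(t_stat), df)   # two-tailed P-value
print(t_stat, p_value)
```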

null hypothesis

Statement that the value of a population parameter (such as proportion, mean, or standard deviation) is equal to some claimed value. (The term null is used to indicate no change or no effect or no difference.) We test the null hypothesis directly in the sense that we assume (or pretend) it is true and reach a conclusion to either reject it or fail to reject it.

Dependent samples (When designing an experiment or planning an observational study, using dependent samples with paired data is generally better than using two independent samples.)

Notation for dependent samples: d = individual difference between the two values in a single matched pair; μ_d = mean value of the differences d for the population of all matched pairs of data; d̄ = mean value of the differences d for the paired sample data; s_d = standard deviation of the differences d for the paired sample data; n = number of pairs of sample data. Requirements: 1. The sample data are dependent (matched pairs). 2. The samples are simple random samples. 3. Either or both of these conditions is satisfied: The number of pairs of sample data is large (n > 30) or the pairs of values have differences that are from a population having a distribution that is approximately normal. (These methods are robust against departures from normality, so for small samples, the normality requirement is loose in the sense that the procedures perform well as long as there are no outliers and departures from normality are not too extreme.)

difference between P- value and Proportion P

P-value = probability of a test statistic at least as extreme as the one obtained; p = population proportion.

Hypothesis test and confidence interval estimate of the difference between the population means. Notation: For population 1 we let μ₁ = population mean, σ₁ = population standard deviation, n₁ = size of the first sample, x̄₁ = sample mean, s₁ = sample standard deviation. The corresponding notations μ₂, σ₂, x̄₂, s₂, and n₂ apply to population 2. Requirements:

1. The values of σ₁ and σ₂ are unknown and we do not assume that they are equal. 2. The two samples are independent. 3. Both samples are simple random samples. 4. Either or both of these conditions is satisfied: The two sample sizes are both large (with n₁ > 30 and n₂ > 30) or both samples come from populations having normal distributions. (The methods used here are robust against departures from normality, so for small samples, the normality requirement is loose in the sense that the procedures perform well as long as there are no outliers and departures from normality are not too extreme.)
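Under these requirements, SciPy's Welch t test (ttest_ind with equal_var=False) performs the hypothesis test directly; the data arrays below are made up.

```python
# Sketch: two-sample t test with sigma1, sigma2 unknown and not assumed equal
# (Welch's test), via SciPy.
import numpy as np
from scipy.stats import ttest_ind

sample1 = np.array([98.1, 97.9, 99.0, 98.4, 98.6, 98.2, 97.8, 98.8])  # made up
sample2 = np.array([98.6, 98.7, 99.1, 98.9, 98.4, 99.3, 98.8, 98.5])  # made up

t_stat, p_value = ttest_ind(sample1, sample2, equal_var=False)
print(t_stat, p_value)   # two-tailed test of H0: mu1 = mu2
```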

Levene-Brown-Forsythe Test

The Levene-Brown-Forsythe test (or modified Levene's test) is another alternative to the F test, and it is much more robust against departures from normality. This test begins with a transformation of each set of sample values. Within the first sample, replace each x value with | x − median | , and do the same for the second sample. Using the transformed values, conduct a t test of equality of means for independent samples, as described in Part 1 of Section 9-3. Because the transformed values are now deviations, the t test for equality of means is actually a test comparing variation in the two samples. See Exercise 20.
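A sketch of both routes, assuming SciPy: scipy.stats.levene with center='median' runs the Brown-Forsythe version of the test directly, and the manual lines reproduce the |x − median| transformation followed by a t test of the transformed values as described above. The data arrays are made up, and the two approaches are closely related but may not give identical P-values.

```python
# Sketch: Levene-Brown-Forsythe comparison of variation in two samples.
import numpy as np
from scipy.stats import levene, ttest_ind

a = np.array([12.1, 14.3, 9.8, 11.0, 13.5, 10.2, 12.8])   # made-up sample 1
b = np.array([15.2, 8.1, 19.4, 7.7, 16.9, 9.5, 18.3])     # made-up sample 2

stat, p = levene(a, b, center='median')   # Brown-Forsythe test
print(stat, p)

# Manual version: deviations from each sample's median, then a t test
# comparing the means of the transformed values.
ta = np.abs(a - np.median(a))
tb = np.abs(b - np.median(b))
print(ttest_ind(ta, tb))
```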

Criteria for deciding whether the population is normally distributed:

The normality requirement is loose, so the distribution should appear to be somewhat symmetric with one mode and no outliers.

The sample proportion p ^ is the best point estimate of the population proportion p .

We use p ^ as the point estimate of p because it is unbiased and it is the most consistent of the estimators that could be used. (Unbiased estimators are discussed in Section 6-4.) The sample proportion p ^ is the most consistent estimator of p in the sense that the standard deviation of sample proportions tends to be smaller than the standard deviation of other unbiased estimators of p.

Confidence intervals can be used informally to compare different data sets,

but the overlapping of confidence intervals should not be used for making formal and final conclusions about equality of proportions. (In this chapter, when we use a confidence interval to address a claim about a population proportion p, we simply make an informal judgment that may or may not be consistent with the formal methods of hypothesis testing introduced in Chapter 8.)

Two samples are dependent

if the sample values are somehow matched, where the matching is based on some inherent relationship. (That is, each pair of sample values consists of two measurements from the same subject—such as before/after data—or each pair of sample values consists of matched pairs—such as husband/wife data—where the matching is based on some meaningful relationship.)

confidence interval

is a range (or an interval) of values used to estimate the true value of a population parameter. A confidence interval is sometimes abbreviated as CI.

Margin of Error

is the maximum likely difference (with probability 1 − α, such as 0.95) between the observed sample proportion p̂ and the true value of the population proportion p. The margin of error E is also called the maximum error of the estimate and can be found by multiplying the critical value by the standard deviation of sample proportions: Formula 7-1: E = z_{α/2} · √(p̂q̂/n) (margin of error for proportions). For a 95% confidence level, α = 0.05, so there is a probability of 0.05 that the sample proportion will be in error by more than E.
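A minimal sketch of Formula 7-1; p̂ = 0.85 echoes the point estimate mentioned elsewhere in these notes, while n = 1000 is a made-up sample size. SciPy is assumed for z_{α/2}.

```python
# Sketch: margin of error for a proportion, E = z_{alpha/2} * sqrt(p-hat*q-hat/n).
from math import sqrt
from scipy.stats import norm

p_hat, n = 0.85, 1000
q_hat = 1 - p_hat
conf = 0.95
z = norm.ppf(1 - (1 - conf) / 2)    # about 1.96

E = z * sqrt(p_hat * q_hat / n)
print(E)                            # interval: p_hat - E to p_hat + E
```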

The significance level α

is the probability of making the mistake of rejecting the null hypothesis when it is true. This is the same α introduced in Section 7-2, where we defined the confidence level for a confidence interval to be the probability 1 − α . Common choices for α are 0.05, 0.01, and 0.10, with 0.05 being most common.

Is a confidence interval equivalent to a hypothesis test in the sense that they always lead to the same conclusion? (Proportion / Mean / Standard deviation or variance)

Proportion: no. Mean: yes. Standard deviation or variance: yes.

Confidence Interval for Estimating a Population Proportion p Notation

p = population proportion; p̂ = sample proportion; n = number of sample values; E = margin of error; z_{α/2} = z score separating an area of α/2 in the right tail of the standard normal distribution. (Note: The symbol π is sometimes used to denote the population proportion. Because π is so closely associated with the value of 3.14159..., this text uses p to denote the population proportion.)

