Statistics Midterm #3


For a particular population proportion p

For a particular population proportion p, the variability in the sampling distribution decreases as the sample size n becomes larger. This will likely align with your intuition: an estimate based on a larger sample size will tend to be more accurate.

For a particular sample size,

For a particular sample size, the variability will be largest when p = 0.5. The differences may be a little subtle, so take a close look. This reflects the role of the proportion p in the standard error formula, SE = √(p(1−p)/n): the product p(1−p), and hence the standard error, is largest when p = 0.5.
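Both behaviors can be checked numerically. A minimal sketch (the sample sizes and proportions below are arbitrary illustration values, not from the text):

```python
import math

def se_proportion(p, n):
    """Standard error of a sample proportion: sqrt(p(1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

# For a fixed proportion, the SE shrinks as n grows
print(se_proportion(0.5, 100))
print(se_proportion(0.5, 400))

# For a fixed sample size, the SE peaks at p = 0.5
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(se_proportion(p, 100), 4))
```

Quadrupling n halves the SE, and the values are symmetric around and largest at p = 0.5.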

Rejecting the null hypothesis

Hypothesis testing is built around rejecting or failing to reject the null hypothesis. That is, we do not reject H0 unless we have strong evidence. But what precisely does strong evidence mean? As a general rule of thumb, for those cases where the null hypothesis is actually true, we do not want to incorrectly reject H0 more than 5% of the time. This corresponds to a significance level of 0.05. That is, if the null hypothesis is true, the significance level indicates how often the data lead us to incorrectly reject H0.

margin of error

In a confidence interval, z⋆ × SE_p̂ is called the margin of error.

double negatives

In many statistical explanations, we use double negatives. For instance, we might say that the null hypothesis is not implausible or we failed to reject the null hypothesis. Double negatives are used to communicate that while we are not rejecting a position, we are also not saying it is correct.

computing the standard error of x̅, using the sample standard deviation s in place of σ:

SE = σ/√n ≈ s/√n
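As a sketch of this substitution (the measurements are hypothetical illustration values):

```python
import math
import statistics

sample = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4]   # hypothetical measurements
s = statistics.stdev(sample)               # sample standard deviation s
n = len(sample)
se = s / math.sqrt(n)                      # SE of the sample mean, s in place of sigma
print(round(se, 3))
```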

Hypothesis tests for the difference of two means

Set up appropriate hypotheses to evaluate whether there is a relationship between the variables. The null hypothesis represents the case of no difference between the groups.

How to verify sample observations are independent

Subjects in an experiment are considered independent if they undergo random assignment to the treatment groups. If the observations are from a simple random sample, then they are independent. If a sample is from a seemingly random process, e.g. an occasional error on an assembly line, checking independence is more difficult. In this case, use your best judgement.

bias

describes a systematic tendency to over- or under-estimate the true population value

Sampling error/sampling uncertainty

describes how much an estimate will tend to vary from one sample to the next. Much of statistics is focused on understanding and quantifying sampling error, and we will find it useful to consider a sample's size to help us quantify this error

Hypothesis testing for a single mean

*Because the p-value is smaller than 0.05, we reject the null hypothesis* Once you've determined a one-mean hypothesis test is the correct procedure, there are four steps to completing the test: 1. Prepare: Identify the parameter of interest, list out hypotheses, identify the significance level, and identify x̅, s, and n. 2. Check: Verify conditions to ensure x̅ is nearly normal. 3. Calculate: If the conditions hold, compute SE, compute the T-score, and identify the p-value. 4. Conclude: Evaluate the hypothesis test by comparing the p-value to α, and provide a conclusion in the context of the problem.
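The Calculate step can be sketched in code. The numbers below are hypothetical, and the two-sided p-value uses a normal approximation to the t-distribution (reasonable when df = n − 1 is large) rather than a true t tail area:

```python
import math
import statistics

def one_mean_test(xbar, s, n, mu0):
    """T-score for H0: mu = mu0; two-sided p-value via a normal
    approximation to the t-distribution (ok for large df)."""
    se = s / math.sqrt(n)                        # standard error
    t = (xbar - mu0) / se                        # T-score
    p_value = 2 * statistics.NormalDist().cdf(-abs(t))
    return t, p_value

# hypothetical numbers: sample mean 97.5, s = 0.96, n = 50, null value 98.6
t, p = one_mean_test(xbar=97.5, s=0.96, n=50, mu0=98.6)
print(round(t, 2), round(p, 4))
```

Here the p-value is far below 0.05, so we would reject H0.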

hypotheses

-Hypotheses are opposing claims about your population with no predetermined outcome- *H0: the null hypothesis says there is no statistically significant relationship between two variables. It is the hypothesis that a research study or experiment tries to disprove or discredit.* For example: people never learned these particular topics, and their responses are simply equivalent to random guesses. *A null hypothesis must include an equality.* *HA: the alternative hypothesis states there is a statistically significant relationship between two variables.* For example: people have knowledge that helps them do better than random guessing, or perhaps they have false knowledge that leads them to actually do worse than random guessing. *An alternative hypothesis must be an inequality.* These competing ideas are called *hypotheses*: we call H0 the null hypothesis and HA the alternative hypothesis.

When to use pool standard deviations

1. A pooled standard deviation is only appropriate when background research indicates the population standard deviations are nearly equal. When the sample size is large and the condition may be adequately checked with data, the benefits of pooling the standard deviations greatly diminish. 2. The benefits of pooling the standard deviation are realized through obtaining a better estimate of the standard deviation for each group and using a larger degrees of freedom parameter for the t-distribution. Both of these changes may permit a more accurate model of the sampling distribution of x̅1 − x̅2, if the standard deviations of the two groups are indeed equal.

sampling distribution of the difference between two proportions

1. As with p̂, the difference of two sample proportions p̂1 − p̂2 can be modeled using a normal distribution when certain conditions are met. First, we require a broader independence condition, and second, the success-failure condition must be met by both groups. 2. The difference p̂1 − p̂2 can be modeled using a normal distribution when (*conditions for the sampling distribution of p̂1 − p̂2 to be normal*): • Independence, extended. The data are independent within and between the two groups. Generally, this is satisfied if the data come from two independent random samples or from a randomized experiment. • Success-failure condition. The success-failure condition holds for both groups, where we check successes and failures in each group separately. Here p1 and p2 represent the population proportions, and n1 and n2 represent the sample sizes.

Pooled standard deviation estimate

1. Occasionally, two populations will have standard deviations that are so similar that they can be treated as identical. In such cases, we can make the t-distribution approach slightly more precise by using a pooled standard deviation. 2. The pooled standard deviation of two groups is a way to use data from both samples to better estimate the standard deviation and standard error. If s1 and s2 are the standard deviations of groups 1 and 2 and there are very good reasons to believe that the population standard deviations are equal, then we can obtain an improved estimate of the group variances by pooling their data: s²_pooled = ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2), where n1 and n2 are the sample sizes, as before. To use this new statistic, we substitute s²_pooled in place of s1² and s2² in the standard error formula, and we use an updated formula for the degrees of freedom: df = n1 + n2 − 2.
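A short sketch of the pooling formula (the group SDs and sizes are hypothetical):

```python
import math

def pooled_sd(s1, n1, s2, n2):
    """Pooled SD: sqrt(((n1-1)s1^2 + (n2-1)s2^2) / (n1 + n2 - 2)).
    Appropriate only when the population SDs are believed equal."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return math.sqrt(sp2)

sp = pooled_sd(s1=2.0, n1=10, s2=3.0, n2=12)   # hypothetical groups
df = 10 + 12 - 2                               # updated degrees of freedom
print(round(sp, 3), df)
```

The pooled value lands between the two group SDs, weighted toward the larger group.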

confidence interval for a difference of means

1. Prepare: Retrieve critical contextual information, and if appropriate, set up hypotheses. 2. Check: Ensure the required conditions are reasonably satisfied. 3. Calculate: Find the standard error, and then construct a confidence interval, or if conducting a hypothesis test, find a test statistic and p-value. 4. Conclude: Interpret the results in the context of the application.

null hypothesis vs. alternative hypothesis

1. The null hypothesis (H0): often represents a skeptical perspective or a claim to be tested. The null hypothesis often represents a skeptical position or a perspective of "no difference" 2. The alternative hypothesis (HA): represents an alternative claim under consideration and is often represented by a range of possible parameter values. Before we buy into the alternative hypothesis, we need to see strong supporting evidence. The alternative hypothesis generally represents a new or stronger perspective

two-sided hypothesis tests vs. one-sided hypothesis test

1. two-sided hypothesis tests, where we care about detecting whether p is either above or below some null value p0. 2. one-sided hypothesis test: the hypotheses take one of the following forms: 2a. There's only value in detecting if the population parameter is less than some value p0. In this case, the alternative hypothesis is written as p < p0 for some null value p0. 2b. There's only value in detecting if the population parameter is more than some value p0: In this case, the alternative hypothesis is written as p > p0. *there is only one difference in evaluating a one-sided hypothesis test vs a two-sided hypothesis test: how to compute the p-value* In a one-sided hypothesis test, we compute the p-value as the tail area in the direction of the alternative hypothesis only, meaning it is represented by a single tail area.

confidence interval for proportions

1. A confidence interval provides a range of plausible values for the parameter p, and when p̂ can be modeled using a normal distribution, the confidence interval for p takes the form p̂ ± z⋆ × SE. 2. Once you've determined a one-proportion confidence interval would be helpful for an application, there are four steps to constructing the interval: a) Prepare: Identify p̂ and n, and determine what confidence level you wish to use. b) Check: Verify the conditions to ensure p̂ is nearly normal. For one-proportion confidence intervals, use p̂ in place of p to check the success-failure condition. c) Calculate: If the conditions hold, compute SE using p̂, find z⋆, and construct the interval. d) Conclude: Interpret the confidence interval in the context of the problem.
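The Calculate step can be sketched as follows (the survey result below is a hypothetical example, not from the text):

```python
import math

def prop_ci(phat, n, z_star=1.96):
    """Confidence interval for p: phat ± z* × SE, SE computed using phat."""
    se = math.sqrt(phat * (1 - phat) / n)
    moe = z_star * se                  # margin of error
    return phat - moe, phat + moe

lo, hi = prop_ci(phat=0.45, n=500)     # hypothetical survey result
print(round(lo, 3), round(hi, 3))
```

z⋆ = 1.96 corresponds to a 95% confidence level; other levels just swap in a different z⋆.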

sampling distribution of pˆ

1. A distribution of the sample proportion; a list of all the possible values for p̂ together with the frequency (or probability) of each value. 2. A sample proportion p̂ can be modeled using a normal distribution when the sample observations are independent and the sample size is sufficiently large. 3. The sampling distribution for p̂ based on a sample of size n from a population with a true proportion p is nearly normal when: a) The sample's observations are independent, e.g. are from a simple random sample. b) We expect to see at least 10 successes and 10 failures in the sample, i.e. np ≥ 10 and n(1 − p) ≥ 10. This is called the success-failure condition. When these conditions are met, the sampling distribution of p̂ is nearly normal with mean p and standard error SE = √(p(1−p)/n). 4. For confidence intervals, the sample proportion p̂ is used to check the success-failure condition and compute the standard error. For hypothesis tests, typically the null value - that is, the proportion claimed in the null hypothesis - is used in place of p.
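A quick simulation makes the mean and standard error concrete. The true proportion, sample size, and seed are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(1)
p, n = 0.3, 100                           # true proportion and sample size
# 5000 simulated samples; each p-hat is the fraction of "successes"
phats = [sum(random.random() < p for _ in range(n)) / n
         for _ in range(5000)]

print(round(statistics.mean(phats), 3))   # should be near p = 0.3
print(round(statistics.stdev(phats), 3))  # near sqrt(p(1-p)/n), about 0.046
```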

t-distribution

1. A t-distribution has a bell shape. However, its tails are thicker than the normal distribution's, meaning observations are more likely to fall beyond two standard deviations from the mean than under the normal distribution. The extra thick tails of the t-distribution are exactly the correction needed to resolve the problem of using s in place of σ in the SE calculation. 2. The t-distribution is always centered at zero and has a single parameter: degrees of freedom. The degrees of freedom (df) describes the precise form of the bell-shaped t-distribution. 3. In general, we'll use a t-distribution with df = n − 1 to model the sample mean when the sample size is n. That is, when we have more observations, the degrees of freedom will be larger and the t-distribution will look more like the standard normal distribution; when the degrees of freedom is about 30 or more, the t-distribution is nearly indistinguishable from the normal distribution. *The larger the degrees of freedom, the more closely the t-distribution resembles the standard normal distribution.* 4. The t-distribution allows us greater flexibility than the normal distribution when analyzing numerical data.

hypothesis testing for a proportion

1.To apply the normal distribution framework in the context of a hypothesis test for a proportion, the independence and success-failure conditions must be satisfied. In a hypothesis test, the success-failure condition is checked using the null proportion: we verify np0 and n(1 − p0) is at least 10, where p0 is the null value. 2.Once you've determined a one-proportion hypothesis test is a correct procedure, there are four steps to completing the test: a)Prepare: Identify the parameter of interest, list hypotheses, identify the significance level, and identify pˆ and n. b)Check: Verify conditions to ensure pˆ is nearly normal under H0. For one-proportion hypothesis tests, use the null value to check the success-failure condition. c)Calculate: If the conditions hold, compute the standard error, again using p0, compute the Z-score, and identify the p-value. d)Conclude: Evaluate the hypothesis test by comparing the p-value to α, and provide a conclusion in the context of the problem.
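The Calculate step for a one-proportion test can be sketched as below; the poll numbers are hypothetical, and note that the SE is built from the null value p0, not p̂:

```python
import math
import statistics

def prop_test(phat, n, p0):
    """Z-score and two-sided p-value for H0: p = p0.
    The SE uses the null value p0, not phat."""
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (phat - p0) / se
    p_value = 2 * statistics.NormalDist().cdf(-abs(z))
    return z, p_value

z, p = prop_test(phat=0.57, n=400, p0=0.5)   # hypothetical poll result
print(round(z, 2), round(p, 4))
```

With a p-value below α = 0.05, we would reject H0 in favor of HA here.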

Hypothesis testing for two independent population proportions

1. Comparing two proportions is common. 2. If two estimated proportions are different, it may be because chance produced different estimates from the samples, or because the difference in the estimated proportions reflects a real difference in the population proportions. 3. Null hypothesis: generally, the null hypothesis states that the two proportions from the two samples A and B are the same: H0: pA = pB; H0: pA − pB = 0; p0 = 0. 4. Alternative hypothesis: there is a difference between the two populations: HA: pA ≠ pB; HA: pA − pB ≠ 0.

Decision errors

A) Hypothesis tests are not flawless: we can make an incorrect decision in a statistical hypothesis test based on the data. What is useful about statistical hypothesis tests is that we have the tools necessary to probabilistically quantify how often we make errors in our conclusions. Recall that there are two competing hypotheses: the null and the alternative. In a hypothesis test, we make a statement about which one might be true, but we might choose incorrectly. B) In every hypothesis test, the outcomes are dependent on a correct interpretation of the data. Incorrect calculations or misunderstood summary statistics can yield errors that affect the results. 1. A Type 1 Error is rejecting the null hypothesis when H0 is actually true. 2. A Type 2 Error is failing to reject the null hypothesis when the alternative is actually true.

Finding a t-confidence interval for the mean

Based on a sample of n independent and nearly normal observations, a confidence interval for the population mean is: x̅ ± t⋆df × SE, where x̅ is the sample mean, t⋆df corresponds to the confidence level and degrees of freedom df, and SE is the standard error as estimated by the sample.

Difference of two population means

Consider a difference in two population means, μ1 − μ2, under the condition that the data are not paired. Just as with a single sample, we identify conditions to ensure we can use the t-distribution with a point estimate of the difference, x̅1 − x̅2, and a new standard error formula. Other than these two differences, the details are almost identical to the one-mean procedures. 1. The difference is μA − μB, with H0: μA = μB and HA: μA ≠ μB. 2. If two estimated means are different, it may be due to a difference in the populations or it may be due to chance.

The strength of evidence

Data scientists are sometimes called upon to evaluate the strength of evidence. When looking at the rates of infection for patients in the two groups in this study, what comes to mind as we try to determine whether the data show convincing evidence of a real difference? The observed infection rates (35.7% for the treatment group versus 100% for the control group) suggest the vaccine may be effective. However, we cannot be sure if the observed difference represents the vaccine's efficacy or is just from random chance. Generally there is a little bit of fluctuation in sample data, and we wouldn't expect the sample proportions to be exactly equal, even if the truth was that the infection rates were independent of getting the vaccine. Additionally, with such small samples, perhaps it's common to observe such large differences when we randomly split a group due to chance alone! While the observed difference in rates of infection is large, the sample size for the study is small, making it unclear if this observed difference represents efficacy of the vaccine or whether it is simply due to chance.

the chi-square test statistic

In previous hypothesis tests, we constructed a test statistic of the following form: *(point estimate − null value) / SE of point estimate* This construction was based on (1) identifying the difference between a point estimate and an expected value if the null hypothesis was true, and (2) standardizing that difference using the standard error of the point estimate. These two ideas will help in the construction of an appropriate test statistic for count data. We would like to use a single test statistic to determine if these four standardized differences are irregularly far from zero. That is, Z1, Z2, Z3, and Z4 must be combined somehow to help determine if they - as a group - tend to be unusually far from zero. A first thought might be to take the absolute value of these four standardized differences and add them up. However, it is more common to add the squared values. Squaring each standardized difference before adding them together does two things: • Any standardized difference that is squared will now be positive. • Differences that already look unusual - e.g. a standardized difference of 2.5 - will become much larger after being squared.

Two conditions are required to apply the Central Limit Theorem for a sample mean x̅

Independence: The sample observations must be independent, The most common way to satisfy this condition is when the sample is a simple random sample from the population. If the data come from a random process, analogous to rolling a die, this would also satisfy the independence condition. Normality: When a sample is small, we also require that the sample observations come from a normally distributed population. We can relax this condition more and more for larger and larger sample sizes. This condition is obviously vague, making it difficult to evaluate, so next, we introduce a couple of rules of thumb to make checking this condition easier. Two rules of thumb when checking the normality condition: 1. n < 30: If the sample size n is less than 30 and there are no clear outliers in the data, then we typically assume the data come from a nearly normal distribution to satisfy the condition. 2. n ≥ 30: If the sample size n is at least 30 and there are no particularly extreme outliers, then we typically assume the sampling distribution of x̅ is nearly normal, even if the underlying distribution of individual observations is not.

Confidence interval for a single mean

Once you've determined a one-mean confidence interval would be helpful for an application, there are four steps to constructing the interval: 1. Prepare: Identify x̅, s, n, and determine what confidence level you wish to use. 2. Check: Verify the conditions to ensure x̅ is nearly normal 3. Calculate: If the conditions hold, compute SE, find t⋆df , and construct the interval 4. Conclude: Interpret the confidence interval in the context of the problem.

steps to remember: confidence interval for a single proportion

Once you've determined a one-proportion confidence interval would be helpful for an application, there are four steps to constructing the interval: *Prepare*: Identify pˆ and n, and determine what confidence level you wish to use. *Check*: Verify the conditions to ensure pˆ is nearly normal. For one-proportion confidence intervals, use pˆ in place of p to check the success-failure condition. *Calculate*: If the conditions hold, compute SE using pˆ, find z⋆, and construct the interval. *Conclude*: Interpret the confidence interval in the context of the problem.

hypothesis testing for a single proportion

Once you've determined a one-proportion hypothesis test is the correct procedure, there are four steps to completing the test: *Prepare*: Identify the parameter of interest, list hypotheses, identify the significance level, and identify pˆ and n. *Check*: Verify conditions to ensure pˆ is nearly normal under H0. For one-proportion hypothesis tests, use the null value to check the success-failure condition. *Calculate*: If the conditions hold, compute the standard error, again using p0, compute the Z-score, and identify the p-value. *Conclude*: Evaluate the hypothesis test by comparing the p-value to α, and provide a conclusion in the context of the problem.

finding a p-value for a chi-square distribution

Suppose we are to evaluate whether there is convincing evidence that a set of observed counts O1, O2, ..., Ok in k categories are unusually different from what might be expected under a null hypothesis. Call the expected counts that are based on the null hypothesis E1, E2, ..., Ek. If each expected count is at least 5 and the null hypothesis is true, then the test statistic below follows a chi-square distribution with k − 1 degrees of freedom. The p-value for this test statistic is found by looking at the upper tail of this chi-square distribution. We consider the upper tail because larger values of X2 would provide greater evidence against the null hypothesis.
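Computing the statistic itself is a one-liner; the die-roll counts below are a hypothetical example:

```python
def chi_square_stat(observed, expected):
    """X^2 = sum over the k categories of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical die-fairness check: 60 rolls, 10 expected per face
obs = [8, 12, 9, 11, 6, 14]
x2 = chi_square_stat(obs, [10] * 6)
print(x2)   # compare against a chi-square with df = 6 - 1 = 5
```

The p-value is then the upper-tail area of the chi-square distribution beyond this X² value, found with software or a table.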

unbiased

The centers of the distribution are always at the population proportion, p, that was used to generate the simulation. Because the sampling distribution of pˆ is always centered at the population parameter p, it means the sample proportion pˆ is unbiased when the data are independent and drawn from such a population.

the chi-square distribution and finding areas

The chi-square distribution is sometimes used to characterize data sets and statistics that are always positive and typically right skewed. Recall a normal distribution had two parameters - mean and standard deviation - that could be used to describe its exact characteristics. The chi-square distribution has just one parameter called degrees of freedom (df), which influences the shape, center, and spread of the distribution. Our principal interest in the chi-square distribution is the calculation of p-values, which (as we have seen before) is related to finding the relevant area in the tail of a distribution. The most common ways to do this are using computer software, using a graphing calculator, or using a table.

The hallmarks of hypothesis testing

The hypothesis testing framework is a very general tool, and we often use it without a second thought. If a person makes a somewhat unbelievable claim, we are initially skeptical. However, if there is sufficient evidence that supports the claim, we set aside our skepticism and reject the null hypothesis in favor of the alternative.

Independence (condition for central limit theorem)

The most common way for observations to be considered independent is if they are from a simple random sample. If sampling is done with replacement, this also ensures independence.

Testing hypothesis with a confidence interval

The question we ask ourselves is: if we construct a confidence interval based on our point estimate, will the null value (p0) from our hypothesis test fall within the constructed interval? 1. If p0 does fall in the interval, then we cannot reject the null hypothesis, as there is not enough evidence to reject it. 2. Observation: changing the confidence level for the interval may change whether the null value p0 falls within or outside of the interval.

Central Limit Theorem

The sampling distribution looks an awful lot like a normal distribution. That is no anomaly; it is the result of a general principle called the Central Limit Theorem. The Central Limit Theorem is incredibly important, and it provides a foundation for much of statistics. When observations are independent and the sample size is sufficiently large, the sample proportion p̂ will tend to follow a normal distribution with mean p and standard error SE = √(p(1−p)/n). In order for the Central Limit Theorem to hold, the sample size is typically considered sufficiently large when np ≥ 10 and n(1 − p) ≥ 10, which is called the success-failure condition.

95% confidence interval

The standard error provides a guide for how large we should make the confidence interval. The standard error represents the standard deviation of the point estimate, and when the Central Limit Theorem conditions are satisfied, the point estimate closely follows a normal distribution. In a normal distribution, 95% of the data is within 1.96 standard deviations of the mean. Using this principle, we can construct a confidence interval that extends 1.96 standard errors from the sample proportion to be 95% confident that the interval captures the population proportion: point estimate ± 1.96 × SE, i.e. p̂ ± 1.96 × √(p(1−p)/n). But what does "95% confident" mean? Suppose we took many samples and built a 95% confidence interval from each. Then about 95% of those intervals would contain the parameter, p. The general 95% confidence interval for a point estimate that follows a normal distribution is: point estimate ± 1.96 × SE. There are three components to this interval: the point estimate, "1.96", and the standard error. The choice of 1.96 × SE was based on capturing 95% of the data since the estimate is within 1.96 standard errors of the parameter about 95% of the time. The choice of 1.96 corresponds to a 95% confidence level.

t-Distribution conditions for two means

The two independent samples are simple random samples from two distinct populations. The t-distribution can be used for inference when working with the standardized difference of two means if: • Independence, extended: The data are independent within and between the two groups, e.g. the data come from independent random samples or from a randomized experiment. • Normality: We check the outlier rules of thumb for each group separately. For two distinct populations: if the sample sizes are small (n < 30), the shapes of the distributions are important and should be normal; check for outliers. If the sample sizes are large (n ≥ 30), the shapes of the distributions are not important (they need not be normal). The standard error computed from the population standard deviations is SE = √(σ1²/n1 + σ2²/n2); computed from the sample standard deviations, it is SE = √(s1²/n1 + s2²/n2). You may use the smaller of n1 − 1 and n2 − 1 for the degrees of freedom.
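The SE and conservative df choice can be sketched as below; the group SDs and sizes are hypothetical:

```python
import math

def se_two_means(s1, n1, s2, n2):
    """SE of the difference of two sample means: sqrt(s1^2/n1 + s2^2/n2)."""
    return math.sqrt(s1**2 / n1 + s2**2 / n2)

se = se_two_means(s1=4.5, n1=40, s2=5.2, n2=35)   # hypothetical groups
df = min(40 - 1, 35 - 1)                          # conservative: smaller of n1-1, n2-1
print(round(se, 3), df)
```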

null value

The value to which we compare the parameter. It's common to label the null value with the same symbol as the parameter but with a subscript '0'. That is, in this case, the null value is p0.

conditions for the chi-square test

There are two conditions that must be checked before performing a chi-square test: 1.Independence. Each case that contributes a count to the table must be independent of all the other cases in the table. 2.Sample size / distribution. Each particular scenario (i.e. cell count) must have at least 5 expected cases. *Failing to check conditions may affect the test's error rates*

confidence intervals for p1-p2

We can apply the generic confidence interval formula for a difference of two proportions, where we use p̂1 − p̂2 as the point estimate and substitute the SE formula: p̂1 − p̂2 ± z⋆ × √(p̂1(1−p̂1)/n1 + p̂2(1−p̂2)/n2). We can also follow the same Prepare, Check, Calculate, Conclude steps for computing a confidence interval or completing a hypothesis test. The details change a little, but the general approach remains the same. Think about these steps when you apply statistical methods.
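A minimal sketch of this interval, using hypothetical sample proportions and sizes:

```python
import math

def diff_prop_ci(p1, n1, p2, n2, z_star=1.96):
    """CI for p1 - p2, using the sample proportions in the SE formula."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z_star * se, diff + z_star * se

lo, hi = diff_prop_ci(0.45, 300, 0.38, 250)   # hypothetical samples
print(round(lo, 3), round(hi, 3))
```

If 0 falls outside the interval, the data suggest a real difference between the two proportions at that confidence level.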

Success-failure condition (condition for central limit theorem)

We can confirm the sample size is sufficiently large by checking the success-failure condition and confirming the two calculated values are at least 10: np ≥ 10 and n(1 − p) ≥ 10.

when one or more conditions are not met

What about when the success-failure condition or the independence condition fails? In either case, the general ideas of confidence intervals and hypothesis tests remain the same, but the strategy or technique used to generate the interval or p-value changes. When the success-failure condition isn't met for a hypothesis test, we can simulate the null distribution of p̂ using the null value, p0. For a confidence interval when the success-failure condition isn't met, we can use what's called the Clopper-Pearson interval.

choosing a sample size when estimating a proportion

When collecting data, we choose a sample size suitable for the purpose of the study. Often this means choosing a sample size large enough that the margin of error - which is the part we add and subtract from the point estimate in a confidence interval - is sufficiently small that the sample is useful. For example, if the calculation requires 600.25 < n, we would need 601 participants or more to ensure the sample proportion is within 0.04 of the true proportion with 95% confidence. When an estimate of the proportion is available, we use it in place of the worst-case proportion value, 0.5. We want to minimize the margin of error while also ensuring np and n(1 − p) are at least 10 by choosing a suitable sample size.
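The 600.25 figure comes from solving z⋆ √(p(1−p)/n) ≤ margin of error for n, which can be sketched as:

```python
import math

def sample_size_for_moe(moe, z_star=1.96, p=0.5):
    """Smallest n with z* × sqrt(p(1-p)/n) ≤ moe; p = 0.5 is the worst case."""
    n_exact = (z_star / moe) ** 2 * p * (1 - p)
    return n_exact, math.ceil(n_exact)

n_exact, n = sample_size_for_moe(moe=0.04)
print(n_exact, n)   # roughly 600.25, so we need 601 participants
```

When a prior estimate of p is available, passing it in place of the worst-case 0.5 gives a smaller required n.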

One sample t-tests

When completing a hypothesis test for the one-sample mean, the process is nearly identical to completing a hypothesis test for a single proportion. First, we find the Z-score using the observed value, null value, and standard error; however, we call it a T-score since we use a t-distribution for calculating the tail area. Then we find the p-value using the same ideas we used previously: find the one-tail area under the sampling distribution, and double it.

Criteria for hypothesis testing for two proportions

When conducting a hypothesis test that compares two independent population proportions, the following characteristics should be present: 1. The two samples are simple random samples that are independent. 2. n p̂ ≥ 10 and n q̂ ≥ 10 for each sample, where q̂ = 1 − p̂. 3. A growing literature states that the population must be at least 10 or 20 times the size of the sample. This keeps each population from being oversampled and causing incorrect results.

use the pooled proportion when H0 is p1 − p2 = 0

When the null hypothesis is that the proportions are equal, use the pooled proportion, p̂_pooled, to verify the success-failure condition and estimate the standard error: p̂_pooled = (number of "successes") / (number of cases) = (p̂1 n1 + p̂2 n2) / (n1 + n2).
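The pooled proportion is just total successes over total cases; a sketch with hypothetical counts:

```python
def pooled_proportion(x1, n1, x2, n2):
    """Pooled p-hat = total successes / total cases across both groups."""
    return (x1 + x2) / (n1 + n2)

# hypothetical counts: 45 successes out of 200, and 30 out of 150
pp = pooled_proportion(45, 200, 30, 150)
print(pp)   # 75 successes over 350 cases
```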

compare the p-value to α to evaluate H0

When the p-value is less than the significance level, α, reject H0. We would report a conclusion that the data provide strong evidence supporting the alternative hypothesis. When the p-value is greater than α, do not reject H0, and report that we do not have sufficient evidence to reject the null hypothesis. In either case, it is important to describe the conclusion in the context of the data.

statistical significance versus practical significance

When the sample size becomes larger, point estimates become more precise and any real differences in the mean and null value become easier to detect and recognize. Even a very small difference would likely be detected if we took a large enough sample. Sometimes researchers will take such large samples that even the slightest difference is detected, even differences where there is no practical value. In such cases, we still say the difference is statistically significant, but it is not practically significant.

Paired observations

Two sets of observations are paired if each observation in one set has a special correspondence or connection with exactly one observation in the other data set. When two sets of observations have this special correspondence, they are said to be paired. 1. To analyze a paired data set, we simply analyze the differences, and we can use the same t-distribution techniques.
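"Analyze the differences" reduces the paired problem to a one-sample analysis. A minimal sketch with hypothetical before/after scores for the same subjects:

```python
import math
import statistics

# hypothetical before/after scores for the same five subjects
before = [72, 80, 65, 90, 77]
after = [75, 84, 66, 95, 80]
diffs = [a - b for a, b in zip(after, before)]   # analyze the differences

xbar_d = statistics.mean(diffs)                  # mean difference
se_d = statistics.stdev(diffs) / math.sqrt(len(diffs))
t = xbar_d / se_d                                # T-score for H0: mean diff = 0
print(round(xbar_d, 2), round(t, 2))
```

From here, the p-value would come from a t-distribution with df = n − 1 = 4, exactly as in the one-mean procedure.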

checking success-failure and computing SEp^ for a hypothesis test

When using the p-value method to evaluate a hypothesis test, we check the conditions for pˆ and construct the standard error using the null value, p0, instead of using the sample proportion. In a hypothesis test with a p-value, we are supposing the null hypothesis is true, which is a different mindset than when we compute a confidence interval. This is why we use p0 instead of pˆ when we check conditions and compute the standard error in this context.

Central limit theorem for the sample mean

When we collect a sufficiently large sample of n independent observations from a population with mean μ and standard deviation σ, the sampling distribution of x̅ will be nearly normal with Mean = μ. Standard Error (SE) = σ/√n
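A quick simulation can illustrate the theorem: sample means cluster around μ with spread close to σ/√n. The population parameters and sample size below are assumptions chosen for the demo:

```python
import random
import statistics

random.seed(1)
mu, sigma, n = 50.0, 10.0, 100

# Draw many samples of size n and record each sample mean x-bar.
xbars = [statistics.mean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(2000)]

observed_se = statistics.stdev(xbars)
theoretical_se = sigma / n ** 0.5  # SE = sigma / sqrt(n) = 1.0 here
print(round(statistics.mean(xbars), 1), round(observed_se, 2))
```

The observed spread of the simulated means should land close to the theoretical SE of 1.0.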

more on 2-proportion hypothesis tests

When we conduct a 2-proportion hypothesis test, usually H0 is p1 − p2 = 0. However, there are rare situations where we want to check for some difference in p1 and p2 that is some value other than 0. For example, maybe we care about checking a null hypothesis where p1 − p2 = 0.1. In contexts like these, we generally check the success-failure condition using each sample proportion and construct the standard error of pˆ1 − pˆ2: Group 1: n1 pˆ1 ≥ 10 and n1(1 − pˆ1) ≥ 10; Group 2: n2 pˆ2 ≥ 10 and n2(1 − pˆ2) ≥ 10, where pˆ1 and pˆ2 represent the sample proportions and n1 and n2 represent the sample sizes.

null distribution

When we identify the sampling distribution under the null hypothesis, it has a special name: the null distribution. The p-value represents the probability of the observed pˆ, or a pˆ that is more extreme, if the null hypothesis were true. To find the p-value, we generally find the null distribution, and then we find a tail area in that distribution corresponding to our point estimate.
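Sketch of finding a two-sided p-value as a tail area of the (approximately normal) null distribution; the null value, sample proportion, and sample size are hypothetical:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical test of H0: p = 0.5 with p-hat = 0.58 from n = 200.
p0, p_hat, n = 0.5, 0.58, 200
se = math.sqrt(p0 * (1 - p0) / n)       # null distribution: N(p0, SE)
z = (p_hat - p0) / se
p_value = 2 * (1 - normal_cdf(abs(z)))  # two-sided tail area
print(round(z, 2), round(p_value, 4))
```

With a significance level of α = 0.05, a p-value this small would lead us to reject H0.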

The sampling distribution of x̅

For the sampling distribution, we take many samples of size n and calculate the statistic from each sample to estimate the parameter from the population. 1. The t-distribution tends to be more useful when working with the sample mean. 2. The sample mean tends to follow a normal distribution centered at the population mean, μ, when certain conditions are met. Additionally, we can compute a standard error for the sample mean using the population standard deviation σ and the sample size n.

point estimate

a summary statistic from a sample that is just one number used as an estimate of the population parameter. For example, the proportion of "yes" responses in a sample is a point estimate of the proportion in the entire population. Use pˆ for the point estimate of a population proportion p.

Significance Level

Hypothesis testing is built around rejecting or failing to reject the null hypothesis: we do not reject H0 unless we have strong evidence.
- Rule of thumb: for those cases where the null hypothesis is actually true, we do not want to incorrectly reject H0 more than 5% of the time.
- The significance level (α) is the probability of rejecting the null hypothesis when it is actually true (a Type 1 error).
- Significance levels for hypothesis testing are set beforehand; common choices are α = 0.05, α = 0.01, and α = 0.001.

confidence interval for proportions

It represents a range of plausible values where we are likely to find the population parameter. With a confidence interval, we take a point estimate and build an interval around it. If we report only the point estimate pˆ, we probably will not hit the exact population proportion. On the other hand, if we report a range of plausible values, representing a confidence interval, we have a good shot at capturing the parameter. The confidence interval is built from our sample: we compute a point estimate and a standard error, then use the z-score corresponding to the chosen confidence level to construct an interval around pˆ that is likely to capture the population parameter.

substitution approximation

The approximation of using pˆ in place of p is also useful when computing the standard error of the sample proportion: SEpˆ = sqrt(p(1 − p)/n) ≈ sqrt(pˆ(1 − pˆ)/n). This substitution technique is sometimes referred to as the "plug-in principle". The computed standard error tends to be reasonably stable even when observing slightly different proportions in one sample or another.
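A tiny demonstration of that stability: nearby sample proportions give nearly identical plug-in standard errors (the sample size and proportions below are hypothetical):

```python
import math

def se_prop(p, n):
    """Plug-in standard error for a sample proportion."""
    return math.sqrt(p * (1 - p) / n)

n = 500
# Slightly different observed proportions barely change the SE:
for p_hat in (0.42, 0.45, 0.48):
    print(p_hat, round(se_prop(p_hat, n), 4))
```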

sample size

often represented by the letter n

p-value

The p-value is a way of quantifying the strength of the evidence against the null hypothesis and in favor of the alternative hypothesis. Statistical hypothesis testing typically uses the p-value method rather than making a decision based on confidence intervals. The p-value is the probability of observing data at least as favorable to the alternative hypothesis as our current data set if the null hypothesis were true (we can use the tail area to calculate it). We typically use a summary statistic of the data, in this section the sample proportion, to help compute the p-value and evaluate the hypotheses.

paired data

refers to matching data so that: 1. Observations share every characteristic except for the one under investigation. 2. A common use for paired data is to assign one individual in each matched pair to a treatment group and the other to a control group. 3. The "pairs" don't have to be different people; they could be the same individuals at different times. The purpose of paired data is to get better statistics by controlling for the effects of other "unwanted" variables.

sample proportion pˆ

sample proportion pˆ provides a single plausible value for the population proportion p. However, the sample proportion isn't perfect and will have some standard error associated with it. When stating an estimate for the population proportion, it is better practice to provide a plausible range of values instead of supplying just the point estimate.

Errors

the difference we observe from the poll versus the parameter in the estimate. The error consists of two aspects: sampling error and bias. If we reduce how often we make one type of error, we generally make more of the other type. alpha (α) = probability of a Type 1 error, a value between 0 and 1. beta (β) = probability of a Type 2 error, a value between 0 and 1. Note that α and β are probabilities computed under different assumptions (H0 true versus H0 false), so they do not sum to 1; rather, for a fixed sample size, decreasing α generally increases β, and vice versa.

sampling distribution

the distribution of values taken by the statistic in all possible samples of the same size from the same population. When we take a lot of samples from a population and calculate the statistic from each, we obtain data points that form a sampling distribution. One simulation isn't enough to get a great sense of the distribution of estimates we might expect, so we should run many simulations: we take many samples of size n and calculate the statistic from each to estimate the parameter from the population. For example, suppose we run the simulation 10,000 times and create a histogram of the results from all 10,000 simulations. This distribution of sample proportions is called a sampling distribution. We can characterize this sampling distribution as follows:
1. Center: the center of the distribution is the same as the parameter, the population proportion. Notice that the simulation mimicked a simple random sample of the population, which is a straightforward sampling strategy that helps avoid sampling bias.
2. Spread: the variability of a point estimate is called the standard error of the distribution. When we're talking about a sampling distribution or the variability of a point estimate, we typically use the term standard error rather than standard deviation, and the notation SEpˆ is used for the standard error associated with the sample proportion.
3. Shape: the distribution is symmetric and bell-shaped, and it resembles a normal distribution.
*SAMPLING DISTRIBUTIONS ARE NEVER OBSERVED, BUT WE KEEP THEM IN MIND: In real-world applications, we never actually observe the sampling distribution, yet it is useful to always think of a point estimate as coming from such a hypothetical distribution. Understanding the sampling distribution will help us characterize and make sense of the point estimates that we do observe.*
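The simulation described above can be sketched as follows; the population proportion, sample size, and number of simulations are illustrative choices:

```python
import random
import statistics

random.seed(7)
p, n, sims = 0.3, 100, 10000

# Simulate many simple random samples and record each sample proportion.
p_hats = [sum(random.random() < p for _ in range(n)) / n for _ in range(sims)]

print(round(statistics.mean(p_hats), 2))   # center: near p = 0.3
print(round(statistics.stdev(p_hats), 3))  # spread: near sqrt(p(1-p)/n) ~ 0.046
```

A histogram of `p_hats` would show the symmetric, bell-shaped distribution described in point 3.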

Tests for paired data

The paired sample t-test compares the means for the two groups to see if there is a statistically significant difference between the two. It is also called the "related measures" t-test or dependent samples t-test.
*Criteria for using the t-test*
1. Simple random sampling is used.
2. Two measurements (samples) are drawn from the same pair of individuals or objects.
3. Either the matched pairs have differences that come from a population that is normal, or the number of differences is sufficiently large so that the distribution of the sample mean of differences is approximately normal.
4. Matched or paired samples (samples are dependent).
*Hypothesis Testing*
1. Becomes a test of one population mean.
2. The differences are the data.
3. The parameter is the population mean of the differences, μd.
4. Use n − 1 degrees of freedom, where n is the number of differences.
5. Use the t-test statistic.

testing for goodness of fit using chi-square notation

the technique is commonly used in two circumstances: 1.Given a sample of cases that can be classified into several groups, determine if the sample is representative of the general population. 2.Evaluate whether data resemble a particular distribution, such as a normal distribution or a geometric distribution.
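A minimal sketch of the goodness-of-fit statistic, X² = Σ (observed − expected)²/expected, with made-up counts and null proportions:

```python
# Chi-square goodness-of-fit statistic for counts across several groups.
observed = [45, 35, 20]             # hypothetical counts in 3 groups
expected_props = [0.40, 0.40, 0.20] # group proportions claimed under H0
n = sum(observed)
expected = [p * n for p in expected_props]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1              # degrees of freedom = groups - 1
print(round(chi2, 3), df)  # 1.25 2
```

The p-value then comes from the upper tail of the chi-square distribution with `df` degrees of freedom.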

Changing the confidence level

we want to consider confidence intervals where the confidence level is higher than 95%, such as a confidence level of 99%. To create a 99% confidence level, we must also widen our 95% interval. On the other hand, if we want an interval with lower confidence, such as 90%, we could use a slightly narrower interval than our original 95% interval. *Confidence level using any confidence level* If a point estimate closely follows a normal model with standard error SE, then a confidence interval for the population parameter is: point estimate ± z⋆ × SE where z⋆ corresponds to the confidence level selected.
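The point estimate ± z⋆ × SE formula can be sketched for a sample proportion; the z⋆ values below are the standard normal quantiles for each level, and pˆ = 0.6 with n = 400 are hypothetical:

```python
import math

# z* for common confidence levels (standard normal quantiles).
Z_STAR = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def ci_for_proportion(p_hat, n, level):
    """point estimate +/- z* x SE for a sample proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    z = Z_STAR[level]
    return p_hat - z * se, p_hat + z * se

for level in (0.90, 0.95, 0.99):
    low, high = ci_for_proportion(0.6, 400, level)
    print(level, round(low, 3), round(high, 3))  # higher confidence -> wider
```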

parameter of interest

the quantity in the population that we are interested in estimating. This entire-population response proportion is generally referred to as the parameter of interest. When the parameter is a proportion, it is often denoted by p, and we often refer to the sample proportion as pˆ. Unless we collect responses from every individual in the population, p remains unknown, and we use pˆ as our estimate of p.

