Exam 3 (Modules 7-9)


Hypothesis Testing: What can go wrong?

- Don't base your null hypothesis on what you see in the data
- Don't base your alternative hypothesis on the data, either
- Don't make your null hypothesis what you want to show to be true
- Don't forget to check the conditions
- Don't accept the null hypothesis
- If you fail to reject the null hypothesis, don't think that a bigger sample would be more likely to lead to rejection

Tests and Intervals: What can go wrong?

- Don't interpret the P-value as the probability that the null hypothesis is true
- Don't believe too strongly in arbitrary alpha levels
- Don't confuse practical and statistical significance
- Don't forget that in spite of all your care, you might make a wrong decision

Confidence Interval Interpretation

- You can claim to have the specified level of confidence that the interval you have computed actually covers the true value.
- For the same sample size and true population proportion, more certainty means less precision (a wider interval), and more precision (a narrower interval) implies less certainty.
- Sample size alone is not what matters; the response rate does. A higher response rate gives better results.

General Steps in Hypothesis Testing

1.) State the hypotheses
2.) Determine (and check assumptions for) the sampling distribution model
3.) Calculate the test statistic (the mechanics)
4.) State your conclusions and decisions

From the law of large numbers (LLN), we know:

1.) The sample statistic is unlikely to be exactly equal to the population parameter it is estimating
2.) The sample statistic is likely to be close to the value of the population parameter it is estimating, especially when we have a relatively large, random sample

A sample is chosen randomly from a population that can be described by a Normal model.
1.) What is the sampling distribution model for the sample mean? Describe shape, center, and spread.
2.) If we choose a larger sample, what's the effect on this sampling distribution model?

1.) Because the population is Normal (and, more generally, by the CLT), the shape is Normal, centered at the population mean µ, with spread (standard deviation) σ/√n.
2.) The standard deviation would be smaller (σ/√n shrinks as n grows), but the center would stay at µ.
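A minimal Python sketch of this idea (the values of µ, σ, and n are made up): draw many random samples, record each sample mean, and compare the simulated center and spread to µ and σ/√n.

    import numpy as np

    rng = np.random.default_rng(42)
    mu, sigma, n = 100, 15, 25          # hypothetical population mean, SD, and sample size

    # Draw 10,000 random samples of size n and record each sample mean
    sample_means = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)

    print(f"center of sample means: {sample_means.mean():.2f}  (theory: mu = {mu})")
    print(f"spread of sample means: {sample_means.std():.2f}   (theory: sigma/sqrt(n) = {sigma / n**0.5:.2f})")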

Confidence Interval

A confidence interval communicates a range of plausible values for a population parameter. The range is defined so that all values in it are consistent with the data obtained in our random sample. A level C confidence interval for a model parameter is an interval of values, usually of the form estimate ± margin of error, found from data in such a way that C% of all random samples will yield intervals that capture the true parameter value.

One-proportion z-interval

A confidence interval for the true value of a proportion
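As a sketch, here is how such an interval could be computed in Python with scipy, using the 48-of-156 Facebook counts that appear later in this set:

    from math import sqrt
    from scipy.stats import norm

    successes, n = 48, 156              # 48 of 156 respondents update daily (example below)
    p_hat = successes / n
    se = sqrt(p_hat * (1 - p_hat) / n)  # standard error of p-hat
    z_star = norm.ppf(0.975)            # critical value for 95% confidence (about 1.96)
    moe = z_star * se                   # margin of error
    print(f"95% CI: {p_hat:.3f} +/- {moe:.3f} -> ({p_hat - moe:.3f}, {p_hat + moe:.3f})")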

How do hypothesis tests and confidence intervals go hand-in-hand in helping us think about models?

A hypothesis test makes a yes/no decision about the plausibility of a parameter value; a confidence interval shows us the range of plausible values for the parameter.

T-Test

A t-test (based on Student's t-distribution) compares means and tells you whether they differ from each other. The t-test also tells you how significant the difference is; in other words, it lets you know whether the difference could have happened by chance. Every t-value has a P-value: the probability that results at least as extreme as your sample's would occur by chance alone. P-values range from 0 to 1 (0% to 100%) and are usually written as decimals. Low P-values indicate that your data are unlikely to have occurred by chance alone. A one-sample t-test tests the mean of a single group against a known mean.
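A minimal one-sample t-test in Python using scipy (the data values and the hypothesized mean of 50 are made up for illustration):

    import numpy as np
    from scipy.stats import ttest_1samp

    # Made-up measurements; H0: the true mean is 50
    data = np.array([52.1, 49.8, 53.0, 51.2, 48.9, 54.3, 50.7, 52.8, 49.5, 53.6])
    t_stat, p_value = ttest_1samp(data, popmean=50)   # two-sided by default
    print(f"t = {t_stat:.2f}, P-value = {p_value:.4f}")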

One-proportion z-test

A test of the null hypothesis that the proportion of a single sample equals a specified value, carried out by referring a statistic (z) to a Standard Normal model.
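A sketch of the mechanics in Python (the null value p0 = 0.25 is made up; note the SD uses the hypothesized value, a point this set returns to below):

    from math import sqrt
    from scipy.stats import norm

    p0, successes, n = 0.25, 48, 156    # H0: p = 0.25 (hypothetical null value)
    p_hat = successes / n
    sd = sqrt(p0 * (1 - p0) / n)        # SD uses the hypothesized p0, not p-hat
    z = (p_hat - p0) / sd
    p_value = 2 * norm.sf(abs(z))       # two-sided P-value from the Standard Normal model
    print(f"z = {z:.2f}, P-value = {p_value:.4f}")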

One-sided Alternative

An alternative hypothesis is one-sided (> or <) when we are interested in deviations in only one direction away from the hypothesized value. For the same data, the one-sided P-value is half the two-sided P-value, so a one-sided test will reject the null hypothesis more often.
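This halving is easy to verify in Python (reusing the z value from the z-test sketch above):

    from scipy.stats import norm

    z = 1.66                             # hypothetical test statistic
    one_sided = norm.sf(z)               # P(Z >= z): deviation in one direction only
    two_sided = 2 * norm.sf(abs(z))      # deviations in either direction
    print(f"one-sided: {one_sided:.4f}, two-sided: {two_sided:.4f}")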

Two-sided Alternative

An alternative hypothesis is two-sided (≠) when we are interested in deviations in either direction away from the hypothesized parameter value

When does normal model with mean p and SD √(pq/n) work well as a model for the sampling distribution of a sample proportion?

Assumptions/Conditions:
1.) Independence Assumption: the individuals in the sample must be independent of each other; if not, the observations will resemble each other too much and distort the SD.
2.) Randomization Condition: the data should come from an experiment in which subjects were randomly assigned to treatments or from a survey based on a simple random sample.
3.) 10% Condition: sampling TOO much of the population can also be a problem; once you've sampled more than about 10% of the population, the remaining individuals are no longer really independent of each other.
4.) Success/Failure Condition: you should have at least 10 successes and 10 failures in your data.
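The two countable conditions can be checked mechanically; here is a small Python helper (the function name is hypothetical, and independence/randomization still have to be judged from the study design):

    def proportion_conditions_ok(successes, n, population_size):
        """Check the two countable conditions for using a Normal model for p-hat."""
        failures = n - successes
        ten_percent = n <= 0.10 * population_size             # 10% Condition
        success_failure = successes >= 10 and failures >= 10  # Success/Failure Condition
        return ten_percent and success_failure

    print(proportion_conditions_ok(successes=48, n=156, population_size=10_000))  # True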

Sampling Distribution Models- A Summary

At the heart is the idea that the STATISTIC ITSELF IS A RANDOM VARIABLE; we don't know what it will be because it comes from a random sample.
- This sample-to-sample variability is what generates the sampling distribution.
- The sampling distribution shows us the distribution of possible values that the statistic could have had.
For the mean and the proportion, the CLT tells us that we can model their sampling distributions directly with a Normal model.
Two basic truths about sampling distributions:
1.) Sampling distributions arise because samples vary. Each random sample will contain different cases and, so, a different value of the statistic.
2.) Although we can always simulate a sampling distribution, the Central Limit Theorem saves us the trouble for means and proportions.

Hypothesis Test

Compares a hypothesized value for the population parameter to the value of the statistic from our random sample. If we find that our sample statistic is highly inconsistent with the hypothesized value of the parameter, then we throw out the hypothesized value

Sampling Distribution

Different random samples give different values for a statistic. The distribution of the statistic over all possible samples is called the sampling distribution.
- The sampling distribution model shows the behavior of the statistic over all possible samples of the same size n.
- The sampling distribution model for how a statistic from a sample varies from sample to sample allows us to quantify that variation and to make statements about where we think the corresponding population parameter is.

Sampling Distribution vs. Distribution of the Sample

Distribution of the Sample: when you take a sample, you always look at the distribution of its values, usually with a histogram, and you may calculate summary statistics; this is wise.
Sampling Distribution: an imaginary collection of all the values that a statistic might have taken for all possible random samples (the one you got and the ones you didn't get); we use the sampling distribution model to make statements about how the statistic varies.

Errors

Type I: the null hypothesis is true, but we mistakenly reject it (false positive); the probability of a Type I error is denoted α.
Type II: the null hypothesis is false, but we fail to reject it (false negative); the probability of a Type II error is denoted β.
THE ONLY WAY TO REDUCE BOTH TYPES OF ERROR IS TO COLLECT MORE EVIDENCE/DATA.
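A simulation sketch in Python of the Type I error rate: if the null is actually true, a test at α = 0.05 should reject (wrongly) about 5% of the time. The population values 50 and 10 are arbitrary.

    import numpy as np
    from scipy.stats import ttest_1samp

    rng = np.random.default_rng(0)
    alpha, rejections, trials = 0.05, 0, 2_000

    # The null is really true here (the population mean is exactly 50),
    # so every rejection is a Type I error
    for _ in range(trials):
        sample = rng.normal(50, 10, size=30)
        if ttest_1samp(sample, popmean=50).pvalue < alpha:
            rejections += 1

    print(f"Type I error rate: {rejections / trials:.3f}  (should be near alpha = {alpha})")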

Margin of Error

In a confidence interval, the extent of the interval on either side of the observed statistic value is called the margin of error.
- Used to describe uncertainty in estimating the population value.
- Almost any population parameter (proportion, mean, or regression slope) can be estimated with some margin of error.
- A margin of error is typically the product of a critical value from the sampling distribution and a standard error from the data.
- A smaller margin of error corresponds to a confidence interval that pins down the parameter precisely; a large margin of error corresponds to a confidence interval that gives relatively little information about the estimated parameter.
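A quick Python illustration of the critical-value-times-standard-error product, showing how the margin of error shrinks as n grows (p-hat = 0.308 is the Facebook proportion from this set; the larger n values are hypothetical):

    from math import sqrt
    from scipy.stats import norm

    z_star = norm.ppf(0.975)             # 95% critical value
    p_hat = 0.308                        # sample proportion (Facebook example in this set)
    for n in (156, 624, 2496):           # quadrupling n halves the margin of error
        moe = z_star * sqrt(p_hat * (1 - p_hat) / n)
        print(f"n = {n:4d}: margin of error = {moe:.3f}")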

De Moivre claimed that the sampling distribution can be modeled well by a Normal model, but does it always work?

No, it doesn't always work - his claim is only approximately true (which is okay, as models are only supposed to be approximately true)

Differences Between t-models and Normal

Student's t-models are unimodal, symmetric, and bell-shaped, just like the Normal. However, t-models with only a few degrees of freedom have longer tails and a larger standard deviation than the Normal (that's what makes the margin of error bigger). As the degrees of freedom INCREASE, the t-models look more and more like the Normal. A t-model with infinite degrees of freedom is exactly Normal (though in practice we never have infinite degrees of freedom).
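You can watch this convergence in Python by comparing t critical values to the Normal one as the degrees of freedom grow:

    from scipy.stats import norm, t

    # 97.5th percentile: the two-sided 95% critical value
    for df in (2, 5, 10, 30, 100, 1000):
        print(f"df = {df:4d}: t* = {t.ppf(0.975, df):.3f}")
    print(f"Normal:      z* = {norm.ppf(0.975):.3f}")  # t* approaches z* as df grows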

P-Value

The (conditional) probability of observing a value for a test statistic at least as far from the (null) hypothesized value as the statistic value actually observed, if the null hypothesis is true.
- A small P-value indicates either that the observation is improbable or that the probability calculation was based on incorrect assumptions; the assumed truth of the null hypothesis is the assumption under suspicion.
- A large P-value just tells us that we have insufficient evidence to doubt the null hypothesis. In particular, it does not prove the null to be true.
- How small the P-value has to be to reject the null hypothesis is highly context-dependent. Your conclusion about any null hypothesis should be accompanied by the P-value of the test.
- The P-value is NOT the probability that the null hypothesis is true.

Central Limit Theorem

The Central Limit Theorem tells us that the sampling distributions of both the sample proportion and the sample mean are approximately Normal.
- The sampling distribution of the mean is approximately Normal, no matter what the underlying distribution of the data is.
- The CLT says that this happens in the limit, as the sample size grows.
- The mean of a random sample is a random variable whose sampling distribution can be approximated by a Normal model; the larger the sample, the better the approximation will be.
- It essentially follows the same assumptions we saw for modeling proportions.

Degrees of Freedom

The degrees of freedom of a distribution represent the number of independent quantities that are left after we've estimated the parameters. Simply the number of data values, n, minus the number of estimated parameters (for means, that's just n-1)

Effect Size

The difference (distance) between the null hypothesis value and the true value of a model parameter.
- The effect size is central to how we think about the power of a hypothesis test.
- A larger effect is easier to see and results in larger power (a smaller chance of making a Type II error).
- Small effects are naturally more difficult to detect.
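A simulation sketch in Python estimating power for a one-sample t-test at several effect sizes (the baseline mean 50, σ = 10, and n = 30 are all made up; it relies on scipy's vectorized ttest_1samp):

    import numpy as np
    from scipy.stats import ttest_1samp

    rng = np.random.default_rng(1)

    def power(effect, n=30, sigma=10, alpha=0.05, trials=2_000):
        """Fraction of samples that reject H0: mu = 50 when the truth is 50 + effect."""
        samples = rng.normal(50 + effect, sigma, size=(trials, n))
        pvals = ttest_1samp(samples, popmean=50, axis=1).pvalue
        return (pvals < alpha).mean()

    for effect in (1, 3, 5, 8):
        print(f"effect = {effect}: power ~ {power(effect):.2f}")  # bigger effect -> more power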

For the sampling distribution of y-bar:

The mean is μ

For the sampling distribution of p-hat:

The mean is p

Critical Value

The value in the sampling distribution model of the statistic whose P-value is equal to the alpha level. It is often denoted with an asterisk, as z* or t*.

Sampling Variability (sampling error)

The variability we expect to see from one random sample to another. It is sometimes called sampling error, but sampling variability is the better term

To use a Normal Model, we need to specify two parameters:

Its mean and its standard deviation. For a sample proportion, the center of the histogram is naturally at p, so that is the mean of the Normal model, and the standard deviation is √(pq/n).

Standard Error

When we estimate the standard deviation of a sampling distribution using statistics found from the data, the estimate is called a standard error.
- e.g., Facebook use among young people: 156 people responded to a survey and 48 said they update their status at least daily, giving a sample proportion of p-hat = 48/156 ≈ 30.8%.
- Because of sampling variability, a second sample would likely not yield a proportion of exactly 30.8%.
- What can we really say about p? We can't say that 30.8% of all users update their status this much, nor can we even say it's probably close to 30.8% until we quantify the uncertainty.
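The standard error for this example is a one-liner in Python:

    from math import sqrt

    successes, n = 48, 156
    p_hat = successes / n
    se = sqrt(p_hat * (1 - p_hat) / n)  # the SD formula with p-hat plugged in for p
    print(f"p-hat = {p_hat:.3f}, SE(p-hat) = {se:.4f}")  # about 0.308 and 0.037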

Why are proportions special in hypothesis testing?

When we test a hypothesis about a proportion, we use the hypothesized null value rather than the observed proportion to find the P-value.
- For a confidence interval, in contrast, we use the observed proportion to calculate a standard error.
- When the hypothesized null value is close to the observed value, the SE and SD are similar, so the usual relationship between the test and the interval works reasonably well.
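A side-by-side sketch in Python (p0 = 0.25 is a made-up null value; the counts are again the Facebook example):

    from math import sqrt

    p0, successes, n = 0.25, 48, 156
    p_hat = successes / n
    sd_null = sqrt(p0 * (1 - p0) / n)        # SD under the null: used for the test
    se_obs = sqrt(p_hat * (1 - p_hat) / n)   # SE from the data: used for the interval
    print(f"SD (null p0 = {p0}):  {sd_null:.4f}")
    print(f"SE (observed p-hat): {se_obs:.4f}")  # similar when p0 is near p-hat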

