Ch. 9: Significance Tests

null hypothesis

claim we seek evidence against ("no difference") -significance tests assess the strength of the evidence against Ho -often easier to write Ha first, then Ho

How are Type I & Type II errors related?

inversely

What happens if the data don't support Ha?

don't run the test (e.g., in a 1-sided test, if the sample result falls on the side you're not interested in, the data give no evidence for Ha)

significance level

fixed value to compare the p-value to -the probability of reaching the wrong conclusion (rejecting Ho) when the null hypothesis is true

What are the 2 possible conclusions for a significance test?

reject Ho or fail to reject Ho -p-value small: reject Ho & conclude Ha (in context) -p-value big: fail to reject Ho means can't conclude Ha (in context)

Type I error

reject Ho when Ho true = α -conclude that something does something when it actually doesn't

What are some common errors students make in conclusions?

stating hypotheses or conclusions in terms of sample statistics instead of population parameters -"accepting" Ho instead of failing to reject it

What is probability of Type I error? What can we do to reduce probability of Type I? What are drawbacks?

-P(Type I error) = α -decreasing α decreases the risk of a Type I error, BUT it also decreases the power of the test to detect a specific alternative (e.g., µ = 1); since P(Type II) = 1 - power, lower power means a greater chance of making a Type II error

How is power related to the probability of Type II error? Will you be expected to calculate the power of a test on the AP?

-β is the probability of a Type II error for a particular alternative, and power = 1 - β, the probability of reaching the right conclusion (rejecting Ho) when that alternative is true -no, the AP exam asks you to interpret power, not calculate it

What is the difference between a 1-sided & 2-sided alternative hypothesis? How can you decide which to use?

1-sided: interested only in values bigger than/ only in values smaller than the null hypothesis value 2-sided: interested in any difference from the null hypothesis value -decide from the question of interest, before looking at the data

Remember these rules about significance and sample size.

1. A smaller sample size requires a larger observed difference (stronger evidence) to reject the null. 2. Higher power gives a better chance of detecting a difference when it really is there. 3. At any significance level & desired power, detecting a small difference requires a larger sample than detecting a large one.

How do you ensure you're using stats tests wisely?

1. Don't ignore lack of significance: small differences that are detectable only with large sample sizes can be of great practical significance, so when planning a study, verify that the test you plan to use has a high probability (power) of detecting a difference of the size you hope to find. 2. Statistical inference is not valid for all sets of data: formal statistical inference cannot correct basic flaws in the design. 3. Beware of multiple analyses: the reasoning behind statistical significance works well if you decide what difference you are seeking, design a study to search for it, and use a significance test to weigh the evidence you get, but it may have little meaning in other settings (e.g., running many tests and reporting only the significant ones).

What are the 3 components of the Conclude step?

1. Because the p-value is less than/ greater than α 2. we reject/ fail to reject Ho 3. we do/ don't have convincing evidence to believe ___ (in context)

What is the 4 step process for a 1 sample z test for a proportion?

1. State: the hypotheses we want to test and the significance level -define the parameter, Ho, Ha -name the method: 1-sample z test for a proportion (or give the formula) 2. Plan: choose the appropriate inference method & check conditions (random, normal, independent) 3. Do: if the conditions are met, perform the calculations -compute the test statistic -find the p-value 4. Conclude: interpret the results of the test in the context of the problem -always ask whether the data give convincing evidence against Ho
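A minimal sketch of the Plan/Do/Conclude calculations in Python; the numbers p0 = 0.50, p hat = 0.56, n = 400, and α = 0.05 are made up for illustration:

# Hypothetical example: Ho: p = 0.50 vs Ha: p > 0.50 at alpha = 0.05
from math import sqrt
from scipy.stats import norm

p0, p_hat, n, alpha = 0.50, 0.56, 400, 0.05

# Plan: Large Counts condition using p0 (Ho assumed true)
assert n * p0 >= 10 and n * (1 - p0) >= 10

# Do: test statistic and one-sided p-value
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
p_value = norm.sf(z)  # P(Z >= z) because Ha is one-sided (>)

# Conclude: compare the p-value to alpha
print(f"z = {z:.2f}, p-value = {p_value:.4f}, reject Ho: {p_value < alpha}")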

What are the 3 conditions for conducting a significance test for a population mean?

1. random ("quote") 2. normal -population stated as normal -CLT -assume by graphing & state no obvious outliers/ string skew 3. independent: 10% condition or randomized experiment (because no comparative population)

What is the plan process for a significance test for a population proportion?

1. random: quoted 2. independent: if sampling without replacement, use the 10% rule (10n <= N); if a randomized experiment, assume independence (the subjects basically don't affect each other) 3. normal: since we assume Ho is true when performing a significance test, we use the null parameter value p0 when checking the Large Counts condition: n(p0) >= 10 and n(1 - p0) >= 10
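A small helper sketch for the independent (10%) and normal (Large Counts) checks; the values n = 100, p0 = 0.3, N = 2000 are hypothetical:

def check_plan_conditions(n, p0, N=None):
    """Return (10% condition, Large Counts condition) for a 1-prop z test."""
    ten_percent = True if N is None else 10 * n <= N    # only needed when sampling without replacement
    large_counts = n * p0 >= 10 and n * (1 - p0) >= 10  # use p0 because Ho is assumed true
    return ten_percent, large_counts

print(check_plan_conditions(n=100, p0=0.3, N=2000))  # (True, True)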

How do we answer the question of how many observations we need?

1. significance level: protection against a Type I error (declaring a significant result when Ho is actually true) 2. practical importance: how large a difference between p hat & p, or x bar & µ, is important 3. power: how confident we want to be that the study will detect a difference of the size thought to be important -i.e., if you want a smaller α, higher power, or to detect a smaller difference, you will need a larger sample size
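A back-of-the-envelope sketch for a one-sided z test for a mean, using the standard sample-size formula n = ((z_alpha + z_beta) * sigma / delta)^2; the values sigma = 10, delta = 2, alpha = 0.05, power = 0.80 are hypothetical:

from math import ceil
from scipy.stats import norm

sigma, delta = 10, 2           # population SD and the smallest difference worth detecting
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha)  # one-sided critical value
z_beta = norm.ppf(power)       # z corresponding to the desired power (1 - beta)
n = ceil(((z_alpha + z_beta) * sigma / delta) ** 2)
print(n)  # a smaller delta, a smaller alpha, or higher power all push n up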

What do you do if given t/ z only & asked to find p value?

1. for a t statistic, use tcdf 2. for a z statistic, use normalcdf -look at Ha to decide lower or upper tail; if Ha is "not equal," take the one-tail p-value and multiply it by 2
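A quick sketch of turning a given t or z into a p-value with scipy; the values t = 2.1 with df = 14 and z = 1.8 are made up:

from scipy.stats import norm, t

# two-sided p-value from a t statistic (Ha: mu != mu0): upper-tail tcdf, doubled
t_stat, df = 2.1, 14
p_two_sided = 2 * t.sf(abs(t_stat), df)

# one-sided p-value from a z statistic (Ha: p > p0): upper-tail normalcdf
z_stat = 1.8
p_upper = norm.sf(z_stat)

print(round(p_two_sided, 4), round(p_upper, 4))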

What is another way to interpret the p value?

An average difference of ____ between groups would happen ____ % of the time just by chance in random samples of this size from the population when the true mean difference is µ = 0.

Write the Conclude step for a significance test.

Because the p-value of _____ is less than/ greater than α = ___, we reject/ fail to reject Ho. We do/ don't have convincing evidence to believe ______ (context).

Interpret the conditional probability: P(p hat <= 0.64 | p = 0.80) = 0.0075.

If H0 is true, and _______ (H0 in context), there's a/ less than a ___ chance that _______ (the statistic takes the observed value or one more extreme, in context). The _____ (small/ large) probability gives strong/ weak evidence against Ho and in favor of Ha: p/ µ </> parameter value.

Interpret the power of a test given mu & power.

If, in fact, the mean _______ (x variable) is ______, there is a ____ (power) % chance that we will reject the null hypothesis in our test. That is, there is a ____ (power) % chance that we will decide that the mean (x-variable ) is ______ (alternative hypothesis inequality).

p value

The probability, computed assuming Ho is true, that the statistic would take a value as extreme as or more extreme than the one actually observed. -smaller p-value: stronger evidence against Ho provided by the data

Can you use a calculator for the Do step? Are there any drawbacks?

Yes: STAT --> TESTS --> 1-PropZTest. Drawback: if you report only the calculator output without showing the test statistic and p-value calculations, an entry mistake can cost you all credit, so write out the work too.

Why is it important for drug studies to be double-blind?

The double blind component ensures the results can be attributed to the treatments because neither the researchers nor the subjects knew what treatment each subject was getting.

How do you interpret the p-value?

The probability, computed assuming Ho is true, that the statistic (p hat or x bar) would take a value as extreme as or more extreme than ___ is ___.

What 4 factors affect the power of a test? Why does this matter?

To get more power: 1. B: beta (power = 1 - β, so anything that lowers β raises power) 2. E: effect size (the farther the true Ha parameter value is from the Ho value, the greater the power) 3. A: alpha (increasing α increases power) 4. N: sample size (increasing n decreases the standard deviation of the sampling distribution, so the curves are taller & skinnier with less overlap; in the picture, β is the area under the other curve) -this matters because you can design a study with enough power to detect the difference you care about
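A rough sketch showing how α and n move the power of a one-sided z test for a mean; the values mu0 = 0, true mean 0.5, and sigma = 2 are hypothetical:

from math import sqrt
from scipy.stats import norm

mu0, mu_true, sigma = 0, 0.5, 2

def power_one_sided(n, alpha):
    """Power of the z test of Ho: mu = mu0 vs Ha: mu > mu0 when the true mean is mu_true."""
    cutoff = mu0 + norm.ppf(1 - alpha) * sigma / sqrt(n)    # x-bar value needed to reject Ho
    return norm.sf((cutoff - mu_true) / (sigma / sqrt(n)))  # P(x-bar > cutoff | mu = mu_true)

for n in (25, 100):
    for alpha in (0.01, 0.05):
        print(n, alpha, round(power_one_sided(n, alpha), 3))
# power rises with a larger n, a larger alpha, and a bigger gap between mu_true and mu0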

How do you know when there's a difference?

When a null hypothesis ("no effect" or "no difference") can be rejected at the usual levels (α = 0.05 or α = 0.01), there is good evidence of a difference. But that difference may be very small. When large samples are available, even tiny deviations from the null hypothesis will be significant.
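A short sketch of that point: with a made-up shift of only 0.01 standard deviation, a z test is nowhere near significant at n = 100 but overwhelmingly significant at n = 1,000,000:

from math import sqrt
from scipy.stats import norm

shift = 0.01  # hypothetical observed difference of 0.01 SD
for n in (100, 1_000_000):
    z = shift * sqrt(n)                 # z = (x-bar - mu0) / (sigma / sqrt(n)) with sigma = 1
    print(n, round(2 * norm.sf(z), 6))  # two-sided p-value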

Should a significance level be fixed?

Yes, fix α before collecting data, because the decision rule depends on it: -p-value < α means you reject Ho & conclude Ha -p-value >= α means you fail to reject Ho & can't conclude Ha

Can you use confidence intervals to decide between 2 hypotheses? What is an advantage to using confidence intervals for this purpose? Why don't we always use confidence intervals?

Yes: for a 2-sided test, use a confidence level that matches, CL = 1 - α (e.g., a 95% confidence level goes with α = 0.05). Advantage: the CI gives a range of plausible values for the parameter, not just a reject/ fail-to-reject decision. Why not always: sometimes we just need a yes/ no answer, and for proportions the CI and the test don't match exactly because the standard errors differ slightly (the CI uses p hat, the test statistic uses p0).
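A small sketch of the duality for a mean on made-up data: the 95% t interval excludes mu0 exactly when the two-sided t test rejects Ho at alpha = 0.05.

import numpy as np
from scipy import stats

data = np.array([5.1, 4.8, 5.6, 5.3, 4.9, 5.4, 5.2, 5.0])  # hypothetical sample
mu0, alpha = 4.7, 0.05

t_stat, p_value = stats.ttest_1samp(data, mu0)  # two-sided t test
ci = stats.t.interval(1 - alpha, len(data) - 1,              # confidence level, df
                      loc=data.mean(), scale=stats.sem(data))  # matching 95% t interval

print(p_value < alpha, not (ci[0] <= mu0 <= ci[1]))  # the two decisions agree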

How will you know whether you committed an error?

You won't, but you can judge the consequences of each, and in any one test you can commit at most one of the two.

What does the CI give that the test statistic doesn't?

a range of plausible values for the parameter (population mean)

alternative hypothesis

claim we hope/ suspect to be true ("is different") -find evidence for Ha -can be 1-sided if only interested in 1 outcome or 2-sided if don't have specific direction in mind

paired data

data that involve making two observations on the same individual, or one observation on each of two similar individuals -if we measure the same quantitative variable twice, we make comparisons by analyzing the differences within each pair

Type II error

fail to reject Ho when Ho is false -β = 1 - power -conclude that something has no effect when it actually does

How do you calculate P-values using t-distributions?

if only given t, use tcdf with df = n - 1 -calculator: T-Test -state: 1-sample t test for µ

What is stronger: the connection between two-sided tests and confidence intervals for proportions or means?

means since both inference methods for means use the standard error of the sample mean in the calculations

test statistic

measures how far a sample statistic diverges from what is expected if the Ho is true (in standardized units): test statistic = (statistic - parameter) / (statistic standard deviation) -not on formula sheet (just these words) -i.e. how far is sample result from null parameter

paired t procedures

one-sample t procedures used to perform inference about the mean difference µd -mean difference = 0 in Ho
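A brief sketch of a paired t test on made-up before/after measurements (Ho: µd = 0 vs Ha: µd > 0):

import numpy as np
from scipy import stats

before = np.array([12.0, 15.2, 9.8, 11.4, 13.6, 10.9])  # hypothetical paired data
after = np.array([13.1, 15.0, 11.2, 12.0, 14.8, 11.5])

diffs = after - before  # analyze the differences within each pair
t_stat, p_value = stats.ttest_1samp(diffs, 0, alternative='greater')
print(round(t_stat, 2), round(p_value, 4))  # same result as stats.ttest_rel(after, before, alternative='greater')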

When are the results of a study statistically significant?

p-value < alpha

power

probability that the test will reject Ho at the chosen significance level when a specified alternative parameter value is true -power = 1 - P(Type II error) -the chance of finding the difference you're looking for (i.e., rejecting Ho when Ho is false)

What test statistic do we use when testing a population mean? Is it on the formula sheet?

test statistic = (statistic - parameter)/ (standard deviation of the statistic), which for a mean becomes t = (x bar - µ0)/ (s_x/ √n) -note the standard error uses the sample standard deviation s_x divided by √n, not the population σ, so build it from the general form
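A sketch of the by-hand t calculation next to the calculator-style shortcut, on made-up data (Ho: µ = 10 vs Ha: µ < 10):

import numpy as np
from math import sqrt
from scipy import stats

data = np.array([9.2, 10.1, 8.8, 9.5, 9.9, 9.0, 9.4])  # hypothetical sample
mu0 = 10

# by hand: t = (x-bar - mu0) / (s / sqrt(n)), then the lower-tail tcdf
t_stat = (data.mean() - mu0) / (data.std(ddof=1) / sqrt(len(data)))
p_by_hand = stats.t.cdf(t_stat, len(data) - 1)  # args: t value, df

# calculator-style shortcut (T-Test with Ha: mu < mu0)
t_calc, p_calc = stats.ttest_1samp(data, mu0, alternative='less')

print(round(t_stat, 3), round(p_by_hand, 4), round(p_calc, 4))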

What is the formula for a test statistic?

z = (p hat - po)/ √((po(1 - po))/ n)

What are common significance levels?

α = 0.10, α = 0.05, α = 0.01

