BIOS Test 3

critical value examples

*on notebook*

confidence intervals for a population mean example

*slide 32-34 on slideset - sigma known*

sampling distribution of p-hat

*written in 11/11 notes*

estimating sigma[sample mean1 - sample mean2] (two different types of two-sample t tests, depending on how standard error is calculated)

- "equal variances" or "pooled" option assumes that the two populations have the same variance - "unequal variances" option makes no assumption regarding the equivalence of the population variances

null AND alternative hypotheses

- H0 should never overlap with HA; they are intended to be set in such a way that one must be true and the other false
- a researcher seeks evidence against the null hypothesis as a way of bolstering the alternative hypothesis

confidence level

- [of a confidence interval] refers to the success rate of the method in capturing the parameter of interest; can be controlled through the reliability coefficient
- increasing the reliability coefficient increases the confidence level
- denoted (1-a)100%
- over the collection of ALL (1-a)100% confidence intervals that could be constructed from repeated random samples of size n, (1-a)100% will contain the population parameter of interest
- we can be (1-a)100% confident that a single computed (1-a)100% confidence interval contains the population parameter of interest

why not just use X (# of successes)

- a count of successes is only meaningful in the context of the total number of observations, so it is not useful for comparing results from different studies
- we are now interested in the proportion of some outcome in a population
- the parameter of interest is a population proportion, denoted p

parameter estimation

- a full conclusion regarding parameter estimates should contain:
  a. the population on which inferences are made (if possible)
  b. the parameter estimate (including units of measurement)
  c. an assessment of uncertainty (either the standard error of the mean or a confidence interval)

crossover design

- all subjects receive both treatments in random order; subjects are independently and randomly assigned to one of two groups
  a. those assigned to group 1 receive treatment 1 first, followed by a washout period, then receive treatment 2
  b. those assigned to group 2 receive treatment 2 first, followed by a washout period, then receive treatment 1
- ex: mCPP and weight loss on slides 12-13

hypothesis tests (tests of significance)

- assess the evidence provided by data about some claim concerning a population parameter
- start with a careful statement of the claims we want to test:
  a. the null hypothesis (H0)
  b. the alternative hypothesis (HA or Ha or H1)

chi-squared test for goodness of fit

- assesses whether the observed counts "fit" the distribution outlined by H0
- requires that both expected counts have a value of 5.0 or greater

requirements for inference about p1 - p2

- both samples can be treated as SRS's
- the conditions for a binomial distribution are satisfied in both samples
- both sample sizes must be big enough

how do we compare the two populations?

- by doing inference about the population parameter mu1 - mu2, the difference between the population means
- the statistic that estimates this difference is sample mean1 - sample mean2

problems with the Wald CI for p

- can yield unsatisfactory results, especially when p is near 0 or 1 and n is small (proportions cannot be negative or greater than 1)
- overestimates the precision of estimating p...too "liberal"
- may result in a lower limit for the CI that is less than 0 or an upper limit for the CI that is greater than 1

chi-squared distribution

- chi-squared tests are based on an approximation to chi-squared distributions
- not symmetric...always skewed right (squared values cannot be negative, so the curve starts at 0)
- there is a different chi-squared distribution for each possible value of the degrees of freedom
- total area under the curve = 1
- in a chi-square goodness-of-fit test involving k outcomes, when the null hypothesis is true, the test statistic X^2 has the chi-square distribution with k - 1 degrees of freedom
  a. the p-value of the chi-square test is the area to the right of the calculated X^2 statistic under this chi-square distribution (slide 14)
- *examples on slides 14-17*
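A minimal R sketch of that p-value calculation; the X^2 value and df below are made-up illustration values, not from the slides:

  # hypothetical goodness-of-fit result: X^2 = 4.8 with k - 1 = 2 df
  x2 <- 4.8
  df <- 2
  pchisq(x2, df = df, lower.tail = FALSE)  # area to the right of X^2 = the p-value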

significance levels (alpha)

- chosen, not calculated
- the probability of making the mistake of rejecting a true null hypothesis (type I error)
- if p-value <= alpha, then reject H0, and we say that the data are statistically significant at the alpha level
- if p-value > alpha, then fail to reject H0, and we say that the data are not statistically significant at the alpha level

null hypothesis

- the claim tested by a hypothesis test
  a. H0 is a statement that the population parameter is equal to some claimed value
  b. H0 is a statement of no difference, no effect, or no association
  c. H0 is usually an accepted belief
- hypothesis tests are designed to assess the strength of evidence AGAINST the null hypothesis
- under H0, the population parameter theta takes on a specific value (theta0)

Pearson's chi-square test statistic

- compares how close the sample data are to what would be expected if H0 were true
- larger differences between observed data and expected values produce a larger test statistic, which increases the chances of rejecting H0

confidence interval - paired data

- confidence interval for the mean difference in the responses for the two groups within pairs of subjects in the entire population
- with paired data, the (1-a)100% confidence interval for mu is calculated by: sample mean +- t*(s/square root of n)
- ex: slides 26-28
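A minimal R sketch of a paired confidence interval, using made-up before/after data (not the slide 26-28 example):

  # hypothetical pre/post measurements on the same five subjects
  before <- c(180, 172, 195, 188, 176)
  after  <- c(174, 170, 188, 185, 171)
  t.test(after - before)$conf.int  # 95% CI for the mean within-pair difference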

expected counts

- consider a binomial random variable X, the number of successes out of n fixed, identical, independent trials
  a. to test the null hypothesis that the proportion has a specified value, H0: p = p0, compute the expected numbers of successes and failures
  b. expected number of successes = np0
  c. expected number of failures = n(1 - p0)
- expected counts are hypothetical counts and do not have to be round numbers
- must sum to n
  a. simply a statement of how the n observations would be expected to fall into the 2 possible outcomes, according to the null hypothesis

comparing test statistics and critical values

- critical values can also be used as standards of comparison for test statistics when determining the results of hypothesis tests
- critical values separate the rejection regions from the non-rejection region
  a. rejection regions: values of the test statistic that lead to rejection of the null hypothesis
  b. non-rejection region: values of the test statistic that do not lead to rejection of the null hypothesis
- any value of the test statistic in the non-rejection region, between -critical value and +critical value, would result in a p-value greater than alpha, and H0 will not be rejected at the significance level
  a. fail to reject H0 if |test statistic| < +critical value
- any value of the test statistic in the rejection region, less than -critical value or greater than +critical value, would result in a p-value less than or equal to alpha, and H0 will be rejected at the significance level
  a. reject H0 if |test statistic| >= +critical value
- *ex: body temperature on slides 42-43*

CI's for p1 - p2

- estimate +/- [critical value X SE(estimate)]
- the estimate is your sample statistic, here p-hat1 - p-hat2
- the critical value depends on your desired level of confidence
  a. 95% CI: z = 1.96
  b. 99% CI: z = 2.575
- the standard error of the estimate can be estimated many different ways
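A minimal R sketch of this interval using the Wald standard error; the counts are hypothetical:

  # hypothetical data: x1 successes out of n1, x2 successes out of n2
  x1 <- 45; n1 <- 100
  x2 <- 30; n2 <- 90
  p1 <- x1 / n1; p2 <- x2 / n2
  se <- sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # Wald SE of p-hat1 - p-hat2
  (p1 - p2) + c(-1, 1) * 1.96 * se                     # 95% CI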

Wald CI

- estimate the standard error of p-hat by replacing p with p-hat in the formula
- *written in 11/11 notes*
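The 11/11 notes aren't reproduced here, but a sketch of the standard Wald interval this card describes, with hypothetical counts, would be:

  # hypothetical data: x successes out of n trials
  x <- 12; n <- 40
  phat <- x / n
  se <- sqrt(phat * (1 - phat) / n)  # SE of p-hat, with p replaced by p-hat
  phat + c(-1, 1) * 1.96 * se        # 95% Wald CI for p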

conservative approach to hypotheses

- ex: number of children
- it is more difficult to reject a null hypothesis when you are using a two-sided alternative
- when you do reject a null hypothesis with a two-sided alternative, it is more convincing than if you had used a one-sided alternative
- there is a mathematical equivalence between confidence intervals and hypothesis tests with two-sided alternative hypotheses
- the confidence level of a confidence interval is defined as (1-a)100%
  a. this is the same alpha (significance level) used in hypothesis testing
- you will fail to reject H0 at the significance level alpha if theta0 is contained in the (1-a)100% CI
- you will reject H0 at the significance level alpha if theta0 is not in the (1-a)100% CI

facts about t distributions

- a family of distributions with members sharing characteristics
- each member is identified by its degrees of freedom (df), which is the parameter of a t distribution
  a. the degrees of freedom measure the amount of information available in the data that can be used to estimate sigma
  b. they measure the reliability of s as an estimate of sigma
- degrees of freedom are lost when estimating one parameter from an estimate of another parameter
  a. to estimate sigma, we must first estimate mu
  b. deviations from mu are estimated using deviations from the sample mean
- using the sample mean as an estimate of the population mean in the process of estimating the population standard deviation leads to one restriction:
  a. deviations from the sample mean must sum to zero
  b. only n - 1 data points are free to vary; due to this restriction, the last one is fixed

95% confidence intervals

- for a given sample, if we estimate that mu lies somewhere in the interval sample mean +/- 1.96(sigma/square root of n), we'll be right approximately 95% of the time
- when sampling from a normally distributed population with known standard deviation sigma, the formula sample mean +/- 1.96(sigma/square root of n) yields a 95% confidence interval for the population mean mu of the form: [sample mean - 1.96(sigma/square root of n), sample mean + 1.96(sigma/square root of n)]
- over the collection of all 95% confidence intervals that could possibly be constructed from repeated random samples of size n, 95% will contain the population parameter mu
- *slide 13 of slideset*
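A minimal R sketch with made-up values for n, the sample mean, and the known sigma:

  n <- 36; xbar <- 98.2; sigma <- 0.6       # hypothetical values
  xbar + c(-1, 1) * 1.96 * sigma / sqrt(n)  # 95% CI for mu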

finding p-values using a t-table

- go to the row with the appropriate df
- find |t| in that row, or where it should be if it doesn't occur in that row
- use the "two-sided p-value" row to find p
  a. if |t| is found in the table, go to the bottom of the column containing |t| to find the 2-sided p-value
  b. if |t| isn't found in the table, go to the bottom of the columns on either side of |t|; report the p-value as less than the "two-sided p-value" for the column on the left and greater than the "two-sided p-value" for the column on the right
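R gives the exact two-sided p-value instead of the table's bracket; a sketch with a hypothetical t statistic and df:

  t_stat <- 2.15; df <- 24                          # hypothetical values
  2 * pt(abs(t_stat), df = df, lower.tail = FALSE)  # two-sided p-value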

p-values

- how likely is the effect observed in your sample data if H0 is true?
- the strength of the evidence against H0 is quantified by a probability called a p-value
- the probability that random chance would yield the observed results, or results even further from H0, if H0 is actually true
- a p-value measures how compatible the observed sample data are with the null hypothesis
- small p-values are evidence against the null hypothesis, since they indicate that the observed sample data would be unlikely if the null hypothesis were true
- large p-values fail to provide evidence against the null hypothesis, since they indicate that the observed sample data would be reasonably likely if the null hypothesis were true
- free throw example

sampling distribution of p-hat1 - p-hat2

- how well does the statistic p-hat1 - p-hat2 estimate the parameter p1 - p2?
  a. we need to know about the sampling distribution of p-hat1 - p-hat2
  b. when two random variables are Normally distributed, the new variable "difference" also follows a Normal distribution, centered on the difference of the two variables' means and with variance equal to the sum of the two variables' variances
- the counts of successes and failures have to be >10 in each sample

sampling distribution of sample mean1 - sample mean2

- how well does the statistic sample mean1 - sample mean2 estimate the parameter mu1 - mu2?
  a. we need to know about the sampling distribution of sample mean1 - sample mean2
  b. when two random variables are normally distributed, the new variable "difference" also follows a normal distribution, centered on the difference of the two variables' means and with variance equal to the sum of the two variables' variances
- when sample mean1 and sample mean2 are both normally distributed, sample mean1 - sample mean2 is Normal with mean mu1 - mu2 and variance sigma1^2/n1 + sigma2^2/n2 -> slide 9
- the standard error of the difference between two sample means is sigma[sample mean1 - sample mean2] = square root of {sigma1^2/n1 + sigma2^2/n2}

pooled t-test (independent samples)

- if the two populations have the same variance, there is really only one variance parameter to estimate...sigma1^2 = sigma2^2 = sigma^2
  a. s1^2 and s2^2 are both estimates of the same parameter, sigma^2
  b. under this assumption, the two sample variances are "pooled" together, and a weighted average is taken to create a single estimate of sigma^2 called sp^2
- *slide 13*
- if the 2 populations are each normally distributed with common standard deviation (sigma1 = sigma2 = sigma), then under the null hypothesis H0: mu1 = mu2, the test statistic has exactly the t distribution with n1 + n2 - 2 degrees of freedom (slide 14)
- we could choose to use the pooled method for tests and CIs, but only if the two population variances really are equal
  a. the pooled test will give unreliable results when unequal variances exist
  b. the "unequal variances" t procedures are almost always more accurate than the pooled procedures
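In R's t.test, the pooled procedure is requested with var.equal = TRUE; a sketch with made-up samples:

  # hypothetical independent samples
  x1 <- c(5.1, 4.8, 6.0, 5.5, 5.2)
  x2 <- c(6.9, 7.2, 7.5, 6.4, 7.0)
  t.test(x1, x2, var.equal = TRUE)  # pooled test, df = n1 + n2 - 2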

rare event rule

- if, under a given assumption, the probability of a particular observed event is exceptionally small, we conclude that the assumption is probably not correct
- if an event happens that would rarely happen if a claim were true, then the fact that it happened is good evidence that the claim is false

t distributions in the real world

- in most real-world scenarios, inference about a single population mean should use t-based inference methods
- z-based procedures are invalid when s is used to estimate sigma
- when making inference about a single population mean when sigma is unknown, degrees of freedom is calculated by n - 1
  a. we lose 1 df by estimating mu prior to estimating sigma

the sample proportion

- interested in the unknown proportion p of a population that has some outcome
  a. call the outcome of interest a "success"
- the statistic that estimates the parameter p is the sample proportion p-hat
- p-hat = x/n = number of successes in the sample / total number of individuals in the sample

comparison of two means

- it is more common to compare the means of two different populations, where both population means are unknown, than it is to test claims about the mean of one population
- often, the two groups have received different treatments or undergone different exposures
- an independent samples t test, often simply called a two-sample t test, uses independent samples
  a. two samples are independent if the data points in one sample are unrelated to the data points in the second sample
- the groups that we want to compare are Population 1 and Population 2
  a. we have a separate SRS from each population, or responses from two treatments in a randomized comparative experiment (a subscript shows which group a parameter or statistic describes)
- *slide 4*

T-table

- lists critical values of t distributions
  a. each row in the table refers to a t distribution with a particular df: t(df)
  b. entries in the body of the table are t values
  c. the top row provides various confidence levels, which are useful when finding critical values for confidence intervals
  d. the bottom row provides two-sided p-values, which are useful in hypothesis testing

Agresti-Coull (plus 4) CI for p

- makes a very simple adjustment to the Wald interval, but works almost as well as the Wilson interval
- works well as long as n >= 10 and the confidence level 'C' is at least 90%
- requirements:
  a. the sample can be treated as a simple random sample
  b. the conditions for a binomial distribution are satisfied
  c. the confidence level 'C' is at least 90%
  d. the sample size must be big enough (considered large enough if at least 10)

test statistic - "bridge"

- a measure of how far the data diverge from H0
- calculated from the sample data
- large values show that the data are far from what we would expect if H0 were true
- used for determining whether there is significant evidence against H0
- used to compute p-values and/or compared to critical values

margin of error

- a measure of the precision (reliability) of the point estimate
- constructed by multiplying a reliability coefficient by the standard error of the point estimate

comparative studies

- more convincing than one-sample investigations
- comparative inference is more common than one-sample inference
- a common design, called a paired design, uses one-sample procedures to compare two groups
  a. paired t procedures

confidence intervals and critical values

- not all confidence intervals have confidence level 95%
- to find a level (1-a)100% confidence interval, we need to find the appropriate reliability coefficient
- the reliability coefficient needed to construct a (1-a)100% confidence interval is the value z* of the standard normal distribution that marks off central area 1-a under the standard normal density curve
- the central area 1-a lies between the two points -z* and z*

inference about a population proportion

- now we turn to confidence intervals and testing hypotheses for binary (dichotomous) variables
- recall: the number of successes X is a binomial random variable when:
  a. there is a fixed number of independent trials
  b. there are two possible outcomes for each trial
  c. the probability of success is the same for each trial
- the binomial distribution of X can be approximated by a Normal distribution

paired data

- the objective of a paired comparisons test is to eliminate a maximum number of sources of extraneous variation by making the pairs similar with respect to as many variables as possible
- mitigates the confounding effects of extraneous variables
- in a paired design, each subject/observation is matched to another subject/observation
- examples:
  a. same patients before and after treatment
  b. identical twins used in genetic studies
  c. patients matched by age, race, and gender
- paired data typically arise from one of the following three scenarios:
  a. longitudinal designs
  b. matched-pair designs (twin studies)
  c. crossover designs

comparing two proportions

- often, we want to compare two populations or the responses to two treatments based on two independent samples
- we are now interested in the difference between the proportions of two independent populations
- we are comparing Population 1 and Population 2
  a. we have a separate SRS from each population, or responses from two treatments in a randomized comparative experiment
  b. a subscript shows which group a parameter or statistic describes
- we compare the populations by doing inference about the population parameter p1 - p2, the difference between the population proportions
  a. the statistic that estimates this difference is the difference between the two sample proportions, p-hat1 - p-hat2

confidence interval

- provides a range of reasonable values that are intended to contain the parameter of interest, along with a measure of our confidence that the parameter of interest is actually contained in the interval
- most take the form point estimate +/- margin of error

degrees of freedom

- refers to the number of scores that are free to vary
  a. suppose you are asked to pick any three numbers
  b. there are no restrictions, and the numbers are completely free to vary
  c. in standard terminology, there would be three degrees of freedom
- new example:
  a. now, suppose you are asked to pick any 3 numbers, but they must sum to 15
  b. there is 1 restriction, and you will lose some of the freedom to vary the three numbers that you choose
  c. two numbers are free to vary, but one is not
  d. you have lost 1 degree of freedom, and there are two degrees of freedom
- directly related to the sample size n
  a. s estimates sigma more accurately as the sample size increases, so as n increases, df increases
  b. the degrees of freedom determine the shape of a t distribution
  c. the t distribution becomes increasingly like the standard normal distribution as df increases

estimated standard error of p-hat1 - p-hat2

- replace p1 and p2 with p-hat1 and p-hat2 in the standard error formula (Wald CI)

conservative choice

- always round df down to the next available df in the table
- [for df of 57]: using df = 40 for a 95% CI, t* is about 2.021
  a. this method is guaranteed to give you a value greater than the true critical value
  b. in this case, the actual t* is 2.002465403
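Both values can be checked in R with qt:

  qt(0.975, df = 40)  # conservative t* for a 95% CI, about 2.021
  qt(0.975, df = 57)  # exact t*, about 2.002465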

longitudinal study design

- the same group of subjects is followed over time
- the paired data are usually from the same subject before and after some treatment is applied
- pretest/posttest data
- ex: exercise and serum triglycerides on slides 6-7

density curves for t distributions

- similar to the standard normal distribution...they are symmetric, bell-shaped, and centered on 0
- t distributions have more area in their tails than the standard normal distribution
  a. the broader tails accommodate the additional uncertainty that comes from estimating sigma with s

inference about mu in realistic settings

- since we are estimating the value of the population standard deviation with the sample standard deviation, a new source of uncertainty, or unreliability, is introduced
  a. because of this additional uncertainty, we can't use a standard normal z distribution when making inferences anymore

estimated standard error of the sample mean

- since we don't know sigma, we can estimate it using the sample standard deviation, s
  a. the estimated standard error of the sample mean is calculated by: standard error of the sample mean = s/square root of n

Agresti-Coull method

- start by adding 4 imaginary observations to the sample: two successes and two failures
  a. new, adjusted sample size ~n = n + 4
  b. new, adjusted number of successes ~x = x + 2
- the resulting new proportion is called the plus-four proportion, which is defined in the 11/11 notes
- *refer to 11/11 notes*
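A sketch of the plus-four interval consistent with this card, using hypothetical counts (the exact formula is in the 11/11 notes):

  # hypothetical data: x successes out of n trials
  x <- 12; n <- 40
  p4 <- (x + 2) / (n + 4)              # plus-four proportion
  se <- sqrt(p4 * (1 - p4) / (n + 4))
  p4 + c(-1, 1) * 1.96 * se            # 95% plus-four CI for p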

Agresti-Coull (+4) Method for comparing two proportions

- start by adding 4 imaginary observations: 1 success and 1 failure to each of the samples
- new, adjusted sample sizes ~n1 = n1 + 2 and ~n2 = n2 + 2
- new, adjusted numbers of successes ~x1 = x1 + 1 and ~x2 = x2 + 1
- *note* ~p1 = ~x1/~n1 = (x1 + 1)/(n1 + 2) and ~p2 = ~x2/~n2 = (x2 + 1)/(n2 + 2)
- plug into the formula
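A sketch of the same adjustment in R, using hypothetical counts:

  # hypothetical data for the two samples
  x1 <- 45; n1 <- 100
  x2 <- 30; n2 <- 90
  p1 <- (x1 + 1) / (n1 + 2)   # plus-four proportions
  p2 <- (x2 + 1) / (n2 + 2)
  se <- sqrt(p1 * (1 - p1) / (n1 + 2) + p2 * (1 - p2) / (n2 + 2))
  (p1 - p2) + c(-1, 1) * 1.96 * se   # 95% plus-four CI for p1 - p2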

alternative hypothesis

- a statement that the parameter has a value that is somehow different from the null hypothesis
  a. one-sided: states that the parameter is larger than or smaller than the value hypothesized in H0
  b. two-sided: states that the parameter is simply different from the value hypothesized in H0

matched-pair design

- subjects are matched in pairs in such a way that the two members of a pair share similar characteristics, such as age, gender, disease severity, etc.
  a. one member of each pair is randomly assigned to one group, and the other member is assigned to the other group
- ex: drug preventing premature birth on slides 9-10

t critical values

- t* is often written as t(df, 1-a/2)
- this makes it clearer that t* is the value that a random variable T following a t distribution with df degrees of freedom falls below with probability 1-a/2: P(Tdf < t*) = 1-a/2
  a. that is, t* is the (1-a/2)th percentile of the t distribution with df degrees of freedom

pooled vs. unequal variances

- the "unequal variances" option works whether or not the underlying population variances are equal - ALWAYS use the "unequal variances" t test

getting around conditions of Z test

- the condition that the variable be normally distributed is typical for most statistical procedures; the CLT holds
- the condition that the sample be an SRS is typical for most statistical procedures
  a. if your data don't come from a probability sample or a randomized experiment, your conclusions may be challenged
  b. often, we are able to treat a sample as if it is an SRS even if it is not an SRS
- Brigitte Baldi and David S. Moore example (visual perception in optical illusions) vs. sociology example
- the condition that we know the population standard deviation is unrealistic
  a. thus, z procedures for inference about one (or more) population mean(s) are not very useful, practically
  b. the good news is that we can alter the z procedures slightly to get around this problem

T-table patterns

- the degrees of freedom no longer increase by 1, but rather by 20, then 100, etc.
- two options to find a t* whose df isn't listed:
  a. conservative choice
  b. use technology (R)

hypothesis testing - one-sample T test

- the one-sample t test is used to test hypotheses about a single population mean mu when the population standard deviation is unknown
- to test the hypothesis H0: mu = mu0 when sigma is not known, we must use the one-sample t statistic: t = [sample mean - mu0]/[s/square root of n], which has a t distribution with df = n - 1
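In R, t.test runs this test directly; a sketch with a made-up sample and null value:

  # hypothetical sample, testing H0: mu = 50
  x <- c(48.2, 51.5, 49.9, 47.8, 52.1, 50.4)
  t.test(x, mu = 50)  # reports t, df = n - 1, and the two-sided p-value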

interpreting confidence intervals

- the parameter value is fixed (not random), so it either falls in the confidence interval or it does not
- never talk about the probability or confidence that the parameter is in an interval
- the interval is random, so we talk about our confidence that the interval contains the true parameter value
- a (1-a)100% confidence interval is an interval computed from sample data with a method that produces an interval containing the true value of the parameter of interest with a probability of 1-a
- interpretation for the (1-a)100% CI (LB, UB):
  a. we can be (1-a)100% confident that the interval ranging from LB to UB contains the true value of the parameter of interest

properties of confidence intervals

- the point estimate of the parameter will always be the center of a confidence interval
  a. a CI for a population mean will always be centered on the sample mean
- decreasing the sample size will lead to a wider interval, all else held constant
- increasing the confidence level will lead to a wider interval, all else held constant
- increasing the standard deviation will lead to a wider interval, all else held constant

requirements for T-based CI for mu

- the population standard deviation, sigma, is unknown, but the sample standard deviation, s, is known
- the sample can be regarded as a simple random sample (SRS) from the population
- the observations from the population are normally distributed and/or the sample size is large enough (CLT holds)

requirements for inference about p

- the sample can be treated as a simple random sample
- the conditions for a binomial distribution are satisfied
- the sample size must be big enough (case-by-case)

requirements Wald (large-sample) CI for p

- the sample can be treated as a simple random sample
- the conditions for a binomial distribution are satisfied
- the sample size must be big enough (with at least 15 successes and 15 failures)
- for comparing two samples, apply these requirements to both samples separately, except the threshold is at least 10 for each sample

requirements Agresti-Coull (+4) CI for p1 - p2

- the samples are independent and can be treated as SRS's
- the conditions for a binomial distribution are satisfied for both samples
- the confidence level C is at least 90%
- the sample sizes must be big enough (each at least 5)

robustness

- the t confidence interval and test are exactly correct when the variable of interest is exactly normally distributed in the population...however, no real data are exactly normal
- to find out how dependable the t procedures are in practice, we must introduce robustness
- a confidence interval or hypothesis test is called robust if the confidence level or p-value doesn't change very much when the procedure's requirements are violated
- the t procedures are considered robust to most violations, unless the data are very skewed and the sample size is really small (less than 15)

finding t critical value from a t table

- the value t* can be found in the t-table:
  1. find the row that corresponds to the appropriate degrees of freedom
  2. using the "confidence level" row at the top of the table, find the column that corresponds to the desired confidence level
  a. the value at the column/row intersection is the critical value t*
- BMI example in notes

paired T-test

- to compare the responses of the two groups in a paired design, find the difference between the responses within each pair
  a. then apply the one-sample t procedures to these differences
- the parameter mu in a paired t procedure is the mean difference in the responses to the two groups within pairs of subjects in the entire population
- a note about hypotheses:
  a. H0: mu = 0
  b. HA: mu does not = 0
  c. the null hypothesis says that group membership has no effect on the outcome of interest
- ex: slides 20-22, 24

statistical estimation reasoning

- to estimate the unknown population mean mu, use the sample mean of the random sample
- recall: for any normal distribution, about 95% of all values fall within 2 standard deviations of the mean; exactly 95% of all values fall within 1.96 standard deviations of the mean
- if we repeatedly sample from a population with mean mu, the distance separating the sample mean and mu is at most 1.96(sigma/square root of n) for 95% of all samples

confidence levels and critical values

- to find a level (1-a)100% confidence interval, we need to find the central area 1-a under the Tdf distribution
- the central area 1-a lies between the two points -t* and t*
  a. values of distributions (like t*) that mark off specified areas are called critical values of the distribution
  b. the critical value tells you how many standard errors should be subtracted from and added to a sample statistic to make an interval that contains the population parameter in a specified proportion of all samples

types of error

- type 1: the error committed when a true H0 is rejected
  a. can think of alpha as the type 1 error rate that you are willing to accept
  b. whenever we reject a null hypothesis, there is always the risk of a type 1 error
- type 2: the error committed when a false null hypothesis is NOT rejected
  a. the probability of committing a type 2 error is denoted by beta
  b. there is always the risk of a type 2 error when H0 is NOT rejected
  c. we generally exercise no control over beta, although we know that in most practical situations it is larger than alpha

statistical inference

- used to determine how trustworthy our conclusions are about a population based on a sample
- based on the sampling distributions of statistics
- confidence intervals
- hypothesis tests/tests of significance

goodness-of-fit test

- used to test the hypothesis that an observed frequency distribution fits (or conforms to) some claimed distribution
- the chi-squared test for goodness of fit may be used to test hypotheses about a single proportion
  a. H0: p = p0
  b. HA: p does not = p0

chi-squared test of homogeneity

- used to test the hypothesis that different populations have the same proportions of some characteristic
- H0: p1 = p2; HA: p1 does not = p2
- to test these hypotheses, the data are often arranged in a 2x2 table, which is a contingency (two-way) table with two rows and two columns

critical values

- values of distributions (like z*) that mark off specified areas under the probability density function are called critical values of the distribution
- the critical value tells you how many standard errors should be subtracted from and added to a sample statistic to make an interval that contains the population parameter in a specified proportion of all samples
- when sampling from a normally distributed population with known standard deviation sigma, the formula sample mean +/- z*(sigma/square root of n) yields a confidence interval with confidence level (1-a)100% for the population mean mu of the form: [sample mean - z*(sigma/square root of n), sample mean + z*(sigma/square root of n)]
- z* is often written as z(1-a/2)
  a. this makes it clearer that z* is the value such that a standard normal random variable Z satisfies P(Z < z*) = 1-a/2
  b. z* is the (1-a/2)th percentile of the standard normal distribution
- ex: (1-a)100% = 95% corresponds to a = 0.05
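The z* values can be computed in R with qnorm:

  qnorm(1 - 0.05 / 2)  # z* for 95% confidence, 1.959964
  qnorm(1 - 0.01 / 2)  # z* for 99% confidence, 2.575829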

independent samples

- we want to test the following hypotheses:
  a. H0: mu1 - mu2 = 0 (mu1 = mu2)
  b. HA: mu1 - mu2 does not = 0 (mu1 does not = mu2)
- we want to test whether the data contained in the samples provide enough evidence to suggest that there is a difference between the means of the two populations
  a. is the difference in sample means more than we'd expect to see based on chance alone?

estimated standard error of sample mean

- when making inferences about a population mean, sigma will generally be unknown
- sigma can be estimated using the sample standard deviation, s
- the estimated standard error of the sample mean is calculated by SE(sample mean) = s/square root of n

using R to perform two-sample T procedures

- when the data from the two samples are in the same column, and their group membership is recorded in a separate column, use this:
  a. t.test(y ~ x, data= )
- when the data from the two samples are in two different columns (two different variables), use this:
  a. t.test(x, y, paired=FALSE)
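A runnable sketch of both calls, with made-up data and the hypothetical names y (outcome) and x (group):

  # hypothetical long-format data: one outcome column, one group column
  d <- data.frame(y = c(5.1, 4.8, 6.0, 7.2, 6.9, 7.5),
                  x = c("ctrl", "ctrl", "ctrl", "trt", "trt", "trt"))
  t.test(y ~ x, data = d)                                        # long format
  t.test(d$y[d$x == "ctrl"], d$y[d$x == "trt"], paired = FALSE)  # two columns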

two-sample t test (unequal variances)

- when two populations are each normally distributed, and no assumption is made regarding the equivalence of the variances for the two populations, then under H0: mu1 = mu2 the test statistic has an approximate t distribution (slide 17)
- degrees of freedom come from the Welch-Satterthwaite approximation (slide 18)

choice of HA

- you should always use a two-sided HA (not-equal-to sign) unless you have a very good scientific reason to do otherwise
- the decision to use a one-sided test should be made before data are collected, with input from subject-matter experts and statisticians

Two-sample T test (unequal variances)

1. verify that the requirements are satisfied
  - both samples can be regarded as SRS's from two distinct and independent populations
  - either: 1) both samples come from populations having normal distributions and/or 2) n1 >= 30 AND n2 >= 30
2. specify the null and alternative hypotheses: H0: mu1 - mu2 = 0; HA: mu1 - mu2 does not = 0
3. specify the significance level, a
4. calculate the test statistic (df will be given)
5. calculate the p-value
  - if HA: mu1 - mu2 does not = 0, then the p-value is 2P(Tdf > |t|)
6. state your conclusions
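Steps 4-5 are what R's default (unequal variances) t.test computes; a sketch with hypothetical samples:

  # hypothetical independent samples
  g1 <- c(12.1, 10.8, 11.5, 13.0, 12.4)
  g2 <- c(14.2, 13.6, 15.1, 14.8, 13.9)
  t.test(g1, g2, var.equal = FALSE)  # Welch test; df from the Welch-Satterthwaite approximation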

chi-squared test for one proportion

1. verify that the requirements are satisfied
  - the sample can be regarded as a simple random sample
  - the conditions for a binomial distribution are satisfied
  - the conditions np0 >= 5 and n(1 - p0) >= 5 are both satisfied
2. specify the null and alternative hypotheses: H0: p = p0 vs. HA: p does not = p0
3. specify a, the significance level
4. calculate the test statistic (11/11 notes)
5. calculate the p-value
6. state your conclusions
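R's chisq.test covers this case when given the observed counts and null proportions; a sketch with hypothetical counts:

  # hypothetical data: 62 successes and 38 failures, testing H0: p = 0.5
  chisq.test(c(62, 38), p = c(0.5, 0.5))  # X^2 statistic with 1 df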

steps for a hypothesis test for a mean mu with sigma known ("Z Test") *works same for t Test, except for calculating test statistic equation* - slide 3 of Quiz 9 last slideset

1. verify that the requirements are satisfied
  - we have a simple random sample
  - the population standard deviation is known
  - the variable appears to be normally distributed in the population
2. specify the null and alternative hypotheses
3. specify the significance level
4. calculate the test statistic: z = (sample mean - mu0)/(sigma/square root of n)
  a. this test statistic has a standard normal distribution when H0 is true
5. calculate the p-value
  a. if HA: mu does not equal mu0, then the p-value is 2P(Z > |z|)
6. state your conclusions
*body temperature example*
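A minimal R sketch of steps 4-5 with made-up numbers (not the slide example):

  # hypothetical values for a two-sided z test of H0: mu = mu0
  n <- 106; xbar <- 98.2; sigma <- 0.62; mu0 <- 98.6
  z <- (xbar - mu0) / (sigma / sqrt(n))  # test statistic
  2 * pnorm(abs(z), lower.tail = FALSE)  # two-sided p-value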

steps to make a Z interval

1. verify that the requirements are satisfied:
  - we have a simple random sample
  - the population standard deviation sigma is known
  - the variable appears to be normally distributed in the population
2. refer to the standard normal table to find the critical value z* = z(1-a/2) that corresponds to the desired confidence level
3. evaluate the margin of error: z*(sigma/square root of n)
4. using the calculated margin of error and the value of the sample mean, find the limits of the confidence interval: sample mean - margin of error and sample mean + margin of error

T-based confidence interval for mu

a (1-a)100% confidence interval for a population mean can be constructed using the following formula: sample mean +- t*(s/square root of n), where t* is the critical value for the t distribution with n - 1 degrees of freedom

confidence intervals for p

- estimate +/- [critical value X SE(estimate)]
- the estimate is your sample statistic, here p-hat
- the critical value depends on your desired level of confidence
  a. 95% CI: z = 1.96
  b. 99% CI: z = 2.575
- the standard error of the estimate can be estimated many different ways

complete journal-style conclusion

for a significant result, include:
- the population to which inferences may be made
- causal inferences, if any (randomized experiments)
- the evidence upon which the conclusion is based (test statistic, df, p-value)
- the direction of the effect (whether the parameter estimate is higher or lower than hypothesized in the null)
- the magnitude of the effect (estimated value of the parameter...sample mean)
- an uncertainty estimate for the point estimate (estimated standard error of the statistic or a confidence interval for the parameter)

point estimate

our best guess for the unknown parameter; it is the center of the interval

