AP Stats
binomial distribution (conditions)
1) binary? trials can be classified as success/failure 2) independent? trials must be independent 3) number? the number of trials (n) must be fixed in advance 4) success? the probability of success (p) must be the same for each trial
geometric distribution (conditions)
1) binary? trials can be classified as success/failure 2) independent? trials must be independent 3) trials? the goal is to count the number of trials until the first success occurs 4) success? the probability of success (p) must be the same for each trial
types of chi-square tests
1) goodness of fit: used to test the distribution of one group or sample against a hypothesized distribution 2) homogeneity: used when you have samples from 2 or more independent populations or 2 or more groups in an experiment. each individual must be classified based on a single categorical variable 3) association/independence: used when you have a single sample from a single population. individuals in the sample are classified by two categorical variables
the sampling distribution of the sample mean (central limit theorem)
1) if the population distribution is normal, the sampling distribution will also be normal with the same mean as the population. additionally, as n increases, the sampling distribution's standard deviation will decrease 2) if the population distribution is not normal, the sampling distribution will become more and more normal as n increases. the sampling distribution will have the same mean as the population, and as n increases the sampling distribution's standard deviation will decrease
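The two claims above can be checked with a short simulation. This is a sketch, not from the source: the exponential population, sample sizes, and repetition count are all made-up choices; the population mean is 1, so the sample means should center near 1 and their spread should shrink as n grows.

```python
import random
import statistics

random.seed(1)

def sample_means(n, reps=2000, lam=1.0):
    """Draw `reps` samples of size n from a skewed Exp(lam) population
    and return the sample mean of each one."""
    return [statistics.mean(random.expovariate(lam) for _ in range(n))
            for _ in range(reps)]

for n in (2, 30):
    means = sample_means(n)
    # center stays near the population mean (1); spread shrinks like 1/sqrt(n)
    print(n, round(statistics.mean(means), 2), round(statistics.stdev(means), 2))
```

With n = 30 the histogram of these means is already close to normal even though the population is strongly skewed.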
interpreting a residual plot
1) is there a curved pattern? if so, a linear model may not be appropriate. 2) are the residuals small in size? if so, predictions using the linear model will be fairly precise 3) is there increasing (or decreasing) spread? if so, predictions for larger (smaller) values of x will be more variable
experimental designs
1)CRD-(completely randomized design)- all experimental units are allocated at random among all treatments 2)RBD-(randomized block design)- experimental units are put into homogeneous blocks. the random assignment of the units to the treatments is carried out separately within each block 3)matched pairs- a form of blocking in which each subject receives both treatments in a random order or the subjects are matched in pairs as closely as possible and one subject in each pair receives each treatment, determined at random
sampling techniques
1)SRS- number the entire population, draw numbers from a hat (every set of n individuals has equal chance of selection) 2)Stratified- split the population into homogeneous groups, select an SRS from each group 3)Cluster- split the population into heterogeneous groups called clusters, and randomly select whole clusters for the sample. 4)Census- an attempt to reach the entire population 5)Convenience- selects individuals easiest to reach 6)Voluntary response- people choose themselves by responding to a general appeal
chi-square tests df and expected counts
1)goodness of fit: df = # of categories - 1; expected counts: sample size times the hypothesized proportion in each category 2)homogeneity or association/independence: df = (# of rows - 1)(# of columns - 1); expected counts: (row total)(column total)/table total
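A minimal sketch of the two-way-table formulas above, using a made-up 2x3 table (the counts are invented for illustration):

```python
# hypothetical 2x3 two-way table of observed counts
table = [[30, 20, 10],
         [20, 30, 40]]

row_totals = [sum(row) for row in table]          # [60, 90]
col_totals = [sum(col) for col in zip(*table)]    # [50, 50, 50]
grand_total = sum(row_totals)                     # 150

df = (len(table) - 1) * (len(table[0]) - 1)       # (rows-1)(cols-1) = 2
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

print(df)              # 2
print(expected[0][0])  # (60)(50)/150 = 20.0
```

Note that the expected counts always add back up to the grand total, which is a quick arithmetic check.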
what is a residual?
residual = y - y hat. a residual measures the difference between the actual (observed) y-value in a scatterplot and the y-value predicted by the LSRL using its corresponding x-value. in the calculator: L3 = L2 - Y1(L1)
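The calculator recipe above (L3 = L2 - Y1(L1)) can be sketched in plain Python. The data points and the fitted line here are made up for illustration:

```python
xs = [1, 2, 3, 4]          # L1: observed x-values (hypothetical)
ys = [2.1, 3.9, 6.2, 7.8]  # L2: observed y-values (hypothetical)

def y_hat(x):
    """Hypothetical LSRL (plays the role of Y1): y hat = 0.1 + 1.95x."""
    return 0.1 + 1.95 * x

# L3: residual = observed y minus predicted y
residuals = [y - y_hat(x) for x, y in zip(xs, ys)]
print([round(r, 2) for r in residuals])  # [0.05, -0.1, 0.25, -0.1]
```

A positive residual means the line underpredicts that point; a negative one means it overpredicts.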
interpret LSRL "s"
s=__ is the standard deviation of the residuals. it measures the typical distance between the actual y-values (context) and their predicted y-values (context)
SOCS
shape- skewed left (mean< median); skewed right (mean > median); fairly symmetric (mean is about equal to the median) outliers- discuss them if there are obvious ones center- mean or median spread- range, IQR, or standard deviation note: also be on the lookout for gaps, clusters or other unusual features of the data set. make observations!
4-step process significance test
state: what hypotheses do you want to test, and at what significance level? define any parameters you use. plan: choose the appropriate inference method. check conditions. do: if the conditions are met, perform calculations. compute the test statistic and find the p-value. conclude: interpret the result of your test in the context of the problem
4-step process confidence intervals
state: what parameter do you want to estimate, and at what confidence level? plan: choose the appropriate inference method. check conditions. do: if the conditions are met, perform calculations. conclude: interpret your interval in the context of the problem
advantage of using a stratified random sample over an SRS
stratified random sampling guarantees that each of the strata will be represented. when strata are chosen properly, a stratified random sample will produce better (less variable/more precise) information than an SRS of the same size.
unbiased estimator
the data is collected in such a way that there is no systematic tendency to overestimate or underestimate the true value of the population parameter. (the mean of the sampling distribution equals the true value of the parameter being estimated)
goal of blocking benefit of blocking
the goal of blocking is to create groups of homogeneous experimental units. the benefit of blocking is the reduction of the effect of variation among the experimental units (context)
outlier rule
upper bound=q3 +1.5(IQR) lower bound=q1 -1.5(IQR) IQR=q3 -q1
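A sketch of the fences above in Python. The data are made up; note that quartile conventions vary slightly between calculators, textbooks, and `statistics.quantiles`, so borderline values can differ by method:

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]  # hypothetical data set

q1, _, q3 = statistics.quantiles(data, n=4)  # quartiles (default method)
iqr = q3 - q1
lower = q1 - 1.5 * iqr
upper = q3 + 1.5 * iqr

outliers = [x for x in data if x < lower or x > upper]
print(outliers)  # the 100 is flagged; nothing falls below the lower fence
```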
can we generalize the results to the population of interest?
yes, if a large random sample was taken from the same population we hope to draw conclusions about
factors that affect power
1)sample size: to increase power, increase sample size 2)increase α (alpha): a 5% test of significance will have a greater chance of rejecting the null than a 1% test 3)consider an alternative that is farther away from μ0: values of μ in the alternative that lie close to the hypothesized value are harder to detect than values of μ far from μ0.
two events are independent if...
P(B) = P(B|A) or P(B) = P(B|A^c) meaning: knowing that event A has occurred (or not occurred) doesn't change the probability that event B occurs
P(at least one)
P(at least one) =1- P(none)
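The complement rule above in one line of Python. The example numbers are made up (chance of at least one six in four rolls of a fair die):

```python
def p_at_least_one(p, n):
    """P(at least one success in n independent trials) = 1 - P(none) = 1 - (1-p)^n."""
    return 1 - (1 - p) ** n

# hypothetical example: at least one six in 4 rolls of a fair die
print(round(p_at_least_one(1/6, 4), 3))  # 0.518
```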
interpret standard deviation
SD measures spread by giving the "typical" or "average" distance that observations (context) are away from their (context) mean
interpret LSRL "SEb"
SEb measures the standard deviation of the estimated slope for predicting the y variable (context) from the x variable (context). SEb measures how far the estimated slope will be from the true slope, on average
describe the distribution or compare the distributions
SOCS! shape, outliers, center, spread. only discuss outliers if there are obvious outliers present. be sure to address shape, center, and spread in context! if it says "compare" YOU MUST USE comparison phrases like "is greater than" or "is less than" for center and spread
interpret r^2
_% of the variation in y (context) is accounted for by the LSRL of y (context) on x (context). OR _% of the variation in y (context) is accounted for by using the linear regression model with x (context) as the explanatory variable
why use a control group?
a control group gives the researchers a comparison group to be used to evaluate the effectiveness of the treatment(s). (context) (gauge the effect of the treatment compared to no treatment at all)
experiment or observational study?
a study is an experiment ONLY if researchers IMPOSE a treatment upon the experimental units. in an observational study researchers make no attempt to influence the results
linear transformations
adding "a" to every member of a data set adds "a" to the measures of position, but does not change the measures of spread or the shape. multiplying every member of a data set by "b" multiplies the measures of position by "b" and multiplies most measures of spread by |b|, but does not change the shape.
SRS
an SRS (simple random sample) is a sample taken in such a way that every set of n individuals has an equal chance to be the sample actually selected
DOES __ cause ___?
association is NOT causation! an observed association, no matter how strong, is not evidence of causation. only a well-designed, controlled experiment can lead to conclusions of cause and effect.
explain a p-value
assuming that the null is true (context) the p-value measures the chance of observing a statistic (or difference in statistics)(context) as large as or larger than the one actually observed.
interpret r
correlation measures the strength and direction of the linear relationship between x and y. r is always between -1 and 1. close to zero = very weak; close to -1 or 1 = stronger; exactly 1 or -1 = perfectly straight line; positive r = positive correlation; negative r = negative correlation
binomial distribution (calculator usage)
exactly 5: P(X = 5) = binompdf(n,p,5) at most 5: P(X ≤ 5) = binomcdf(n,p,5) less than 5: P(X < 5) = binomcdf(n,p,4) at least 5: P(X ≥ 5) = 1 - binomcdf(n,p,4) more than 5: P(X > 5) = 1 - binomcdf(n,p,5) remember to define X, n and p
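Plain-Python stand-ins for the calculator commands above, so you can check a binompdf/binomcdf answer by hand. The values n = 10 and p = 0.3 are made up for illustration:

```python
import math

def binompdf(n, p, k):
    """P(X = k) for X ~ Binomial(n, p): C(n,k) p^k (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def binomcdf(n, p, k):
    """P(X <= k) for X ~ Binomial(n, p): sum of the pdf from 0 to k."""
    return sum(binompdf(n, p, i) for i in range(k + 1))

n, p = 10, 0.3  # hypothetical trial count and success probability
print(round(binompdf(n, p, 5), 4))      # exactly 5
print(round(binomcdf(n, p, 5), 4))      # at most 5
print(round(1 - binomcdf(n, p, 4), 4))  # at least 5
```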
interpret LSRL slope "b"
for every one unit change in the x variable (context) the y variable (context) is predicted to increase/decrease by __ units (context)
finding the sample size (for a given margin of error)
for the mean: m = z*(σ/√n) for the proportion: m = z*√(p(1-p)/n) if an estimate of p is not given, use 0.5 for p. solve for n.
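Solving the two margin-of-error formulas above for n, with rounding up since n must be a whole number. The numbers in the examples (z* = 1.96 for 95% confidence, σ = 15, margins of 2 and 0.03) are made-up illustrations:

```python
import math

def n_for_mean(m, z, sigma):
    """Smallest n with z*sigma/sqrt(n) <= m, i.e. n >= (z*sigma/m)^2."""
    return math.ceil((z * sigma / m) ** 2)

def n_for_proportion(m, z, p=0.5):
    """Smallest n with z*sqrt(p(1-p)/n) <= m; use p = 0.5 if no estimate."""
    return math.ceil(z**2 * p * (1 - p) / m**2)

print(n_for_mean(2, 1.96, 15))       # margin 2 with sigma = 15 -> 217
print(n_for_proportion(0.03, 1.96))  # margin 3 points, p = 0.5 -> 1068
```

Always round up: rounding down would give a margin of error slightly larger than requested.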
interpreting a confidence interval (not a confidence level)
i am __% confident that the interval from __ to __ captures the true __.
interpreting a confidence level (the meaning of 95% confident)
intervals produced with this method will capture the true population ___ in about 95% of all possible samples of this same size from the same population
two sample t-test phrasing hints, the null hypothesis and alternative hypothesis, conclusion
key phrase: DIFFERENCE IN THE MEANS the null: μ1 - μ2 = 0 OR μ1 = μ2 the alternative: μ1 - μ2 < 0, > 0, or ≠ 0 μ1 - μ2 = the difference between the mean __ for all __ and the mean __ for all __. we do/(do not) have enough evidence at the 0.05 level to conclude the difference between the mean __ for all __ and the mean __ for all __ is __.
paired t-test phrasing hints, the null hypothesis and alternative hypothesis, conclusion
key phrase: MEAN DIFFERENCE the null: μ of the differences = 0 the alternative: μ of the differences < 0, > 0, or ≠ 0 μ of the differences = the mean difference in __ for all __. we do/(do not) have enough evidence at the 0.05 level to conclude that the mean difference in __ for all __ is __.
inference for regression (conditions)
linear: the true relationship between the variables is linear independent: independent observations; 10% condition if sampling without replacement normal: responses vary normally around the regression line for all x-values equal variance: the spread of the responses around the regression line is the same for all x-values random: data from a random sample or randomized experiment
mean and standard deviation of a discrete random variable
mean (expected value): μx = Σxipi (multiply and add across the table) standard deviation: σx = √(Σ(xi - μx)²pi) (square root of the sum of (each x value - the mean)² times its probability)
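The formulas above applied to a made-up probability table (the x-values and probabilities are invented for illustration):

```python
import math

# hypothetical probability distribution table for a discrete random variable
xs = [0, 1, 2, 3]
ps = [0.1, 0.2, 0.4, 0.3]

mu = sum(x * p for x, p in zip(xs, ps))  # multiply and add across the table
sigma = math.sqrt(sum((x - mu) ** 2 * p for x, p in zip(xs, ps)))

print(mu)              # 1.9
print(round(sigma, 3)) # sqrt(0.89), about 0.943
```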
mean and standard deviation of a difference of two random variables
mean of a difference of 2 RVs: μX-Y = μX - μY stdev of a difference of 2 independent RVs: σX-Y = √(σX² + σY²) stdev of a difference of 2 dependent RVs: cannot be determined because it depends on how strongly they are correlated
mean and standard deviation of a sum of two random variables
mean of a sum of 2 RVs: μX+Y = μX + μY stdev of a sum of 2 independent RVs: σX+Y = √(σX² + σY²) stdev of a sum of 2 dependent RVs: cannot be determined because it depends on how strongly they are correlated
mean and standard deviation of a binomial random variable
mean: μx = np standard deviation: σx = √(np(1-p))
using normalcdf and invnorm (calculator tips)
normalcdf(min, max, mean, standard deviation) invnorm(area to the left as a decimal, mean, standard deviation)
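Plain-Python stand-ins for the two calculator commands above, so answers can be double-checked without the TI. This is a sketch using two standard tricks that are not in the source: the error function for the normal cdf, and bisection for the inverse:

```python
import math

def _phi(x, mean, sd):
    """Area to the left of x under N(mean, sd), via the error function."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

def normalcdf(lo, hi, mean, sd):
    """Area under N(mean, sd) between lo and hi (like the calculator command)."""
    return _phi(hi, mean, sd) - _phi(lo, mean, sd)

def invnorm(area, mean, sd):
    """Value with `area` to its left under N(mean, sd), found by bisection."""
    lo, hi = mean - 10 * sd, mean + 10 * sd
    for _ in range(100):
        mid = (lo + hi) / 2
        if _phi(mid, mean, sd) < area:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(normalcdf(-1, 1, 0, 1), 4))  # about 0.6827 (the 68 in 68-95-99.7)
print(round(invnorm(0.975, 0, 1), 2))    # about 1.96
```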
inference for counts (chi-square tests)(conditions)
random: data from random sample(s) or randomized experiment large sample size: all expected counts are at least 5 independent: independent observations and independent samples/groups; 10% condition if sampling without replacement
inference for proportions (conditions)
random: data from random sample(s) or randomized experiment normal: at least 10 successes and failures (in both groups, for a two sample problem) independent: independent observations and independent samples/groups; 10% condition if sampling without replacement
inference for means (conditions)
random: data from random sample(s) or randomized experiment normal: population distribution is normal or large sample(s) (n ≥ 30, or n1 ≥ 30 and n2 ≥ 30) independent: independent observations and independent samples/groups; 10% condition if sampling without replacement
interpreting expected value/mean
the mean/expected value of a random variable is the long-run average outcome of a random phenomenon carried out a very large number of times
interpreting probability
the probability of any outcome of a random phenomenon is the proportion of times the outcome would occur in a very long series of repetitions. probability is a long-term relative frequency.
bias
the systematic favoring of certain outcomes due to flawed sample selection, poor question wording, undercoverage, nonresponse, etc.
complementary events
two mutually exclusive events whose union is the sample space
type 1 error, type 2 error and power
type 1: rejecting the null when the null is actually true type 2: failing to reject the null when the null should be rejected power: probability of rejecting the null when the null should be rejected
extrapolation
using a LSRL to predict outside the domain of the explanatory variable (can lead to ridiculous conclusions if the current linear trend does not continue)
carrying out a two-sided test from a confidence interval
we do/(do not) have enough evidence to reject the null hypothesis in favor of the alternative hypothesis at the α = 0.05 level because __ falls outside/(inside) the 95% CI. α = 1 - confidence level
why large samples give more trustworthy results..(when collected appropriately)
when collected appropriately, large samples yield more precise results than small samples because in a large sample the values of the sample statistic tend to be closer to the true population parameter
what is an outlier?
when given 1-variable data: an outlier is any value that falls more than 1.5(IQR) above q3 or below q1 regression outlier: any value that falls outside the pattern of the rest of the data
interpret LSRL y-intercept "a"
when the x variable (context) is zero, the y variable (context) is estimated to be __ (put value here)
interpret LSRL "y hat"
y hat is the "estimated" or "predicted" y-value (context) for a given x-value (context)
interpret a z-score
z=(value - mean)/(standard deviation) a z-score describes how many standard deviations a value or statistic (x, x bar, p hat, etc.) falls away from the mean of the distribution and in what direction. The further the z-score is away from zero the more "surprising" the value of the statistic is.
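The z-score formula above as a one-liner. The example numbers are made up (a score of 86 in a class with mean 78 and standard deviation 4):

```python
def z_score(value, mean, sd):
    """How many standard deviations `value` lies from the mean, and in
    which direction (positive = above the mean, negative = below)."""
    return (value - mean) / sd

print(z_score(86, 78, 4))  # 2.0: the score is 2 SDs above the mean
```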