Chapter 3


Chi-Square Calculating Expected Values

the expected frequency for a cell is P(event) × N (the number of trials). Referring to the picture, the expected value for cell a is (a + c) × [(a + b) / N]: you take the column total multiplied by the row total and divide by the total number of observations, N
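
As an illustration (not part of the card), here is a minimal Python sketch that computes the expected frequency for every cell of a 2×2 table laid out as [[a, b], [c, d]]; the function name and example numbers are hypothetical.

```python
# Expected frequencies for a 2x2 contingency table [[a, b], [c, d]]:
# expected cell value = (row total * column total) / N
def expected_2x2(a, b, c, d):
    n = a + b + c + d
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    return [[row_totals[i] * col_totals[j] / n for j in range(2)]
            for i in range(2)]

# Example: observed table [[10, 20], [30, 40]]
print(expected_2x2(10, 20, 30, 40))  # [[12.0, 18.0], [28.0, 42.0]]
```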

ANOVA: When There Are Two Independent Variables: The Two-Factor ANOVA

a factor is one of the variables, and a factor can have multiple levels. There may be situations where we have a two-way, or two-factor, ANOVA; here we are interested in comparing the means of the outcome among the different levels of each factor

Confidence Interval

establishes a range and specifies the probability that this range encompasses the true population parameter

Calculating the SE of the Difference between Means

first, we get the pooled estimate of the SD

Determining Association

first, we must quantify both variables. One of the most common measures of association is the correlation coefficient, r

Fisher's Test

gives the exact probability of getting a table with values like those obtained, or even more extreme ones; the calculations are unwieldy to do by hand
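
In practice the calculation is left to software. A minimal sketch using SciPy's fisher_exact; the 2×2 table values are invented for illustration:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = treated/control, columns = outcome yes/no
table = [[8, 2],
         [1, 9]]
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)  # exact two-sided probability of a table this extreme or more so
```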

Generalized Estimating Equations

technique for regressions on correlated data, such as longitudinal studies

Z-Score

tells us how many SDs from the mean a particular x score is. The probability of an x value falling above or below a given value equals the corresponding area under the curve. In general, a Z-score is the distance between some value and its mean, divided by an appropriate standard error
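
A small Python sketch of both ideas, with made-up mean and SD values; norm.cdf gives the area under the standard normal curve below a given Z:

```python
from scipy.stats import norm

mean, sd, x = 100.0, 15.0, 130.0   # hypothetical values
z = (x - mean) / sd                # how many SDs x lies from the mean
p_below = norm.cdf(z)              # area under the curve below x
p_above = 1 - p_below              # area under the curve above x
print(z, p_below, p_above)         # 2.0, ~0.977, ~0.023
```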

r² (Coefficient of Determination)

tells you what proportion of the variance in one variable is explained by the other variable

Median

that value above which 50% of the values lie and below which 50% of the values lie; it is the middle value, or the 50th percentile

CIs around the Difference between Two Means

the 95% confidence limits around the difference between means are given by (see picture): the difference between the two sample means, plus or minus an appropriate t value times the SE of the difference. Here df is the degrees of freedom, and .95 means that we look up the t value pertaining to those degrees of freedom at .95 probability. The degrees of freedom when we are looking at two samples are nₓ + nᵧ - 2. This is because we have lost one degree of freedom for each of the two means we have calculated, so our total degrees of freedom is (nₓ - 1) + (nᵧ - 1) = nₓ + nᵧ - 2
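
The formula referenced by "(see picture)" is not reproduced in the card; assuming the standard form described in the text, it can be written as:

$$ (\bar{x} - \bar{y}) \pm t_{df,\;0.95} \times SE_{\bar{x}-\bar{y}}, \qquad df = n_x + n_y - 2 $$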

Kruskal-Wallis Test to Compare Several Groups

the analysis of variance is valid when the variable of interest is continuous, comes from a normal distribution (the familiar bell-shaped curve), and the variances within each of the groups being compared are essentially equal. Often, however, we must deal with situations where we want to compare several groups on a variable that does not meet all of the above conditions. This might be a case where we can say one person is better than another, but we can't say exactly how much better. In such a case, we would rank people and compare the groups by using the Kruskal-Wallis test to determine whether it is likely that all the groups come from a common population. This test is analogous to the one-way analysis of variance, but instead of using the original scores, it uses the rankings of the scores; it is called a nonparametric test
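
A minimal sketch of running the test in SciPy on three hypothetical groups of scores; the data are invented for illustration:

```python
from scipy.stats import kruskal

# Hypothetical scores for three groups (ordinal or non-normal data)
group_a = [3, 5, 4, 6, 2]
group_b = [7, 9, 8, 6, 7]
group_c = [1, 2, 3, 2, 4]

h_statistic, p_value = kruskal(group_a, group_b, group_c)
print(h_statistic, p_value)  # a small p suggests the groups do not come from a common population
```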

How high is high with regard to r?

the answer to this question depends upon the field of application as well as on many other factors, including the precision of measurement and the fact that there may be different relationships between the two variables in different groups of people

Analysis of Variance: Comparison Among Several Groups

the appropriate technique for analyzing continuous variables when there are three or more groups to be compared is the analysis of variance, commonly referred to as ANOVA

the Connection between Linear Regression and the Correlation Coefficient

the correlation coefficient and the slope of the linear regression line are related by the formulas: (see picture) where sₓ is the SD of the x variable, sᵧ is the SD of the y variable, b is the slope of the line, and r is the correlation coefficient
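
The formulas referenced by "(see picture)" are not shown; assuming the standard relationship implied by the definitions in the card, they are:

$$ r = b \times \frac{s_x}{s_y} \qquad \text{equivalently} \qquad b = r \times \frac{s_y}{s_x} $$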

Determining the Conclusion of a Z-Test for Comparing Two Proportions

the critical value is the minimum value of the test statistic that we must obtain in order to reject the NH at a given level of significance. If the Z value is greater than this critical value, then we reject the NH

t Distribution

the distribution of t has been tabulated, and from the tables we can obtain the probability of getting a value of t as large as the one we actually obtained, under the assumption that our null hypothesis (of no difference between means) is true. If this probability is small (i.e., if it is unlikely that by chance alone we would get a value of t that large if the null hypothesis were true), we reject the null hypothesis and accept the alternate hypothesis that there really is a difference between the means of the populations from which we have drawn the two samples

Least Squares Linear Regression

the dᵢs are the distances from the points to the line; it is the sum of these squared distances that is smaller for this line than it would be for any other line we might draw

Assumptions of the ANOVA

the extent of the variability of individuals within groups is the same for each of the groups, so we can pool the estimates of the individual within-group variances to obtain a more reliable estimate of the overall within-groups variance. If there is as much variability of individuals within the groups as there is variability of means between the groups, the means probably come from the same population. This would be consistent with the hypothesis of no true difference among means; that is, we could not reject the null hypothesis of no difference among means

Bell Curve Distribution

68% of values fall within 1 SD of the mean, 95% within 2 SD, and 99% within 3 SD
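
These percentages are approximations for the normal distribution; a quick numerical check with SciPy (the exact coverage for ±3 SD is about 99.7%):

```python
from scipy.stats import norm

for k in (1, 2, 3):
    # area under the standard normal curve within +/- k SDs of the mean
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(k, round(coverage, 4))   # ~0.6827, ~0.9545, ~0.9973
```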

Size of the Correlation Coefficient and Statistical Significance

A very low correlation coefficient may be statistically significant if the sample size is large enough. What seems to be a high correlation coefficient may not reach statistical significance if the sample size is too small. Statistical significance tells you that the correlation you observed is not likely due to chance, but it does not tell you anything about the strength of the association

Z-Test for Comparing Two Proportions

H₀: P₁ = P₂, or P₁ - P₂ = 0
Hₐ: P₁ ≠ P₂, or P₁ - P₂ ≠ 0
where P₁ is the proportion of the treated population in category A and P₂ is the proportion of the control population in category A

Performing a t-Test

H₀: mA = mB
Hₐ: mA ≠ mB

Comparison between Two Groups

one of the most common problems that arises is the need to compare two groups on some dimension

Philosophical Statistics

a reflection of life in two important respects:
- as in life, we can never be certain of anything, but in statistics we can put a probability figure on the degree of our uncertainty
- all is a trade-off: in statistics, between certainty and precision, or between two kinds of error; in life, well, fill in your own trade-offs

Kappa Statistic

a statistic that tells us the extent of the agreement between the two experts above and beyond chance agreement. The proportion of agreement expected by chance for a cell is (row total × column total) / N; compute this for both cells a and d, add the two numbers together, and divide by N to get the proportion of agreement expected by chance alone. The proportion of observed agreement is simply (a + d) / N. The Kappa statistic you end up with should be between 0 and 1, and the closer it is to 1, the stronger the agreement
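
A minimal sketch of the calculation described above, assuming a 2×2 agreement table [[a, b], [c, d]] and the standard definition Kappa = (observed agreement - chance agreement) / (1 - chance agreement); the function name and example numbers are made up:

```python
def kappa_2x2(a, b, c, d):
    n = a + b + c + d
    # chance-expected agreement for cells a and d: (row total * column total) / N, summed, then / N
    chance_a = (a + b) * (a + c) / n
    chance_d = (c + d) * (b + d) / n
    p_chance = (chance_a + chance_d) / n
    # observed agreement: proportion of cases in which the two experts agree
    p_observed = (a + d) / n
    return (p_observed - p_chance) / (1 - p_chance)

print(kappa_2x2(40, 10, 5, 45))  # closer to 1 means stronger agreement beyond chance
```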

Chi-Square for 2x2 Tables

a statistical method to determine whether the results of an experiment may have arisen by chance. Null hypothesis: there is no difference; alternative hypothesis: there is a difference (or a higher/lower value). We record our data in a contingency table in which each possibility is classified in a cell of the table

Standard Deviation

a type of measure related to the average distance of the scores from their mean value. The population SD is denoted σ; the square of the SD is the variance, and the population variance is denoted σ². If we are estimating from a sample and there are a large number of observations, the standard deviation can be estimated from the range of the data: dividing the range by 6 provides a rough estimate of the standard deviation (6 SDs cover 99% of the data). The SD is very useful in deciding whether a laboratory finding is normal, in the sense of "healthy": generally, a value that is more than 2 SD away from the mean is suspect, and perhaps further tests need to be carried out
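
A short Python sketch contrasting the sample SD with the rough range/6 estimate mentioned above; the data are hypothetical:

```python
import numpy as np

values = np.array([4.1, 5.0, 5.6, 4.8, 6.2, 5.3, 4.5, 5.9])  # hypothetical lab values
sample_sd = np.std(values, ddof=1)              # usual sample standard deviation
rough_sd = (values.max() - values.min()) / 6.0  # rough estimate: range divided by 6
print(sample_sd, rough_sd)
```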

Dichotomous Variable

a variable that has only two outcomes, usually a yes-or-no question, quantified as 1 and 0

Applications of Correlation Coefficient (r)

applies when the basic relationship between the two variables is linear. When the variables are continuous, the calculated correlation is the Pearson product-moment correlation (parametric). If the variables are ranked and ordered according to rank, we calculate the Spearman rank-order correlation (nonparametric)
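
A minimal SciPy sketch computing both coefficients on invented paired data:

```python
from scipy.stats import pearsonr, spearmanr

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]      # hypothetical continuous variable
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3]     # hypothetical continuous variable

r_pearson, p_pearson = pearsonr(x, y)        # parametric, assumes a linear relationship
rho_spearman, p_spearman = spearmanr(x, y)   # nonparametric, based on ranks
print(r_pearson, rho_spearman)
```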

What We're Asking with Chi-Square Tests

Are the two categories of classification independent? If they are independent, what frequencies would we expect in each of the cells? How different are our observed frequencies from the expected ones? How do we measure the size of the difference?

Z-Score for Difference between Two Means

the hypothesized population difference, mA - mB, is commonly taken to be 0

t-Test for the Difference between Means of Two Independent Groups: Principles

comparing two groups when the measure of interest is a continuous variable. H₀: the mean of the population from which sample A was drawn is the same as that for sample B; Hₐ: the means of the two populations are different. We have drawn samples on the basis of which we will make inferences about the populations from which they came, so we are subject to the same kinds of type I and type II errors

Causal Pathways

if we do get a significant correlation, we then ask what situations could be responsible for it. The existence of a correlation between two variables does not necessarily imply causation. Correlations may arise because one variable is the partial cause of another, or because the two correlated variables have a common cause. Other factors, such as sampling and the variation in the two populations, also affect the size of the correlation coefficient; thus, care must be taken in interpreting these coefficients

Confidence Interval

if we were to take a large number of samples from the population and calculate the 95% confidence limits for each of them, 95% of the intervals bounded by these limits would contain the true population mean; however, 5% would not contain it. Of course, in real life we take only one sample and construct confidence intervals from it. We can never be sure whether the interval calculated from our particular sample is one of the 5% of such intervals that do not contain the population mean; the most we can say is that we are 95% confident it does contain it. If an infinite number of independent random samples were drawn from the population of interest (with replacement), then 95% of the confidence intervals calculated from the samples (with mean x̅ and SE) would encompass the true population mean m

Standard Error of the Mean (SE)

if we were to take many, many samples and compute many sample means, the sample means would have a normal distribution in which most would be relatively close to the true population mean; this is called the sampling distribution. The standard deviation of this distribution of sample means is called the standard error
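
A minimal sketch of the usual estimate SE = SD / √n on hypothetical data:

```python
import numpy as np

sample = np.array([12.0, 15.0, 14.0, 10.0, 13.0, 16.0, 11.0, 14.0])  # hypothetical sample
sd = np.std(sample, ddof=1)          # sample standard deviation
se = sd / np.sqrt(len(sample))       # standard error of the mean
print(sample.mean(), se)
```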

Matched Pair t-Test

if you have a situation where the scores in one group correlate with the scores in the other group, you cannot use the regular t-test, since that test assumes the two groups are independent. This situation arises when you take two measures on the same individual. H₀: mean difference = 0; Hₐ: mean difference ≠ 0
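
A minimal before-and-after sketch with SciPy's paired test; the measurements are invented:

```python
from scipy.stats import ttest_rel

before = [140, 135, 150, 145, 160, 155]  # hypothetical measurements on the same subjects
after  = [132, 130, 148, 139, 150, 149]

t_statistic, p_value = ttest_rel(before, after)  # tests H0: mean difference = 0
print(t_statistic, p_value)
```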

Yates' Correction

in actual usage, a correction is applied for 2×2 tables; the calculation is done using the formula (see picture). The chi-square test should not be used if the numbers in the cells are too small. When the total N is greater than 40, use the chi-square test with Yates' correction. When the total N is between 20 and 40 and the expected frequency in each of the four cells is 5 or more, use the corrected chi-square test. If the smallest expected frequency is less than 5, or if N is less than 20, use Fisher's test
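
A sketch of how these decision rules might be applied in code, using SciPy's chi2_contingency (which applies Yates' correction when correction=True) and fisher_exact; the helper function is illustrative (not a library function), its rules are a simplified version of the card's, and the table is hypothetical:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def choose_and_run_2x2_test(table):
    """Simplified version of the card's decision rules (illustrative only)."""
    table = np.asarray(table)
    n = table.sum()
    # expected frequencies under the hypothesis of independence
    _, _, _, expected = chi2_contingency(table, correction=False)
    if n < 20 or expected.min() < 5:
        _, p = fisher_exact(table)                    # small samples: Fisher's exact test
        return p
    chi2, p, dof, _ = chi2_contingency(table, correction=True)  # Yates-corrected chi-square
    return p

print(choose_and_run_2x2_test([[12, 8], [5, 15]]))   # p-value
```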

Pooled Estimate of the Variance

in some cases, we know or assume that the variances of the two populations are equal to each other and that the variances we calculate from the samples we have drawn are both estimates of a common variance. In such a situation, we would want to pool these estimates to get a better estimate of the common variance. We denote this pooled estimate as s²pooled = s²p, and we calculate the standard error of the difference between means as shown (see picture)
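
The formula behind "(see picture)" is not reproduced in the card; assuming the standard pooled-variance expressions, they are:

$$ s_p^2 = \frac{(n_x - 1)s_x^2 + (n_y - 1)s_y^2}{n_x + n_y - 2}, \qquad SE_{\bar{x}-\bar{y}} = \sqrt{s_p^2\left(\frac{1}{n_x} + \frac{1}{n_y}\right)} $$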

Interaction between Two Independent Variables

interaction between two independent variables refers to differences in the effect of one variable depending on the level of the second variable. There may not be a significant effect when all groups are lumped together, but if we look at the effects separately for each group, we may discover an interaction between the two factors

Summary So Far

investigation of a scientific issue often requires statistical analysis, especially where there is variability with respect to the characteristics of interest. The variability may arise from two sources: the characteristic may be inherently variable in the population, and/or there may be error of measurement. We have pointed out that in order to evaluate a program or a drug, to compare two groups on some characteristic, or to conduct a scientific investigation of any issue, it is necessary to quantify the variables

Regression Lines

lines that seem to best fit the data points have the form Y = a + bX
- Y is the dependent variable
- X is the independent variable
- Y is a function of X
- a is the intercept, where the line crosses the Y axis
- b is the slope, the rate of change in Y for a unit change in X
if the slope is 0, we have a straight line parallel to the x axis; it also means that we cannot predict Y from a knowledge of X, since there is no relationship between Y and X. If we have a perfect correlation, where all the data points fall exactly on the regression line, we know exactly how Y changes when X changes and can predict Y from X with no error. If we merely have a pattern (for example, Y increases as X increases, or vice versa), we cannot predict Y perfectly because the points are scattered around the line we have drawn; we can, however, put confidence limits around our prediction. First we must determine the form of the line to draw through the points, which means estimating the values of the intercept and slope; this is done by finding the best-fit line (see the sketch below)
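
A minimal least-squares sketch with NumPy, fitting Y = a + bX to invented points and reporting the quantity the fit minimizes:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical independent variable
y = np.array([2.2, 4.1, 5.8, 8.3, 9.9])   # hypothetical dependent variable

b, a = np.polyfit(x, y, deg=1)             # slope b and intercept a of the best-fit line
y_hat = a + b * x                          # predicted values on the regression line
residual_ss = np.sum((y - y_hat) ** 2)     # sum of squared distances minimized by the fit
print(a, b, residual_ss)
```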

t Distribution

looks like a normal Z distribution. Each t distribution is different, depending on the sample size. As the sample size approaches infinity, the t distribution becomes exactly like the Z distribution, but the differences between Z and t get larger as the sample size gets smaller. It is always safe to use the t distribution

Standardized Normal Distribution

mean = 0; SD = 1; total area under the curve = 1

Certainty vs. Pin Down in CIs

note that for a given sample size, we trade off degree of certainty for size of the interval. We can be more certain that our true mean lies within a wider range, but if we want to pin down the range more precisely, we are less certain about it. To achieve more precision and maintain a high probability of being correct in estimating the range, it is necessary to increase the sample size. The main point is that when you report a sample mean as an estimate of a population mean, it is most desirable to report the confidence limits

Correlation Coefficient (r)

a number derived from the data that can vary between -1 and 1
- r = 0 means no correlation
- r = ±1 means a perfect correlation, which allows us to predict values exactly; this only happens in deterministic models
if both variables move in the same direction, there is a positive correlation; if they move in opposite directions, there is a negative correlation

Assumptions of Linear Regression Models

observations are independent of each other, i.e., uncorrelated; longitudinal studies are a different story, because repeated observations on the same subjects are always correlated

Degrees of Freedom

related to sample size. If we calculate the mean of a sample of, say, three values, we would have the "freedom" to vary two of them any way we liked after knowing what the mean is, but the third would be fixed in order to arrive at the given mean; so we have only 2 "degrees of freedom." In general, degrees of freedom are based on the number of parameters estimated

Mean Square

the sum of squares divided by the degrees of freedom. For between-groups, it is the variation of the group means around the grand mean, while for within-groups, it is the pooled estimate of the variation of the individual scores around their respective group means. The within-groups mean square is also called the error mean square. An important point is that the square root of the error mean square is the pooled estimate of the within-groups standard deviation

t Statistic

suppose we are interested in sample means and we want to calculate a Z score. We don't know what the population standard deviation is, but if our sample is very large, we can get a good estimate of σ by calculating the SD from our sample and then getting the SE as usual: SE = SD / √n. Often, however, our sample is not large enough; we can still get a standardized score by calculating a value called Student's t (see picture). It looks just like Z; the only difference is that we calculate it from the sample, and the sample is small
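
The formula behind "(see picture)" is not shown; assuming the standard one-sample form implied by the text, it is:

$$ t = \frac{\bar{x} - \mu}{s / \sqrt{n}} $$

where s is the SD calculated from the sample.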

General Approach to t-Test for the Difference between Means of Two Independent Groups: Principles

the general approach is as follows. We know there is variability of the scores within group A around the mean for group A and within group B around the mean for group B, simply because even within a given population people vary. What we want to find out is whether the variability of the two sample means around the grand mean of all the scores is greater than the variability of the scores within the groups around their own means. If there is as much variability within the groups as between the groups, the groups probably come from the same population. t = the difference between the two sample means divided by an appropriate standard error (the SE of the difference between two means)
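
A minimal two-sample sketch with SciPy on invented data; ttest_ind with equal_var=True uses the pooled SE described in these cards:

```python
from scipy.stats import ttest_ind

group_a = [23, 25, 28, 22, 26, 24]   # hypothetical scores
group_b = [30, 29, 33, 31, 28, 32]

t_statistic, p_value = ttest_ind(group_a, group_b, equal_var=True)
print(t_statistic, p_value)          # a small p suggests the population means differ
```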

Best-Fit Line

the line that fits the points best has the following characteristics: - if we take each of the data points and calculate its vertical distance from the line and then square that distance, the sum of those squared distances will be smaller than the sum of such squared distances from any other line we might draw --- this is called the least-squares fit

Variability

the mean does not provide an adequate description of a population; what we also need is some measure of variability. Two groups can have the same mean but be very different. The most commonly used index of variability is the standard deviation

Z-Score for Sample Means

the numerator is the distance of the sample mean from the population mean, and the denominator is the SE

Confidence Limits

the outer boundaries that we calculate and about which we can say: we are 95% confident that these boundaries or limits include the true population parameter. The interval between these limits is called the confidence interval

F Ratio

the ratio of the between-groups variance to the within-groups variance. Values of the F distribution appear in tables in many statistical texts; if the value obtained from our experiment is greater than the tabled critical value, we can reject the hypothesis of no difference. It is rarely calculated by hand. A significant F does not specify where the difference lies; it just tells us that there is a difference somewhere. We can handle this with the Bonferroni procedure
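
A minimal one-way ANOVA sketch with SciPy on three invented groups:

```python
from scipy.stats import f_oneway

group_1 = [5.1, 4.9, 5.4, 5.0, 5.2]   # hypothetical measurements
group_2 = [5.8, 6.1, 5.9, 6.3, 6.0]
group_3 = [5.0, 5.3, 5.1, 4.8, 5.2]

f_statistic, p_value = f_oneway(group_1, group_2, group_3)
print(f_statistic, p_value)           # a significant F: a difference exists somewhere among the means
```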

Standard Deviation vs. Standard Error

the standard deviation is used to describe the dispersion or variability of the scores; the standard error is used to draw inferences about the mean of the population from which we have a sample

Standard Error of the Difference between Two Means

the standard deviation of a theoretical distribution of differences between two means. If we take two samples (possibly from different populations, e.g., men vs. women, or different age groups), calculate each sample's mean, find the difference between the means, and do this many, many times, we create a distribution of differences. These difference scores would be normally distributed, and their mean would be the true average difference between the populations. The standard deviation of this distribution is called the standard error of the difference between two means (where σ² = variance)
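
The card cites σ² but omits the formula; assuming the standard expression for independent samples, it is:

$$ SE_{\bar{x}_1 - \bar{x}_2} = \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}} $$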

Mode

the value that occurs with the greatest frequency

Between-Groups Variance

the variability of the group means around the grand mean of all the data

Sampling Distribution

a theoretical construct: the distribution of sample means is normal; the mean of these sample means is equal to the true population mean (m); and its SD is equal to the SD of the original individual values divided by the square root of the number of people in the sample

Critical Values of F

there are different critical values of F, depending on how many groups are compared and on how many scores there are in each group. To read the tables of F, one must know the two values of degrees of freedom (df). The df corresponding to the between-groups variance, which is the numerator of the F ratio, is equal to k - 1, where k is the number of groups. The df corresponding to the denominator of the F ratio, which is the within-groups variance, is equal to k(n - 1), that is, the number of groups times the number of scores in each group minus one

Bonferroni Procedure

this is one way to handle the problem of multiple comparisons. If we make x comparisons, the probability that none of the x p values falls below .05 when the null hypothesis of equal means is really true is at least 1 - (x × .05). That means there is a probability of up to x × .05 that at least one p value will reach .05 significance by chance alone, even if the treatments really do not differ. To get around this, we divide the chosen overall significance level by the number of two-way comparisons to be made, consider this value to be the significance level for any single comparison, and reject the null hypothesis of no difference only if it achieves this new significance level. The procedure does not require a prior F test. p values should be reported so that the informed reader may evaluate the evidence
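
A minimal sketch of the adjustment on invented p-values (the overall significance level and the comparisons are hypothetical):

```python
overall_alpha = 0.05
p_values = [0.012, 0.030, 0.200]              # hypothetical p-values from 3 pairwise comparisons

per_comparison_alpha = overall_alpha / len(p_values)   # Bonferroni-adjusted threshold
for p in p_values:
    print(p, "reject H0" if p < per_comparison_alpha else "do not reject H0")
```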

Correlation Is Not Causation

this is the most important point and one that is often ignored: correlation tells you nothing about the directionality of the relationship, nor about whether some third factor is influencing both variables

"Approximately"

to achieve .95 probability, you don't multiply the SE by exactly 2; the factor 2 is a rounding for convenience. The exact multiplying factor depends on how large the sample is and on the degrees of freedom; it can be looked up in t tables
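
A quick SciPy check of the exact multiplier for a two-sided 95% interval at a few hypothetical degrees of freedom:

```python
from scipy.stats import t

for df in (5, 10, 30, 100):
    multiplier = t.ppf(0.975, df)    # two-sided 95% multiplier for the SE
    print(df, round(multiplier, 3))  # approaches 1.96 as df grows
```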

Standard Error of a Proportion

to calculate SEp, we first calculate the standard deviation of a proportion and divide it by the square root of n. For 99% confidence limits, we multiply the standard error of the proportion by 2.58 (in the worked example, this gives the interval .59 to .80). The multiplier is the Z value that corresponds to .95 probability for 95% confidence limits or .99 probability for 99% confidence limits
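
A minimal sketch of the calculation with a hypothetical proportion and sample size (not the numbers behind the card's example):

```python
import math

p, n = 0.70, 100                               # hypothetical observed proportion and sample size
se_p = math.sqrt(p * (1 - p)) / math.sqrt(n)   # SD of a proportion divided by sqrt(n)

ci_95 = (p - 1.96 * se_p, p + 1.96 * se_p)
ci_99 = (p - 2.58 * se_p, p + 2.58 * se_p)
print(se_p, ci_95, ci_99)
```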

Chi-Square Is the difference by chance?

to determine this, we calculate a value called chi-square (χ²). This is obtained by taking, for each cell, the observed value minus the expected value, squaring that difference, and dividing by the expected value; when this is done for each cell, the four resulting quantities are added together to give a number called chi-square. O is the observed frequency and e is the expected frequency. This number is a statistic that has a known distribution; based on that distribution, we know how likely it is that we could have obtained a value as large as the one we actually obtained strictly by chance, under the assumption that the hypothesis of independence is correct. Referring to the chi-square table, the value of chi-square that must be obtained from the data in order to be significant is called the critical value
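
In the card's notation (O observed, e expected), the statistic summed over the four cells is:

$$ \chi^2 = \sum \frac{(O - e)^2}{e} $$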

Purpose of Statistics

to draw inferences from samples of data to the population from which these samples came; we are specifically interested in estimating the true parameter of a population for which we have a sample statistic

Principles Underlying ANOVAs

under the null hypothesis, we would have the following situation: there would be one big population, and if we picked samples of a given size from it, we would have a set of sample means that vary around the grand mean of the whole population due to chance alone. If it turns out they vary around the grand mean more than we would expect by chance alone, then perhaps something other than chance is operating; perhaps they don't all come from the same population; perhaps something distinguishes the groups we have picked. We would then reject the null hypothesis that all the means are equal and conclude that the means differ from each other by more than just chance. Essentially, we want to know whether the variability of all the group means is substantially greater than the variability within each of the groups around their own means. To answer this, we calculate quantities known as the between-groups variance and the within-groups variance

ANOVA Table and "Error" Row

unexplained variance

Nonparametric Statistic

used when the data need not be normally distributed, typically when they are ordinal (numbers are usually assigned to the ordered categories)

McNemar Test

used when we have a before-and-after situation or a multiple-judges situation; these situations pertain to the same individuals, so the observations are not independent and the chi-square and Fisher's tests aren't appropriate. Null hypothesis: there is no difference (no disagreement). To calculate this, we do a special kind of chi-square (see picture). This test does not tell us about the strength of the agreement; for that, we have to use a statistic called Kappa

Within-Group Variance

variability of the scores within each group around its own mean

Summary So Far - Variables

variables may be quantified as discrete or continuous, and there are appropriate statistical techniques for each of these. We have considered here the chi-square test, confidence intervals, the Z-test, the t-test, analysis of variance, correlation, and regression. We have pointed out that in hypothesis testing we are subject to two kinds of error: rejecting a hypothesis when in fact it is true, and accepting a hypothesis when in fact it is false. The aim of a well-designed study is to minimize the probability of making these types of errors. Statistics will not substitute for good experimental design, but it is a necessary tool for evaluating scientific evidence obtained from well-designed studies

Sample Values and Population Values Revisited

we are always interested in estimating population values from samples. In some of the formulas and terms, we use population values as if we knew what they really are; of course we don't know the actual population values, but if we have very large samples, we can estimate them quite well from our sample data. For practical purposes, we will generally use and refer to techniques appropriate for small samples, since that is the more common situation and it is safer (i.e., it does no harm even if we have large samples)

Confidence Intervals for Proportions

we can construct a confidence interval around a proportion in a way similar to constructing confidence intervals around means. The 95% confidence limits for a proportion are p ± 1.96 SEp, where SEp is the standard error of a proportion

Wilcoxon Matched-Pairs Rank Sums Test

when the actual difference between matched pairs is not in itself a meaningful number but the researcher can rank the difference scores (as being larger or smaller for given pairs), the appropriate test is the Wilcoxon matched-pairs rank sums test - nonparametric

When Not to Do a Lot of t-Tests: the Problem of Multiple Tests of Significance

when there are three or more group means to be compared, the t-test is not appropriate. If we do multiple t-tests, we increase the chance of a type I error by compounding the significance level with each test; thus, the overall probability of a type I error would be considerably greater than the .05 we specified as desirable. In actual fact, the numbers are a little different because the three comparisons are not independent events (the same groups are used in more than one comparison), so combining probabilities in this situation would not involve the simple multiplication rule for the joint occurrence of independent events

Multiple Linear Regression

when we have two or more independent variables and a continuous dependent variable, we can use multiple regression analysis; the form this takes is shown in the picture. We can have as many variables as appropriate, where the last variable is the kth variable; the bᵢs are regression coefficients
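
The form referenced by the picture is not reproduced; assuming the standard multiple regression equation with k independent variables, it is:

$$ Y = a + b_1 X_1 + b_2 X_2 + \cdots + b_k X_k $$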

Central Tendency

when we wish to describe a population with regard to some characteristic, we generally use the mean or average as an index of central tendency of the data. The true population mean is m; the sample mean is x̅. Other measures of central tendency are the median and the mode

SE of the Difference between Two Proportions

where p̂ and q̂ are pooled estimates based on both treated and control group data
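
A minimal sketch of the pooled estimates and the resulting Z statistic for two proportions, on hypothetical counts:

```python
import math

# Hypothetical counts: successes and sample sizes in the treated and control groups
x1, n1 = 45, 100     # treated
x2, n2 = 30, 100     # control

p1, p2 = x1 / n1, x2 / n2
p_hat = (x1 + x2) / (n1 + n2)        # pooled proportion based on both groups
q_hat = 1 - p_hat
se_diff = math.sqrt(p_hat * q_hat * (1 / n1 + 1 / n2))
z = (p1 - p2) / se_diff
print(se_diff, z)                    # compare |z| with the critical value (1.96 for alpha = .05)
```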

95% CI

≈ ± 2 SE

99% CI

≈ ± 3 SE

