1.4C QM Lecture 5 (ch. 11 + 17)


Chi2 test

The chi2 test is a test we use to see whether two variables are associated with one another. It is used for categorical variables, in particular for nominal variables. Any test between frequencies or proportions of mutually exclusive categories (such as For, Maybe, and Against) requires the use of chi-square. If the obtained value is more extreme than the critical value, the null hypothesis is rejected; if the obtained value does not exceed the critical value, the null hypothesis remains the most attractive explanation.

Key points for chi2 test

- Always for nominal variables
- Because it is for nominal variables, we can only say something about whether or not there is independence; we cannot say anything about the direction of the association
- Can never make causal claims

Assumptions of one-sample t-tests

- Appropriate for continuous variables or ordinal data
- If we are working with ordinal data, we interpret those data as if they were interval variables; this requires the additional assumption that the steps between the categories are exactly the same
- The variable must be normally distributed
- The sample must have been drawn through random sampling
- The sample size must be large enough: above 30 (the minimum sample size for the central limit theorem to hold)

Assumptions of chi2 test

- The observations (the individuals or cases) must be independent
  - E.g., if some of the respondents in your sample live together and eat together, whether or not they eat veg is interdependent between those two respondents
- The sample size in each cell of your cross-tab must be sufficiently large
  - Watch out for cells with fewer than 5 observations
  - Rule of thumb: at most 20% of your cells can have a frequency of less than 5

So How Do I Interpret χ2(2) = 20.6, p < .05?

- χ2 represents the test statistic.
- (2) is the number of degrees of freedom.
- 20.6 is the value obtained by using the formula shown earlier in the chapter.
- p < .05 (the really important part of this little phrase) indicates that the probability is less than 5%, on any one test of the null hypothesis, that the frequency of votes is equally distributed across all categories by chance alone. Because we defined .05 as our criterion for deeming the research hypothesis more attractive than the null hypothesis, our conclusion is that there is a significant difference among the three sets of scores.

the two t-tests we need to know and the assumption made

- One-sample t-test: to determine whether an unknown population mean is different from a specific value. (How likely is it to find a certain sample mean, IF the population mean were equal to some value?)
- Independent samples t-test: comparing the means of two groups. (How likely is it that we find some mean difference in our sample, IF the mean difference in the population is zero?)

For both of these tests, we make an assumption: the data must be approximately normally distributed!

Steps to calculate chi2-test

1. Calculate expected frequencies: expected = (row marginal frequency × column marginal frequency) / total frequency
2. Calculate the chi2 test statistic
3. Calculate degrees of freedom: df = (rows − 1)(columns − 1)
4. Find the critical value in the table with the correct df; if the test statistic exceeds the critical value, reject H0
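
The four steps can be sketched in Python with a small hypothetical 2×2 cross-tab (party preference × eating veg; the cell counts are invented, and 3.841 is the standard table critical value for df = 1, α = 0.05):

```python
# Hypothetical 2x2 cross-tab: party preference (rows) x eats veg (columns)
observed = [[30, 20],   # Animal party: veg, non-veg
            [20, 30]]   # Other party:  veg, non-veg

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
total = sum(row_totals)

# Step 1: expected = (row marginal x column marginal) / total frequency
expected = [[r * c / total for c in col_totals] for r in row_totals]

# Step 2: chi2 = sum over all cells of (observed - expected)^2 / expected
chi2 = sum((o - e) ** 2 / e
           for o_row, e_row in zip(observed, expected)
           for o, e in zip(o_row, e_row))

# Step 3: df = (rows - 1)(columns - 1)
df = (len(observed) - 1) * (len(observed[0]) - 1)

# Step 4: compare the test statistic with the critical value
critical = 3.841  # chi2 table, df = 1, alpha = 0.05
print(chi2, df, chi2 > critical)  # 4.0 1 True -> reject H0
```

Here every expected count works out to 25, the statistic is 4.0, and 4.0 > 3.841, so H0 is rejected.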

Assumptions for an independent samples t-test

1. Continuous or ordinal data (N.B. ordinal variables are interpreted as if they were interval variables; this requires the additional assumption that the steps between categories are the same size)
2. Normally distributed
3. Random sampling
4. Large sample size (20+ per group)
5. Variances (s2) of the groups must be equal in the population

This last assumption rarely holds. Solution: SPSS automatically provides a test for this assumption, Levene's test, and a correction (penalty) if the variances are not equal.

Steps for a one-sample t-test

1. Formulate the hypothesis, now about the unobserved population
2. Calculate the test statistic, based on the sampling distribution: t = (sample mean − population mean) / standard error
3. Find the critical value for your alpha (first finding the degrees of freedom)
4. Compare the test statistic with the critical value and draw a conclusion

df: subtract one from the sample size for every number you estimate → −1 for the estimated standard error, so df = n − 1
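
As a sketch of the four steps, here is a one-sample t-test on a hypothetical sample of ten exam marks, testing H0: population mean = 5.5 (the critical value 2.262 is the two-tailed table value for α = 0.05, df = 9):

```python
import math
from statistics import mean, stdev

# Step 1: hypothetical sample; H0: population mean = 5.5
sample = [6.0, 5.5, 7.0, 6.5, 5.0, 6.0, 7.5, 5.5, 6.5, 6.0]
mu_0 = 5.5

# Step 2: t = (sample mean - hypothesised population mean) / standard error
se = stdev(sample) / math.sqrt(len(sample))  # estimated standard error
t = (mean(sample) - mu_0) / se

# Step 3: df = n - 1 (one SE estimated); critical value from a t-table
df = len(sample) - 1   # 9
critical = 2.262       # two-tailed, alpha = 0.05, df = 9

# Step 4: compare and conclude
print(round(t, 2), abs(t) > critical)  # 2.75 True -> reject H0
```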

Assumptions for a chi2 test

1. Observations must be independent (e.g., if some students live together and have a car together, they are dependent)
2. The sample size must be sufficiently large: watch out for cells with fewer than 5 observations (rule of thumb: at most 20% of cells can have a frequency below 5)

Central Limit Theorem

1. The mean of the sampling distribution is approximately equal to the population mean (or proportion): M ≈ 𝜇
2. The sampling distribution is Normal
3. The standard error is the standard deviation of the sampling distribution (an estimate, because we only have one sample)

With this we can do inferential statistics. (The sampling distribution is the hypothetical distribution of every possible sample we could draw from this population; the standard error is its standard deviation, i.e. the average deviation of all the sample means in the sampling distribution from the population mean.)
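
The theorem can be illustrated with a quick simulation. The population below is hypothetical and deliberately skewed (exponential with mean 2), yet the means of repeated samples of n = 50 still centre on the population mean with standard error ≈ σ/√n:

```python
import random
random.seed(42)

# Skewed hypothetical population: exponential with mean 1/0.5 = 2.0
n, reps = 50, 10_000
sample_means = [sum(random.expovariate(0.5) for _ in range(n)) / n
                for _ in range(reps)]

# CLT point 1: mean of the sampling distribution ~ population mean (2.0)
grand_mean = sum(sample_means) / reps

# CLT point 3: SD of the sampling distribution (the standard error)
# should be close to sigma / sqrt(n) = 2 / sqrt(50) ~ 0.283
se = (sum((m - grand_mean) ** 2 for m in sample_means) / reps) ** 0.5

print(round(grand_mean, 2), round(se, 2))  # roughly 2.0 and 0.28
```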

Degrees of Freedom (df)

A value, different for different statistical tests, that approximates the sample size or number of individual cells in an experimental design. Df are the number of independent pieces of information you have to make inferences about the population based on your sample. (video clip)

Chi2 - two-tailed test or one-tailed test?

Chi2 can only test a non-directional hypothesis, because you are only looking for association vs. no association. So: two-tailed.

Is it easier to reject the null hypothesis with a directional or non-directional hypothesis?

The critical value is less extreme for a one-sided test because the whole 5% is in one tail, so it is easier to reject H0 with a one-sided (directional) test. In practice, we often test non-directional hypotheses because:
- they are more conservative;
- if you can reject the two-sided test, you can also draw a conclusion about the direction of the association;
- a one-tailed test might raise the suspicion that you tailored your research hypothesis to fit your data.
(If you can reject the two-sided test, you automatically know the value is more extreme than the one-sided critical value: it is definitely out in the tail for a one-sided test.)

Degrees of freedom for a chi2 test

df = (rows − 1)(columns − 1)

Assumptions for a two-samples t-test

The groups must be independent of each other (hence "two-samples" t-test). Assumptions:
1. Continuous or ordinal data (N.B. ordinal variables are interpreted as interval variables, which requires the additional assumption that the steps between categories are the same; this does not work with proportions)
2. Variables must be normally distributed
3. Random sampling
4. Large sample size, 20+ per group (the CLT requires a minimum of 30, but for a two-samples t-test there is the additional assumption of at least 20 people in each group)
5. Variances of the groups must be equal in the population (e.g., the variability for men equals the variability for women)
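
A minimal sketch of the pooled-variance (equal variances assumed) version of this test, with two hypothetical groups of six scores each. SPSS would additionally run Levene's test and apply the correction if the variances differ; that step is not reproduced here, and the critical value is the table value for α = 0.05, df = 10:

```python
import math
from statistics import mean, variance

# Hypothetical scores for two independent groups
group1 = [5.0, 6.0, 7.0, 6.5, 5.5, 6.0]
group2 = [4.0, 5.0, 4.5, 5.5, 4.0, 5.0]

n1, n2 = len(group1), len(group2)
s1, s2 = variance(group1), variance(group2)  # sample variances (n - 1)

# Pooled variance (assumption 5: equal population variances)
sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
t = (mean(group1) - mean(group2)) / se

df = (n1 - 1) + (n2 - 1)  # 10
critical = 2.228          # two-tailed, alpha = 0.05, df = 10
print(round(t, 2), abs(t) > critical)  # 3.51 True -> reject H0
```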

Choosing a significance level

How conservative should we be?
- When in doubt, follow the norm: α = 0.05
- But it could be that we do not have enough power to find an effect that exists in the population (our sample was not big enough, or the effect is too small) → less conservative: α = 0.1
- Or maybe we have huge effects or huge samples and want to show that we are EXTRA confident about our conclusions → more conservative: α = 0.01 (or even α = 0.001!)

Difference between a non-directional and directional hypothesis for a t-test

Non-directional: α = 0.05 is split between both tails of the probability distribution (2.5% in each tail). Directional: we are only interested in values on one side of the distribution, so the full 5% sits in the one tail you are testing for.

When do we use the t-test?

One group → one-sample t-test: how likely is it to find a certain sample mean, IF the population mean were equal to some value? Two groups → two independent samples t-test. Variable combinations: nominal and ordinal (can also be chi2), nominal and interval, nominal and ratio. (One- or two-tailed test, depending on the hypothesis and how conservative we want to be about our conclusions.)

Significance vs. importance

People are often hunting for significant effects ("p-hacking"). The problem: an effect does not need to be IMPORTANT to be significant. With a very large sample, almost every difference will be significant, even tiny, unimportant ones. Solutions:
1. Report the effect size (e.g., how big is the difference between your sample and the hypothetical mean test mark?), so readers know whether the significant effect is also important
2. Always conduct two-tailed tests
3. Use a stricter α-level (e.g., α = 0.01 or α = 0.001)
Don't lose sight of importance: significance isn't everything. If the sample only found a difference of 0.3 marks, that may be significant, but is it important?

why does the t-distribution only give critical values?

Remember that there is a different t-distribution for each number of degrees of freedom. A table like our z-table would therefore be needed for every single number of df. A more efficient way to present the information from the t-distribution is to give just the critical values: the values that correspond to the key alpha levels often used in the social sciences.

One-tailed test (directional hypothesis) in SPSS

SPSS always reports the p-value for a two-tailed test, so if you have a directional hypothesis, divide the p-value by 2.
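
A one-line sketch with a hypothetical SPSS p-value, plus the usual caveat:

```python
# Hypothetical SPSS output: two-tailed p = 0.08
p_two_tailed = 0.08
p_one_tailed = p_two_tailed / 2  # 0.04 -> significant at alpha = 0.05
# Only valid if the sample effect lies in the hypothesised direction!
print(p_one_tailed)
```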

Does spss tell us the critical value for a chi2 test?

SPSS doesn't tell us the critical value, only the p-value

Shape of t-distribution and sample size

The shape of the t-distribution depends on the sample size: as the sample gets bigger, the t-distribution looks more like the standard normal distribution (just as the sample looks more like the population). Each number of degrees of freedom gives a different t-distribution. As df go up, the critical value moves closer to the centre of the distribution, and it becomes easier to reject H0, for both one- and two-sided tests.
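
This pattern is visible in any t-table; the snippet below just checks it with the standard two-tailed critical values for α = 0.05:

```python
# Two-tailed critical t-values for alpha = 0.05, from a standard t-table
critical_t = {5: 2.571, 10: 2.228, 30: 2.042, 120: 1.980}
z_critical = 1.960  # the standard-normal limit as df -> infinity

# As df grow, the critical value shrinks toward the z value,
# so rejecting H0 becomes easier with larger samples.
values = [critical_t[df] for df in sorted(critical_t)]
assert values == sorted(values, reverse=True)  # strictly decreasing
assert all(v > z_critical for v in values)     # always above 1.960
print(values)
```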

Cross-tabulation

Tables that show the frequencies for two or more variables, allowing us to compare the distributions of these variables and check for relationships. They contain three sets of information:
- Cell frequencies
- Marginal frequencies (cell frequencies totalled by row and by column)
- Total frequency

When can we use a chi2-test?

Variable combinations: nominal and nominal; nominal and ordinal (or a t-test); ordinal and ordinal.

Estimate standard error

We never know the standard error (the SD of the sampling distribution) exactly; we have to estimate it. This means applying a penalty to our calculations, to account for the greater uncertainty when we only know the sample statistics and nothing about the population. The penalty comes through the shape of the distribution we use to figure out how unlikely it is that we drew this sample: when we worked with z-scores and had full information, we used the standard normal distribution, but for the t-test we use a "penalized" distribution, the t-distribution. The bigger your sample, the closer the t-distribution gets to the normal distribution; remember, as the sample size increases, the sample distribution starts to look like the population distribution.

Degrees of freedom for independent samples t-tests

We have two samples, and TWO estimates of the variance (one for each sample); each "costs" one degree of freedom. So we calculate df = n − 1 for each sample and add the two together: df = (n1 − 1) + (n2 − 1).

what does p < α and p > α mean? ( α = 0.05)

p < α: reject H0; the research hypothesis is more plausible (significant). p > α: fail to reject H0; the null hypothesis is more plausible.

Reporting t-test results

t(df) = t-value, p = p-value (usually this form), or t(df) = t-value, p </=/> α. If the SPSS output shows the p-value as "0.000", report it as p < 0.001, because we never have zero chance; the chance is just so small that it rounds down to 0.

Calculating expected frequencies in chi2 test

The expected value for any cell is the row total multiplied by the column total, divided by the total number of observations (Salkind). In the video-clip example: expected count = (100/200, the proportion who voted for the animal party) × (100/200, the proportion who eat veg) × 200 (the total number of people in the cross-tab), which equals (100 × 100) / 200 = 50.
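
Both forms of the calculation give the same answer, which a two-line check confirms (using the lecture's numbers: 100 of 200 voted for the animal party, 100 of 200 eat veg):

```python
# One cell of the cross-tab, using the lecture's numbers
row_total, col_total, grand_total = 100, 100, 200

# Salkind's formula: (row total x column total) / total observations
expected = row_total * col_total / grand_total  # 50.0

# Equivalent "proportions" form used in the video clip
expected_alt = (row_total / grand_total) * (col_total / grand_total) * grand_total
print(expected, expected == expected_alt)  # 50.0 True
```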

critical value in t-test

The t-value that matches the alpha level we are interested in: we compare our calculated test statistic with the value of t corresponding to an area in the tail equal to alpha.

When do we use inferential statistics?

When the population parameters are unknown:
- In most cases, the population parameters μ and 𝜎 (population mean and SD) are unobserved
- We only have one sample and its statistics, X̄ and s (sample mean and SD)
- From these we must infer (or guess) what the population looks like: given our sample, what do we think the population looks like?

Z-scores and the normal distribution - why do we use these? What can we do with them? When can we use them?

• Why do we use them? To understand any data point's position in the distribution.
• What can we do with them? Compare the positions of data points in different distributions (remember: student in the UK vs. student in Argentina, Lecture 2), and know how common a value is (the "Empirical Rule", Lectures 2 & 3).
• When can we use them? When we have an approximately normally distributed variable; for descriptive statistics of a sample (or of the population, if the population parameters μ and 𝜎 are known!).

