Quantitative Statistics
Z-test vs. T-test
1. A Z-test is a statistical hypothesis test that follows a normal distribution, while a T-test follows a Student's T-distribution.
2. A T-test is appropriate when you are handling small samples (n < 30), while a Z-test is appropriate when you are handling moderate to large samples (n ≥ 30).
3. A T-test is more adaptable than a Z-test, since a Z-test requires conditions (notably a known population standard deviation) that are rarely met in practice; the T-test family also offers variants (one-sample, independent, paired) that suit most designs.
4. T-tests are more commonly used than Z-tests.
5. Z-tests are preferred over T-tests when the population standard deviation is known.
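To make the choice concrete, here is a minimal Python sketch (numpy and scipy assumed available, data made up) running both tests on the same small sample; the z-test plugs in a known population standard deviation, while the t-test estimates it from the sample:

```python
import numpy as np
from scipy import stats

# Hypothetical small sample (n < 30) and a hypothesized population mean
sample = np.array([12.1, 11.8, 12.4, 12.0, 11.7, 12.3, 12.2, 11.9])
mu0 = 12.0    # hypothesized population mean
sigma = 0.25  # population SD, assumed known for the z-test

n, xbar = len(sample), sample.mean()

# Z-test: uses the known population standard deviation
z = (xbar - mu0) / (sigma / np.sqrt(n))
p_z = 2 * stats.norm.sf(abs(z))  # two-tailed p-value

# T-test: estimates the standard deviation from the sample (n - 1 df)
t, p_t = stats.ttest_1samp(sample, mu0)

print(f"z = {z:.3f}, p = {p_z:.3f}")
print(f"t = {t:.3f}, p = {p_t:.3f}")
```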
Effect Size
A measure of practical significance: how much impact does your variable have, and how meaningful is the difference? (Common measures are Cohen's d and eta squared.) Effect size is a statistical concept that measures the strength of the relationship between two variables on a numeric scale. For instance, if we have data on the heights of men and women and we notice that, on average, men are taller than women, the difference between the two group means is the effect size. The greater the effect size, the greater the height difference between men and women. Effect size helps us determine whether a difference is practically meaningful or merely due to chance. In hypothesis testing, effect size, power, sample size, and the critical significance level are related to each other. In meta-analysis, effect sizes from different studies are combined into a single analysis. In statistical analysis, effect size is usually measured in one of three ways: (1) standardized mean difference, (2) odds ratio, (3) correlation coefficient.
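As a worked example, Cohen's d for two independent groups is the difference in means divided by the pooled standard deviation. A minimal sketch with made-up height data (numpy assumed available):

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    var1, var2 = np.var(group1, ddof=1), np.var(group2, ddof=1)
    pooled_sd = np.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / pooled_sd

# Hypothetical heights in centimeters
men = np.array([178.0, 182.0, 175.0, 180.0, 177.0, 183.0])
women = np.array([165.0, 170.0, 168.0, 163.0, 167.0, 169.0])

# Rough rule of thumb: d of 0.2 is small, 0.5 medium, 0.8 large
print(f"Cohen's d = {cohens_d(men, women):.2f}")
```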
One-way ANOVA
A one-way ANOVA refers to the number of independent variables, not the number of categories within each variable. A one-way ANOVA has just one independent variable. For example, differences in IQ can be assessed by country, and that single country variable can have 2, 20, or more categories.
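A minimal sketch with scipy's one-way ANOVA, using made-up IQ scores for three hypothetical countries (one independent variable with three categories):

```python
from scipy import stats

# Hypothetical IQ scores grouped by country (one IV, three categories)
country_a = [98, 102, 95, 101, 99]
country_b = [105, 108, 103, 110, 106]
country_c = [97, 100, 96, 99, 98]

f_stat, p_value = stats.f_oneway(country_a, country_b, country_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```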
Sample Size
The sample size is the number of individuals chosen from the population for a survey or experiment. For example, you might survey dog owners' brand preferences. You won't want to survey all the millions of dog owners in the country (it would be too expensive and time-consuming), so you take a sample; that may be several thousand owners. The sample stands in for all dog owners' brand preferences, and if you choose it wisely, it will be a good representation.
Population
A set of all the individuals of interest in a particular study
Sample
A set of individuals selected from a population. It is important to obtain a sample that is representative of the population!
T-Test
A t-test is used for testing the mean of one population against a standard, or comparing the means of two populations, when you do not know the population standard deviation and you have a limited sample (n < 30). If you know the population standard deviation, you may use a z-test instead. Example: Measuring the average diameter of shafts from a certain machine when you have a small sample.
Two-way ANOVA
A two-way ANOVA refers to an ANOVA using 2 independent variables. A two-way ANOVA can examine differences in IQ scores (the dependent variable) by Country (independent variable 1) and Gender (independent variable 2). Two-way ANOVAs can be used to examine the INTERACTION between the two independent variables. Interactions indicate that differences are not uniform across all categories of the independent variables. For example, females may have higher IQ scores overall compared to males, and the gap may be much greater in European countries than in North American countries. Two-way ANOVAs are also called factorial ANOVAs. Factorial ANOVAs can be balanced (the same number of participants in each group) or unbalanced (different numbers of participants in each group). Not having equal-size groups can make it appear that there is an effect when this may not be the case, and there are several procedures a researcher can use to address this problem.
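One way to fit a two-way ANOVA with an interaction term in Python is statsmodels' formula interface; the sketch below uses made-up, balanced IQ data (pandas and statsmodels assumed installed):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical balanced data: IQ by country and gender
df = pd.DataFrame({
    "iq":      [100, 104, 98, 110, 108, 112, 99, 101, 97, 115, 113, 117],
    "country": ["NorthAm", "NorthAm", "NorthAm", "Europe", "Europe", "Europe"] * 2,
    "gender":  ["M"] * 6 + ["F"] * 6,
})

# C() marks categorical factors; '*' expands to both main effects plus the interaction
model = ols("iq ~ C(country) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```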
Z-test
A z-test is used for testing the mean of a population versus a standard, or comparing the means of two populations, with large samples (n ≥ 30), whether you know the population standard deviation or not. It is also used for testing the proportion of some characteristic versus a standard proportion, or comparing the proportions of two populations. Example: Comparing the average engineering salaries of men versus women. Example: Comparing the fraction defective from 2 production lines. Running a z-test on your data requires five steps:
1. State the null hypothesis and alternate hypothesis.
2. Choose an alpha level.
3. Find the critical value of z in a z table.
4. Calculate the z test statistic, z = (x̄ − μ) / (σ / √n).
5. Compare the test statistic to the critical z value and decide whether to support or reject the null hypothesis.
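The five steps map directly onto a few lines of Python; a minimal sketch with scipy and made-up salary summaries:

```python
import numpy as np
from scipy import stats

# Hypothetical summary data: sample mean salary vs. a hypothesized standard
xbar, mu0 = 68500, 67000  # sample mean, hypothesized population mean
sigma, n = 6000, 50       # known population SD, large sample (n >= 30)

alpha = 0.05                             # step 2: choose an alpha level
z_crit = stats.norm.ppf(1 - alpha / 2)   # step 3: critical z (two-tailed)
z = (xbar - mu0) / (sigma / np.sqrt(n))  # step 4: the z test statistic

# Step 5: compare the statistic to the critical value
print(f"z = {z:.3f}, critical = ±{z_crit:.3f}")
print("Reject H0" if abs(z) > z_crit else "Fail to reject H0")
```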
F-Test and Matched Pair Test
An F-test is used to compare two populations' variances. The samples can be any size. It is the basis of ANOVA. Example: Comparing the variability of bolt diameters from two machines. A matched pair test is used to compare the means before and after something is done to the samples. A t-test is often used because the samples are often small, while a z-test is used when the samples are large. The variable analyzed is the difference between the before and after measurements. Example: The average weight of subjects before and after following a diet for 6 weeks.
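scipy does not ship a one-call two-variance F-test, but the statistic and p-value are easy to compute directly from the F distribution; a minimal sketch with made-up bolt diameters:

```python
import numpy as np
from scipy import stats

# Hypothetical bolt diameters (mm) from two machines
machine1 = np.array([10.02, 9.98, 10.05, 9.97, 10.01, 10.04])
machine2 = np.array([10.10, 9.90, 10.15, 9.85, 10.08, 9.95])

var1, var2 = np.var(machine1, ddof=1), np.var(machine2, ddof=1)
f = var1 / var2  # F statistic: ratio of the sample variances
df1, df2 = len(machine1) - 1, len(machine2) - 1

# Two-tailed p-value from the F distribution
p = 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))
print(f"F = {f:.3f}, p = {p:.3f}")
```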
Independent Variable (IV)
An independent variable, sometimes called an experimental or predictor variable, is a variable that is being manipulated in an experiment in order to observe the effect on a dependent variable, sometimes called an outcome variable.
ANOVA
Analysis of Variance - ANOVA extends the t-test and the z-test, which have the limitation of only allowing the nominal (grouping) variable to have two categories. ANOVAs are used in three ways: one-way ANOVA, two-way ANOVA, and N-way (multivariate) ANOVA.
Nominal (Categorical)
In a nominal level variable, values are grouped into categories that have no meaningful order. For example, gender and political affiliation are nominal level variables. Members in the group are assigned a label in that group and there is no hierarchy. Typical descriptive statistics associated with nominal data are frequencies and percentages.
Interval (Continuous)
In interval measurement the distance between attributes does have meaning. For example, when we measure temperature (in Fahrenheit), the distance from 30 to 40 is the same as the distance from 70 to 80. The interval between values is interpretable. Unlike ratio variables, however, interval variables have no true zero point (0 °F does not mean "no temperature").
Ratio (Continuous)
In ratio measurement there is always an absolute zero that is meaningful. This means that you can construct a meaningful fraction (or ratio) with a ratio variable. Weight is a ratio variable. In applied social research most "count" variables are ratio, for example, the number of clients in past six months. Why? Because you can have zero clients and because it is meaningful to say that "...we had twice as many clients in the past six months as we did in the previous six months."
x̄ (x-bar)
Sample mean
μ (mu)
Population mean
Ordinal (Categorical)
Ordinal level variables are nominal level variables with a meaningful order. For example, horse race winners can be assigned labels of first, second, third, fourth, etc. and these labels have an ordered relationship among them (i.e., first is higher than second, second is higher than third, and so on). As with nominal level variables, ordinal level variables are typically described with frequencies and percentages.
Parametric and Nonparametric
Parametric analyses test group means; nonparametric analyses test group medians. In the literal meaning of the terms, a parametric statistical test is one that makes assumptions about the parameters (defining properties) of the population distribution(s) from which one's data are drawn, while a non-parametric test is one that makes no such assumptions. In this strict sense, "non-parametric" is essentially a null category, since virtually all statistical tests assume one thing or another about the properties of the source population(s). For practical purposes, you can think of "parametric" as referring to tests, such as t-tests and the analysis of variance, that assume the underlying source population(s) to be normally distributed; they generally also assume that one's measures derive from an equal-interval scale. And you can think of "non-parametric" as referring to tests that do not make these particular assumptions.
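To make the contrast concrete, here is a minimal scipy sketch running a parametric test (independent t-test) and a common nonparametric counterpart (Mann-Whitney U) on the same made-up scores:

```python
from scipy import stats

group1 = [23, 25, 21, 30, 27, 24]  # hypothetical scores
group2 = [31, 29, 35, 28, 33, 30]

t_stat, p_t = stats.ttest_ind(group1, group2)  # parametric: compares means
u_stat, p_u = stats.mannwhitneyu(group1, group2, alternative="two-sided")  # rank-based

print(f"t-test:       t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {p_u:.4f}")
```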
Standard Deviation
Standard deviation (SD) measures the spread of a data distribution: the more spread out a data distribution is, the greater its standard deviation. A standard deviation close to 0 indicates that the data points tend to be close to the mean. The further the data points are from the mean, the greater the standard deviation.
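A quick numpy sketch; the ddof argument switches between the population formula (divide by n) and the sample formula (divide by n − 1):

```python
import numpy as np

data = np.array([2, 4, 4, 4, 5, 5, 7, 9])  # hypothetical data points, mean = 5

print(np.std(data, ddof=0))  # population SD (divide by n)      -> 2.0
print(np.std(data, ddof=1))  # sample SD (divide by n - 1)      -> ~2.14
```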
σ (sigma)
Population standard deviation
Two Sample T-Test
Tests whether the difference between the means of two independent populations is equal to a target value. Ex: Does the mean height of female college students significantly differ from the mean height of male college students?
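A minimal scipy sketch with made-up heights in inches (passing equal_var=False would give Welch's version, which does not assume equal variances):

```python
from scipy import stats

# Hypothetical heights (inches) of female and male college students
female = [64.2, 65.1, 63.8, 66.0, 64.7, 65.5]
male = [69.3, 70.1, 68.5, 71.2, 69.8, 70.4]

t_stat, p_value = stats.ttest_ind(female, male)  # assumes equal variances
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```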
One Sample T-Test
Tests whether the mean of a single population is equal to a target value. Ex: Is the mean height of female college students greater than 5.5 feet? The one sample t-test requires the sample data to be numeric and continuous, as it is based on the normal distribution. Continuous data can take on any value within a range (income, height, weight, etc.).
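A minimal sketch with made-up heights (assuming scipy 1.6+ for the alternative parameter):

```python
from scipy import stats

# Hypothetical heights (feet) of female college students
heights = [5.4, 5.6, 5.7, 5.5, 5.8, 5.6, 5.9, 5.5]

# One-sided test: is the mean greater than 5.5 feet?
t_stat, p_value = stats.ttest_1samp(heights, 5.5, alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```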
Degrees of Freedom (df)
The Freedom to Vary First, forget about statistics. Imagine you're a fun-loving person who loves to wear hats. You couldn't care less what a degree of freedom is. You believe that variety is the spice of life. Unfortunately, you have constraints. You have only 7 hats. Yet you want to wear a different hat every day of the week. On the first day, you can wear any of the 7 hats. On the second day, you can choose from the 6 remaining hats, on day 3 you can choose from 5 hats, and so on. When day 6 rolls around, you still have a choice between 2 hats that you haven't worn yet that week. But after you choose your hat for day 6, you have no choice for the hat that you wear on Day 7. You must wear the one remaining hat. You had 7-1 = 6 days of "hat" freedom—in which the hat you wore could vary! That's kind of the idea behind degrees of freedom in statistics. Degrees of freedom are often broadly defined as the number of "observations" (pieces of information) in the data that are free to vary when estimating statistical parameters.
Dependent Variable (DV)
The dependent variable is simply that: a variable that depends on one or more independent variables. The DV is not changed or manipulated by the researcher; it is observed and measured as the outcome.
Null Hypothesis
The hypothesis that there is no significant difference between specified populations, any observed difference being due to sampling or experimental error. Null hypothesis testing starts from a hypothesized population parameter and asks whether the sample we have collected could plausibly have been drawn from that population.
Independent T-Test
The independent-samples t-test is used to determine if a difference exists between the means of two independent groups on a continuous dependent variable.
Paired T-Test
The paired sample t-test, sometimes called the dependent sample t-test, is a statistical procedure used to determine whether the mean difference between two sets of observations is zero. In a paired sample t-test, each subject or entity is measured twice, resulting in pairs of observations. Ex: If you measure the weight of male college students before and after each subject takes a weight-loss pill, is the mean weight loss significant enough to conclude that the pill works? The paired sample t-test requires the sample data to be numeric and continuous, as it is based on the normal distribution. Continuous data can take on any value within a range (income, height, weight, etc.). The opposite of continuous data is discrete data, which can take on only a limited set of values (Low, Medium, High, etc.). Occasionally, discrete data can be used to approximate a continuous scale, such as with Likert-type scales.
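A minimal scipy sketch with made-up before/after weights for the weight-loss example:

```python
from scipy import stats

# Hypothetical weights (lbs) before and after taking the pill
before = [185, 192, 178, 201, 188, 195]
after = [180, 190, 175, 196, 184, 191]

# Paired test: each subject is measured twice
t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```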
Power
The probability that a statistical test will correctly reject a false null hypothesis. Power equals 1 − β, where β is the probability of a Type II error.
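Because power, effect size, sample size, and alpha are linked, fixing any three determines the fourth. A minimal sketch with statsmodels (assumed installed) that solves for the per-group sample size needed to reach 80% power:

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group for a medium effect (d = 0.5), alpha = .05, power = .80
n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group = {n:.0f}")  # roughly 64
```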
Chi-square Test of Independence
The test is applied when you have two categorical variables from a single population. It is used to determine whether there is a significant association between the two variables. For example, in an election survey, voters might be classified by gender (male or female) and voting preference (Democrat, Republican, or Independent). We could use a chi-square test for independence to determine whether gender is related to voting preference. When to use:
- The sampling method is simple random sampling.
- The variables under study are each categorical.
- If sample data are displayed in a contingency table, the expected frequency count for each cell of the table is at least 5.
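A minimal scipy sketch using a made-up gender-by-preference contingency table:

```python
from scipy import stats

# Hypothetical contingency table: gender (rows) x voting preference (columns)
#                  Dem  Rep  Ind
observed = [[120,  90,  40],   # male
            [110, 105,  35]]   # female

chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```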
Type I and Type II Errors
Type I Error - A Type I error is the incorrect rejection of a true null hypothesis; its probability is denoted by the alpha symbol, α. Type II Error - A Type II error is the failure to reject a false null hypothesis; its probability is denoted by the beta symbol, β.
Sampling Error
When you only survey a small sample of the population, uncertainty creeps into your statistics. If you can only survey a certain percentage of the true population, you can never be 100% sure that your statistics are a complete and accurate representation of the population. This uncertainty is called sampling error and is usually measured by a confidence interval. For example, you might state that your results are at a 90% confidence level: if you were to repeat your survey over and over, about 90% of the intervals computed this way would capture the true population value.
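For example, a 90% confidence interval for a mean can be computed as mean ± t* × (s / √n). A minimal sketch with made-up survey scores (numpy and scipy assumed available):

```python
import numpy as np
from scipy import stats

sample = np.array([7.2, 6.8, 7.5, 7.0, 6.9, 7.3, 7.1, 6.7])  # hypothetical scores
n = len(sample)
mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(n)  # standard error of the mean

t_crit = stats.t.ppf(0.95, df=n - 1)   # 90% CI leaves 5% in each tail
print(f"90% CI: {mean - t_crit * sem:.2f} to {mean + t_crit * sem:.2f}")
```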
Chi-Square Goodness of Fit
The test is applied when you have one categorical variable from a single population. It is used to determine whether sample data are consistent with a hypothesized distribution. For example, suppose a company printed baseball cards. It claimed that 30% of its cards were rookies; 60%, veterans; and 10%, All-Stars. We could gather a random sample of baseball cards and use a chi-square goodness of fit test to see whether our sample distribution differed significantly from the distribution claimed by the company.
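A minimal scipy sketch with a made-up sample of 100 cards checked against the company's claimed 30/60/10 split:

```python
from scipy import stats

# Hypothetical counts from a random sample of 100 baseball cards
observed = [25, 63, 12]  # rookies, veterans, All-Stars
expected = [30, 60, 10]  # counts implied by the claimed 30%/60%/10% split

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```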