ANOVA


t test

"ANOVA's younger sibling."

What are the # of levels? Ex: You want to know how gait speed varies based on age and gender. Define IV, DV, and levels.

The number of groups within each independent variable. DV: gait speed. IVs: age and gender. Age has 3 levels (groups); gender has 2 levels (groups).

What are the three assumptions of ANOVA?

1. independence of observations 2. normality 3. homogeneity of variance

What is a 2 way ANOVA?

2 independent variables

What is an ANOVA?

ANOVA is an analysis of variance between groups (or levels of a factor) and within groups (or error). Whereas t-tests compare only two sample distributions, ANOVA can compare many. We partition the total variance into how group means differ from the grand mean and how individual observations within groups differ from their group's mean.
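
As a minimal sketch of how this looks in practice, the following Python example runs a one-way ANOVA on three hypothetical groups (the gait-speed values and group sizes are made up for illustration; scipy is assumed to be available):

from scipy import stats

# Hypothetical gait speeds (m/s) for three age groups -- values are illustrative only
young  = [1.42, 1.38, 1.45, 1.40, 1.36]
middle = [1.30, 1.34, 1.28, 1.33, 1.31]
older  = [1.10, 1.15, 1.08, 1.12, 1.14]

# One-way ANOVA: compares between-group variation to within-group variation
f_stat, p_value = stats.f_oneway(young, middle, older)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")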

What is the null hypothesis?

All means are equal μ1 = μ2 = μ3 = μ4

What is the F statistic or F ratio?

An F statistic is a value you get when you run an ANOVA test or a regression analysis to find out whether the means of two or more populations are significantly different. It is similar to the t statistic from a t-test: a t-test will tell you if a single variable is statistically significant, while an F test will tell you if a group of variables is jointly significant.

What do main effect and interaction effect have in common?

Assessing effect of the IV

You want to test for sex differences in gait velocity among 3 age groups used earlier. What is the "main effect"?

Average effect for each independent variable. Main effect for sex (combining all ages): "What is the effect of sex on gait speed? Is there a difference in gait speed if you're male versus female? Do men and women have different gait speeds?" Main effect for age (combining both genders): "Does gait speed depend on age?"

degrees of freedom

Degrees of freedom of an estimate is the number of independent pieces of information that went into calculating the estimate. It's not quite the same as the number of items in the sample: to get the df for the estimate, you have to subtract 1 from the number of items. Another way to look at degrees of freedom is that they are the number of values that are free to vary in a data set. Degrees of freedom become a little more complicated in ANOVA tests. Instead of estimating a single parameter (like a mean), ANOVA tests involve comparing known means in sets of data. For example, in a one-way ANOVA with two groups you are comparing two means. The grand mean (the average of the averages) would be (mean 1 + mean 2) / 2. If you chose mean 1 and you knew the grand mean, you wouldn't have a choice about mean 2, so your degrees of freedom for a two-group ANOVA is 1.
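
As a worked summary (a sketch using the notation from the symbols card below, where a is the number of groups and N is the total sample size):

\[ df_{\text{between}} = a - 1, \qquad df_{\text{within}} = N - a, \qquad df_{\text{total}} = N - 1 \]

For example, 3 age groups with N = 60 participants give df_between = 2, df_within = 57, and df_total = 59.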

You want to test for sex differences in gait velocity among 3 age groups used earlier. What is the interaction asking?

Does the effect of gender (on gait speed) DEPEND on how old you are? Does the effect of age (on gait speed) DEPEND on sex? or: For gait speed, is there a different trend of increasing age for women versus men?
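
A hedged sketch of how a 3 (age group) × 2 (sex) ANOVA with an interaction term could be set up in Python with statsmodels; the file name and column names (gait_speed, age_group, sex) are hypothetical:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("gait.csv")  # assumed columns: gait_speed, age_group, sex

# The C(age_group):C(sex) term asks whether the effect of sex depends on age
model = ols("gait_speed ~ C(age_group) + C(sex) + C(age_group):C(sex)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))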

partial eta squared

Eta squared is the proportion of variance associated with one or more main effects, errors or interactions in ANOVA. In other words, we know there is a difference but how big a difference?
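
As a worked formula (a standard definition, not specific to any one software package):

\[ \eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}} \]

whereas plain eta squared divides by the total sum of squares: \( \eta^2 = SS_{\text{effect}} / SS_{\text{total}} \).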

variability

How do things differ from the average? Look at variance, standard deviation, range, and interquartile range
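
A minimal Python sketch of these four measures, using made-up gait-speed values and assuming numpy is available:

import numpy as np

x = np.array([1.10, 1.15, 1.08, 1.12, 1.14, 1.30])  # hypothetical gait speeds (m/s)

print(np.var(x, ddof=1))               # sample variance
print(np.std(x, ddof=1))               # sample standard deviation
print(x.max() - x.min())               # range
q75, q25 = np.percentile(x, [75, 25])
print(q75 - q25)                       # interquartile range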

t test versus f test

Hypothesis testing starts with setting up the premises, which is followed by selecting a significance level. Next, we have to choose the test statistic, i.e. t-test or F-test. While the t-test is used to compare the means of two samples, the F-test is used to test the equality of the variances of two populations. The t-test is a univariate hypothesis test that is applied when the standard deviation is not known and the sample size is small. The F-test is a statistical test that determines the equality of the variances of two normal populations.

Covariate

In general terms, covariates are characteristics (excluding the actual treatment) of the participants in an experiment. If you collect data on characteristics before you run an experiment, you could use that data to see how your treatment affects different groups or populations. Or, you could use that data to control for the influence of any covariate. Covariates may affect the outcome in a study. For example, you are running an experiment to see how corn plants tolerate drought. Level of drought is the actual "treatment", but it isn't the only factor that affects how plants perform: size is a known factor that affects tolerance, so you would run plant size as a covariate. A covariate can be an independent variable (i.e. of direct interest) or it can be an unwanted, confounding variable. Adding a covariate to a model can increase the accuracy of your results.
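
A hedged sketch of adding a covariate to the model (ANCOVA-style) in Python with statsmodels, following the corn example; the file name and column names (yield_kg, drought_level, plant_size) are hypothetical:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("corn.csv")  # assumed columns: yield_kg, drought_level, plant_size

# drought_level is the treatment factor; plant_size is the continuous covariate
model = ols("yield_kg ~ C(drought_level) + plant_size", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))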

What is the interaction ?

Is there an interaction BETWEEN the independent variables? Analyze all subgroups for significant differences

What is the mean square formula?

Mean square = sum of squares divided by its degrees of freedom. In ANOVA, mean squares are used to determine whether factors (treatments) are significant. Mean square between represents the variation between the sample means.
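
Restated in symbols (a standard formulation, consistent with the F ratio card above):

\[ MS = \frac{SS}{df}, \qquad F = \frac{MS_{\text{between}}}{MS_{\text{within}}} \]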

mean square

Mean squares are estimates of variance across groups. Mean squares are used in analysis of variance and are calculated as a sum of squares divided by its appropriate degrees of freedom. Mean square between groups compares the group means to the grand mean; mean square within groups reflects the variation of observations around their own group means.

symbols

N = total sample size; n = subsample size; SS = sum of squares; MS = mean square; df = degrees of freedom; a = number of groups (or levels of a categorical variable, or factor), used in calculating some df; Y = the dependent variable; μ (mu) = mean; i = identifying number of an individual within a group (or level); j = identifying number of a group (or level)

When do you look at post-hoc tests?

ONLY IF overall F value is significant

Observed Power

Observed power (or post-hoc power) is the statistical power of the test you have performed, based on the effect size estimate from your data. Statistical power is the probability of finding a statistical difference from 0 in your test (aka a 'significant effect'), if there is a true difference to be found.
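
As a hedged sketch, statsmodels can compute power for a one-way ANOVA design; the effect size, total n, and number of groups below are illustrative values, not taken from the cards:

from statsmodels.stats.power import FTestAnovaPower

# Power of a one-way ANOVA with 3 groups, 60 participants total, alpha = .05,
# and an assumed effect size (Cohen's f) of 0.25
power = FTestAnovaPower().power(effect_size=0.25, nobs=60, alpha=0.05, k_groups=3)
print(f"power = {power:.2f}")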

You want to test for sex differences in gait velocity among 3 age groups used earlier. What is the null hypothesis for each main effect? What is the null for the interaction?

One null for each main effect. Age: μ1 = μ2 = μ3. Sex: μ males = μ females. One null for the interaction: the male-female difference in gait speed is the same at every age (e.g., μ1 males − μ1 females = μ2 males − μ2 females = μ3 males − μ3 females).

What is the purpose of a post-hoc test?

Purpose: Discover which pairs of scores are significantly different -Preserves FAMILY-WISE PROTECTION against Type I error

Effect Size

Statistical testing is not enough. With large samples (lots of power) very small effects can be significant...but are they important? Effect sizes help us to decide. The terms "measure of association" and "effect size" both mean the same thing: quantifying the relationship between two groups. It's more common to talk about effect size in the medical field, when you want to know how exposure is related to disease (i.e. what effect does exposure have on disease outcome?); "measure of association" is used informally to mean the same thing in most other fields. Measure of association can also refer to specific tests for relationships, such as the chi-square test of independence, odds ratio, proportionate mortality ratio, rate ratio, and risk ratio (relative risk). The effect size is how large an effect of something is. For example, medication A is better than medication B at treating depression, but how much better? A traditional hypothesis test will not give you that answer: medication A could be ten times better or only slightly better. That variability (twice as much? ten times as much?) is what is called an effect size. Most statistical research includes a p value; it can tell you which treatment, process, or other investigation is statistically more sound than the alternative, but it tells you practically nothing else. Effect size can tell you how large the difference is between groups, the absolute effect (the difference between the average outcomes of two groups), and the standardized effect size for an outcome. Three common measures in ANOVA are omega squared, epsilon squared, and eta squared.

Estimated Marginal Means

The Estimated Marginal Means in SPSS GLM tell you the mean response for each factor, adjusted for any other variables in the model. If all factors (aka categorical predictors) were manipulated, these factors should be independent. Or at least they will be if you randomly assigned subjects to conditions well.

What does a SIGNIFICANT INTERACTION tell us?

This tells us that any main effects may be MISLEADING or MEANINGLESS

ordinal variable

a qualitative variable that incorporates an order position, or ranking; ordinal scale examples: class rankings, SES, Likert scale

skewness

a statistical measure indicating the degree of asymmetry of the distribution around the mean; within ±2: normal

standard deviation

roughly, the average deviation from the mean; calculated as the square root of the variance

post hoc comparisons

comparisons explored afterwards

Three basic types of quantitative research designs

experimental (random, equal groups); quasi-experimental (may use random; may have self selection); observational/phenomenological/descriptive

omnibus test

permits analysis of several variables or variable levels at the same time; in ANOVA, can use F test for differences between groups

alpha level

probability required for significance; aka rejection rule; usually .05

variance

the average of the squared deviations from the mean (the sum of squares divided by its degrees of freedom)

paired sample t test

usually based on groups of individuals who experience both conditions of the variable of interest. For instance, one study might examine the effects of Drug A versus Drug B on a single sample of 100 diabetics. Subjects in this sample would receive Drug A one week, and Drug B the next; participants receive both drug/stimulus conditions
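
A minimal Python sketch of a paired t-test, with made-up outcome scores for the same five subjects under each drug (scipy assumed available):

from scipy import stats

# Hypothetical outcome scores for the same 5 subjects under Drug A and Drug B
drug_a = [7.1, 6.8, 7.4, 6.9, 7.0]
drug_b = [6.2, 6.5, 6.4, 6.1, 6.3]

# Paired t-test: each subject serves as their own control
t_stat, p_value = stats.ttest_rel(drug_a, drug_b)
print(t_stat, p_value)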

What is the Alternative (one way) hypothesis?

At least one mean differs from the others; not all of μ1, μ2, μ3, μ4 are equal (there is at least ONE difference between groups)

ways to correct for family wise type I error

LSD (least significant difference), Bonferroni, Sidak
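
A hedged sketch of applying the Bonferroni and Šidák corrections to a family of p-values in Python with statsmodels; the p-values are made up for illustration (LSD, by contrast, is an uncorrected follow-up and has no counterpart here):

from statsmodels.stats.multitest import multipletests

pvals = [0.01, 0.04, 0.03, 0.20]  # hypothetical p-values from a family of comparisons

for method in ("bonferroni", "sidak"):
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, reject, p_adj.round(3))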

statistically significant

Simply put, if you have a significant result, it means your results likely did not happen by chance. If you don't have statistically significant results, your test doesn't show an effect; in other words, you can't reject the null hypothesis. In general, if your calculated F value in a test is larger than the critical F value, you can reject the null hypothesis. However, the F statistic is only one measure of significance in an F test; you should also consider the p value. The p value is determined by the F statistic and is the probability your results could have happened by chance.

general linear model

The General Linear Model (GLM) is a useful framework for comparing how several variables affect different continuous variables. In its simplest form, GLM is described as: Data = Model + Error (Rutherford, 2001, p. 3). The test statistic measures the degree to which the data depart from what is expected under the null hypothesis, and is based on the sums of squares. In other words: is the variability between groups greater than that expected on the basis of the within-group variability? ANOVA is a special case of the general linear model.

family-wise error rate

The familywise error rate (FWE or FWER) is the probability of coming to at least one false conclusion in a series of hypothesis tests. In other words, it's the probability of making at least one Type I error. The term "familywise" comes from family of tests, which is the technical term for a series of tests on the same data. The FWER is also called alpha inflation or cumulative Type I error.
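
As a worked formula (the standard result for m independent comparisons, each tested at level α):

\[ FWER = 1 - (1 - \alpha)^{m} \]

For example, with α = .05 and m = 10 comparisons, FWER ≈ 1 − .95¹⁰ ≈ .40.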

type II error

false negative (e.g. very pregnant woman told not pregnant)

type I error

false positive (e.g. male pregnant)

continuous variable

variable that takes on an infinite number of different values presented on a continuum; examples: time, weight, income, age. Only continuous DVs in this class!!

post hoc test

Post hoc (Latin, meaning "after this") tests analyze the results of your experimental data after the fact. They are often based on a familywise error rate: the probability of at least one Type I error in a set (family) of comparisons. The most common post hoc tests are: Bonferroni procedure, Duncan's new multiple range test (MRT), Dunn's multiple comparison test, Fisher's least significant difference (LSD), Holm-Bonferroni procedure, Newman-Keuls, Rodger's method, Scheffé's method, Tukey's test (see also: studentized range distribution), Dunnett's correction, and the Benjamini-Hochberg (BH) procedure.
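
A hedged sketch of one of these, Tukey's HSD, in Python with statsmodels; the gait-speed values and group labels are made up for illustration:

import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "speed": [1.42, 1.38, 1.45, 1.30, 1.34, 1.28, 1.10, 1.15, 1.08],
    "age":   ["young"] * 3 + ["middle"] * 3 + ["older"] * 3,
})

# Pairwise comparisons of all age groups with family-wise error control
print(pairwise_tukeyhsd(endog=df["speed"], groups=df["age"], alpha=0.05))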

f value in ANOVA

SPSS calculates the F value. The F value in one-way ANOVA is a tool to help you answer the question "Are the means of the populations (groups) significantly different?" The F value in the ANOVA test also determines the p value; the p value is the probability of getting a result at least as extreme as the one that was actually observed, given that the null hypothesis is true. The p value is a probability, while the F ratio is a test statistic, calculated as: F value = variance of the group means (mean square between) / mean of the within-group variances (mean square error)

p value

The F statistic must be used in combination with the p value when you are deciding if your overall results are significant. Why? If you have a significant result, it doesn't mean that all your variables are significant; the F statistic is comparing the joint effect of all the variables together. If the p value is less than the alpha level, the overall result is significant (otherwise your results are not significant and you cannot reject the null hypothesis). A common alpha level for tests is 0.05. Then study the individual p values to find out which of the individual variables are statistically significant.

total sum of squares

The Total SS tells you how much variation there is in the dependent variable. In statistical data analysis the total sum of squares (TSS or SST) is a quantity that appears as part of a standard way of presenting results of such analyses. It is defined as being the sum, over all observations, of the squared differences of each observation from the overall mean.

Sum of squares

The residual sum of squares is used to help you decide if a statistical model is a good fit for your data. It measures the overall difference between your data and the values predicted by your estimation model (a "residual" is a measure of the distance from a data point to a regression line). The sum of the squared deviations, (X − X̄)², is called the sum of squares, or more simply SS; it gives rise to the variance. In ANOVA you can calculate the sums of squares TOTAL, WITHIN, and BETWEEN.
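
As a worked summary of that partition (using the notation from the symbols card, where i indexes individuals within groups and j indexes groups):

\[ SS_{\text{total}} = \sum_{j}\sum_{i} (Y_{ij} - \bar{Y})^2 = SS_{\text{between}} + SS_{\text{within}}, \]
where \( SS_{\text{between}} = \sum_{j} n_j (\bar{Y}_j - \bar{Y})^2 \) and \( SS_{\text{within}} = \sum_{j}\sum_{i} (Y_{ij} - \bar{Y}_j)^2 \).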

sum of squares

The sum of the squared deviations, (X-Xbar)², is also called the sum of squares or more simply SS. SS represents the sum of squared differences from the mean and is an extremely important term in statistics. In a regression analysis, the goal is to determine how well a data series can be fitted to a function which might help to explain how the data series was generated. In the context of ANOVA, this quantity is called the total sum of squares (abbreviated SST) because it relates to the total variance of the observations.

Univariate

Univariate analysis is the simplest form of analyzing data. "Uni" means "one," so in other words your data have only one variable. It doesn't deal with causes or relationships (unlike regression), and its major purpose is to describe: it takes data, summarizes that data, and finds patterns in the data. Some ways you can describe patterns found in univariate data include central tendency (mean, mode, and median) and dispersion: range, variance, maximum, minimum, quartiles (including the interquartile range), and standard deviation. Common ways to display univariate data include frequency distribution tables, bar charts, and histograms.

independent sample t test

When making simple, straightforward comparisons of the means of two groups (one independent variable with two levels), the independent-samples t-test is usually the statistic of choice. Example: two independent samples of high school seniors (60 boys; 60 girls) to see if there are gender differences on a vocabulary test.
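
A minimal Python sketch with made-up vocabulary scores for two independent groups (scipy assumed available):

from scipy import stats

# Hypothetical vocabulary test scores for two independent samples
boys  = [72, 68, 75, 70, 66, 74]
girls = [78, 74, 80, 76, 73, 79]

# Independent-samples t-test (equal variances assumed by default)
t_stat, p_value = stats.ttest_ind(boys, girls)
print(t_stat, p_value)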

categorical variable

a variable that names categories (whether with words or numerals); examples: hair color, gender. Only categorical IVs in this class!!

pairwise comparisons

comparisons of each possible pair of means; each comparison has its own new null hypothesis

a priori comparisons

comparisons planned beforehand; if the hypotheses are truly a priori, we do not need to correct for family-wise Type I error

psi equation

a contrast expressed as an equation: each level of the IV is given a weight, and the contrast is the sum of each weight multiplied by the mean of that level

orthogonality

helps keep us honest and sane; ensures we do not violate the spirit of a priori contrasts by making contrasts that are redundant (overlapping)

kurtosis

how flat or peaked a normal distribution is; within +- 2: normal

family wise error

the probability of making one or more false discoveries (Type I errors) when performing multiple hypothesis tests

independent variable (IV)

the factor being manipulated by the experimenter; the thing we think affects other things; can be continuous, ordinal, or categorical, but always categorical in this course; also called factors or effects; each IV has greater than or equal to 2 levels (number depends on what was measured or reported); careful about collapsing levels

dependent variable (DV; Y)

the factor being measured (i.e., the result of interest); the thing we think is affected by other things; always measured; can be continuous, ordinal, or categorical, but only continuous in this course

