Psychology Statistics
Degrees of Freedom (df)
(n-1)
Effect on Power
-Power is a function of several variables
-It is a function of α (the probability of Type I error)
-It is a function of the true alternative hypothesis
-It is a function of sample size
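As a sketch of the sample-size point, a small Monte Carlo simulation can estimate power directly; the effect size 0.5, the one-sided α = .05 cutoff (z = 1.645), and the sample sizes below are all assumed values for illustration, not from the cards.

```python
import random
from statistics import fmean

def simulated_power(delta, n, z_crit=1.645, reps=2000, seed=0):
    """Monte Carlo power of a one-sided z-test with known sigma = 1.

    delta  : true mean under the alternative (hypothetical effect size)
    n      : sample size
    z_crit : cutoff for alpha = .05, one-sided (assumed here)
    """
    rng = random.Random(seed)
    cutoff = z_crit / n ** 0.5  # reject H0 when the sample mean exceeds this
    hits = sum(
        fmean(rng.gauss(delta, 1) for _ in range(n)) > cutoff
        for _ in range(reps)
    )
    return hits / reps

# Holding alpha and the true effect fixed, power rises with n.
print(simulated_power(0.5, 10), simulated_power(0.5, 40))
```

Changing `z_crit` (i.e., α) or `delta` (the true alternative) in the same sketch shows the other two dependencies the card lists.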
What are the two possible mistakes we can make with our null hypothesis testing?
-We can reject the null hypothesis when it is true (false positive): Type I Error
-We can fail to reject the null hypothesis when it is false (false negative): Type II Error
A common method used to compare several means while keeping the Type I error rate small is known as what?
Analysis of Variance
One-Way ANOVA
an analysis of variance that uses only one independent variable
SS A + SS E = what
SS T
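A quick numeric check of this partition, using three invented groups of scores:

```python
from statistics import fmean

# three hypothetical groups of scores
groups = [[4, 5, 6], [7, 8, 9], [1, 2, 3]]
scores = [x for g in groups for x in g]
grand_mean = fmean(scores)

# SS_T: every score vs. the Grand Mean
ss_t = sum((x - grand_mean) ** 2 for x in scores)
# SS_A: every score replaced by its group mean, vs. the Grand Mean
ss_a = sum(len(g) * (fmean(g) - grand_mean) ** 2 for g in groups)
# SS_E: every score vs. its own group mean
ss_e = sum((x - fmean(g)) ** 2 for g in groups for x in g)

print(ss_a, ss_e, ss_t)  # SS_A + SS_E adds up to SS_T
```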
Dependent Samples of t-test
Two samples are dependent when the observations of one sample are linked (dependent) to the observations in the other sample
Independent Samples of t-test
Two samples are independent when the observations of one sample are independent (does not influence) the observations in the other sample
Treatment Sum of Squares
You replace each score by its respective group mean, subtract the Grand Mean from it, square the result, and add them all up
Interaction Sum of Squares
You replace each score by the mean of its cell, subtract the Grand Mean, square the result, and sum them all up; subtracting the two main-effect SS from this between-cells SS leaves the interaction SS
Error Sum of Squares
You take each score, subtract from it the mean of its group, square the result and add them all up. Then add the individual SS of each group.
Total Sum of Squares
You take each variable, subtract from it the Grand Mean, square the result and add them all up
Two-Way ANOVA
a factorial design in which there are two factors (i.e., two independent variables)
Null hypothesis (H0)
a statement about the value of a population parameter *generally states no effects
Sample
a subset of the population of interest, a part of the population
How do you find the average of SS a?
to average SS A, we divide SS A by its degrees of freedom, which relate to the number of groups (number of groups − 1)
Normal Distribution
characterized by its mean μ and standard deviation σ
The distribution of t varies as a function of what?
degrees of freedom
Confidence Interval
takes the form point estimate ± margin of error *the point estimate is x̄
Population Mean (μ)
is a parameter estimated from the data
ANOVA
is a statistical technique for testing/comparing differences in the means of several groups
Variance
is the standard deviation squared: s^2 = ∑(x − x̄)^2 / (n − 1)
Standard Normal Distribution
it has a mean of 0 and a standard deviation of 1 *z = (x − μ)/σ
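A one-line check of the z formula with made-up numbers on an IQ-style scale (μ = 100, σ = 15):

```python
x, mu, sigma = 130, 100, 15   # hypothetical score and population parameters
z = (x - mu) / sigma          # standardize: distance from mu in sigma units
print(z)  # → 2.0
```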
The groups that make up the independent variable are known as what?
levels
What happens if you add an extra data point?
the bigger n gets, the bigger the SS, because each new data point contributes another squared deviation
Mean Square
to obtain the average deviation, we divide the SS by the degrees of freedom
F-statistic
obtained by dividing MS A by MS E *if F exceeds its critical value (not merely 1), we reject the null of the ANOVA
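Continuing with invented group data, the F-ratio is just the two mean squares divided:

```python
from statistics import fmean

groups = [[4, 5, 6], [7, 8, 9], [1, 2, 3]]   # hypothetical scores
scores = [x for g in groups for x in g]
grand_mean = fmean(scores)
k, N = len(groups), len(scores)

ss_a = sum(len(g) * (fmean(g) - grand_mean) ** 2 for g in groups)
ss_e = sum((x - fmean(g)) ** 2 for g in groups for x in g)

ms_a = ss_a / (k - 1)   # SS_A averaged over df_A = k - 1
ms_e = ss_e / (N - k)   # SS_E averaged over df_E = N - k
f_stat = ms_a / ms_e
print(f_stat)  # → 27.0 for these data; compare it against the F critical value
```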
Hypothesis
statement about a population
Alternative Hypothesis (H1 or Ha)
statement that the population parameter falls in an alternative range of values than those stated in the null hypothesis
variance sum law
states that the variance of the sum or the difference of two independent variables is equal to the sum of their variances
Standard Deviation (s)
take the square root of the variance *s = √(∑(x_i − x̄)^2 / (n − 1))
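A check that the n − 1 formula matches Python's sample standard deviation, on invented data:

```python
from statistics import stdev

data = [2, 4, 4, 4, 5, 5, 7, 9]          # hypothetical scores
mean = sum(data) / len(data)
ss = sum((x - mean) ** 2 for x in data)  # sum of squares
s = (ss / (len(data) - 1)) ** 0.5        # square root of SS / (n - 1)
print(s, stdev(data))                    # the two agree
```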
Variance (s^2)
the SS divided by the degrees of freedom *in squared units
Null Hypothesis
the absence of evidence is not evidence of absence *it assumes that nothing is happening or nothing special is going on
What happens if you increase α?
the cutoff point moves to the left, decreasing β
Residuals
the distance between each data point and the mean *sum of all residuals = 0
Geometric Mean
the nth root of the product of the dataset *good for numbers that are not independent of each other (eg. percentages)
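Growth rates illustrate why: percentage changes multiply rather than add, so the geometric mean is the right average. The +10% then −10% sequence below is made up.

```python
from statistics import geometric_mean, fmean

factors = [1.10, 0.90]        # +10% growth, then -10% (hypothetical)
g = geometric_mean(factors)   # nth root of the product
a = fmean(factors)            # arithmetic mean, misleading here
print(g, a)  # g < 1 correctly shows a net loss; a ≈ 1.0 wrongly suggests no change
```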
The sampling distribution mean equals what?
the population μ
Proportion
the ratio relationship between a quantity and the total number of quantities in a given set
Degrees of Freedom
the sample size minus the number of parameters estimated from the data *(n-1)
Arithmetic mean
the sum of all the data points ∑ x divided by the number of data points n *good for independent events (eg. scores in a test)
Population
the total set of units (subjects) of interest in a study
Two Independent Samples
for two independent samples, the variance of the difference between them is the sum of the two individual variances
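A simulation sketch of the variance sum law; the population variances (1 and 4) are assumed values for illustration:

```python
import random
from statistics import variance

rng = random.Random(1)
reps = 20000
a = [rng.gauss(0, 1) for _ in range(reps)]   # var(A) = 1
b = [rng.gauss(0, 2) for _ in range(reps)]   # var(B) = 4, independent of A
diff = [x - y for x, y in zip(a, b)]

print(variance(diff))  # close to var(A) + var(B) = 5
```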
How would you test a hypothesis when you don't know σ?
use the sample variance (s^2) as an estimate of the population variance (σ^2)
How do you find the average of SS e
we divide SS E by its degrees of freedom, which relate to the number of people in each group (number of people per group − 1, times the number of groups)
Margin of Error Formula
z critical * σ/√n
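Plugging hypothetical numbers into the formula (95% confidence, so z critical = 1.96; σ = 15; n = 36; a made-up sample mean of 100):

```python
z_crit, sigma, n = 1.96, 15, 36       # assumed values for illustration
margin = z_crit * sigma / n ** 0.5    # z critical * sigma / sqrt(n)
x_bar = 100                           # hypothetical sample mean
ci = (x_bar - margin, x_bar + margin)
print(margin, ci)  # margin ≈ 4.9, so the interval is roughly (95.1, 104.9)
```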
Sample means can be converted to z-scores
z = (M − μ_M) / σ_M
The mean of the sampling distribution of the differences between two independent means is what?
μA − μB
Sampling Distribution Standard Deviation
σ/√n
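A simulation check that sample means really do have standard deviation σ/√n; the population σ = 2 and n = 25 below are invented:

```python
import random
from statistics import fmean, stdev

rng = random.Random(0)
sigma, n, reps = 2.0, 25, 5000
# draw many samples of size n and record each sample's mean
means = [fmean(rng.gauss(0, sigma) for _ in range(n)) for _ in range(reps)]

print(stdev(means), sigma / n ** 0.5)  # empirical SD of the means vs. sigma/sqrt(n) = 0.4
```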
Sum of Squares (SS)
∑(x_i − x̄)^2 *SS increases with sample size
Mean Squared Deviation
∑(x_i − x̄)^2 / n *dividing by n means it is no longer dependent on the sample size
How do you calculate the difference between residuals?
∑[(x_A − μ_A) − (x_B − μ_B)]^2