Stats/Psych Research Chapter 13, Stats/Psych Research Chapter 14, Stats/Psych Research Chapter 15, Stats/Psych Research Chapter 16
sampling distribution of F
gives all the possible F values along with the p(F) for each value, assuming sampling is random from the population
column variance estimate (MScolumns)
The estimate of the population variance based on the variability of the column means; MScolumns = SScolumns/dfcolumns. It is sensitive to the effect of the column variable.
A "mean square" is the same thing as a "variance estimate"
true
A 2x4 factorial experiment has 2 levels of one variable and 4 levels of the other variable
true
A factorial experiment is one in which the effect of two or more factors is assessed in one experiment
true
A significant interaction effect occurs when the effects of one of the variables is not the same at all levels of the other variable
true
As the degrees of freedom increase, the t distribution becomes more like the z distribution.
true
For a two-tailed test, if the absolute value of tobt is greater than or equal to tcrit, reject H0.
true
Generally speaking, the more confidence we have that the interval contains the population mean, the larger is the interval.
true
If N<30, to use the t test, the population of raw scores should be normally distributed.
true
In a correlated groups design, the reasonableness of the null hypothesis is usually tested by assuming the sample set of difference scores is a random sample from a population of difference scores where μD = 0.
true
In a repeated measures design the difference scores are used in the analysis, not the raw scores.
true
In a two-way ANOVA, we do three F tests
true
In an independent groups design, if the independent variable has a real effect, μ1 ≠ μ2.
true
In the independent groups design there is no matching of subjects, and each subject is tested only once.
true
In the two-way ANOVA, SStotal is partitioned into SSrows, SSwithin-cells, SScolumns, and SSinteraction
true
SSwithin for one-way ANOVA and SSwithin-cells for two-way ANOVA are conceptually similar.
true
Tabled F values are one-tailed probability levels.
true
The F distribution has no negative values.
true
The F distribution is a family of curves, each uniquely determined by df.
true
The mean of the t distribution equals 0 for all sample sizes.
true
The power of the t test increases with increases in the effect of the independent variable.
true
The t distribution is a family of curves that vary with the degrees of freedom
true
The t test for correlated groups is generally more powerful than the sign test.
true
The t test for correlated groups is just like the t test for single samples except that itanalyzes difference scores.
true
The t test for correlated groups uses both the magnitude and direction of the difference scores.
true
The t test for independent groups assumes homogeneity of variance.
true
The t test for independent groups is a robust test.
true
The total variability of a set of data can be completely partitioned into the between-groups variability and the within-groups variability.
true
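The partition above can be checked numerically; a minimal Python sketch with made-up scores for three groups:

```python
# Hypothetical data: for a one-way design, SStotal = SSbetween + SSwithin.
groups = [[3, 5, 4], [8, 7, 9], [6, 6, 5]]  # three independent groups
all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Total variability: squared deviations of every score from the grand mean
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
# Between-groups: squared deviations of group means from the grand mean, weighted by n
ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
# Within-groups: squared deviations of each score from its own group mean
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

assert abs(ss_total - (ss_between + ss_within)) < 1e-9
```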
Variance ratio is the statistic underlying the F distribution.
true
When df= infinity the t distribution is identical to the z distribution.
true
With regard to the assumption of homogeneity of variance, when using Levene's Test for Equality of Variances it is more useful to fail to reject H0.
true
When there are only two groups, tobt² = Fobt.
true
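The two-group identity tobt² = Fobt can be verified with a small sketch (pure Python, hypothetical data):

```python
from statistics import mean

# Hypothetical two independent groups
g1 = [4, 6, 5, 7]
g2 = [8, 9, 7, 10]
n1, n2 = len(g1), len(g2)
m1, m2 = mean(g1), mean(g2)

# Independent-groups t with pooled variance
ss1 = sum((x - m1) ** 2 for x in g1)
ss2 = sum((x - m2) ** 2 for x in g2)
s2_pooled = (ss1 + ss2) / (n1 + n2 - 2)
t_obt = (m1 - m2) / (s2_pooled * (1 / n1 + 1 / n2)) ** 0.5

# One-way ANOVA F for the same two groups
grand = mean(g1 + g2)
ss_between = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2
ms_between = ss_between / (2 - 1)       # k - 1 = 1 df between
ms_within = (ss1 + ss2) / (n1 + n2 - 2)
f_obt = ms_between / ms_within

assert abs(t_obt ** 2 - f_obt) < 1e-9   # t squared equals F
```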
A significant Fobt value indicates that all the group means are significantly different from each other.
false
Homogeneity of variance means that μ1 = μ2.
false
If Fobt is significant, it is possible to tell which groups differ from which without further analysis.
false
In a two-way ANOVA, there are three possible main effects and one interaction
false
In general tcrit is smaller than zcrit at the same alpha level.
false
In the one-way ANOVA, it is assumed that the within-groups variance estimate (MSwithin) is sensitive to the effect of the independent variable.
false
It is not possible to have a significant interaction effect unless one of the variables also has a significant main effect
false
Planned comparisons are the same thing as post hoc comparisons
false
The assumptions underlying two-way ANOVA and one-way ANOVA are different.
false
The degrees of freedom equal N-2 for the t test for single samples.
false
The sampling distribution of t is different for single samples than for correlated groups.
false
The t test is more sensitive than the z test.
false
The within-cells variance estimate measures treatment effects
false
When several tests are appropriate for analyzing data, it is best to use the least powerful test so as to minimize the probability of making a Type I error.
false
tcrit must always be positive
false
a posteriori comparisons
A comparison that a researcher decides to make after the data have been collected and studied. This is usually done because the results have suggested a new way to approach the data
simple randomized-group design
A design in which subjects are randomly assigned to the different groups, each group receiving a different condition or level of the independent variable.
mean of the sampling distribution of the difference between sample means
The mean of the sampling distribution of the difference between sample means equals the difference between the population means: μX̄1−X̄2 = μ1 − μ2.
critical value of r
Consult the table of critical values of r for v = (n − 2) degrees of freedom, where n = number of paired observations. For example, with n = 28, v = 28 − 2 = 26, and the critical value is 0.374 at the α = 0.05 significance level.
Fcrit
The critical value of F; if Fobt ≥ Fcrit, H0 is rejected.
Qobt
The obtained value of the Studentized range statistic Q, computed in post hoc tests such as the Tukey HSD test; if Qobt ≥ Qcrit, the difference between the two means being compared is significant.
estimated standard error of the difference between sample means
An estimate of the standard deviation of the sampling distribution of the difference between sample means, computed from sample data. Assuming homogeneity of variance, it uses the pooled variance: sX̄1−X̄2 = √[s²pooled(1/n1 + 1/n2)].
interaction degrees of freedom (dfinteraction)
For an interaction between factors, the degrees of freedom is the product of the degrees of freedom for the corresponding main effects.
row degrees of freedom (dfrows)
dfrows = r − 1, where r is the number of rows (levels of the row variable). For example, with r = 2, dfrows = 2 − 1 = 1.
t test for correlated groups
Inference test using Student's t statistic. Employed with correlated groups, replicated measures, and repeated measures designs.
student's t test for single samples
Inference test using Student's t statistic. Employed with single sample design.
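As a quick sketch of the single-sample statistic, tobt = (X̄ − μ)/(s/√N) with df = N − 1 (hypothetical scores, pure Python):

```python
from statistics import mean, stdev

# Hypothetical sample; H0: mu = 100
sample = [102, 98, 105, 110, 97, 104]
mu_null = 100

n = len(sample)
# tobt = (sample mean - null-hypothesis mean) / estimated standard error
t_obt = (mean(sample) - mu_null) / (stdev(sample) / n ** 0.5)
df = n - 1  # degrees of freedom for the single-sample t test
```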
Interaction variance estimate (MSinteraction)
Interaction effects represent the combined effects of factors on the dependent measure. When an interaction effect is present, the impact of one factor depends on the level of the other factor. Part of the power of ANOVA is the ability to estimate and test interaction effects.
within-groups variance estimate (MSwithin)
The estimate of the population variance based on the variability within each group; MSwithin = SSwithin/dfwithin. It is not sensitive to the effect of the independent variable.
size of effect
Magnitude of the real effect of the independent variable on the dependent variable.
null-hypothesis approach
Null hypothesis testing is a formal approach to deciding whether a statistical relationship in a sample reflects a real relationship in the population or is just due to chance. If the sample result would be unlikely if the null hypothesis were true, it is rejected in favour of the alternative hypothesis.
omega squared (ω̂²)
Omega squared is an estimate of the proportion of dependent-variable variance accounted for by the independent variable in the population, for a fixed-effects model. The between-subjects, fixed-effects form of the formula is ω̂² = [SSeffect − (dfeffect)(MSerror)] / (SStotal + MSerror).
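A minimal numeric sketch of the omega-squared formula, using hypothetical ANOVA summary values:

```python
# Hypothetical one-way ANOVA summary values (made-up numbers)
ss_effect = 18.0   # SSbetween
ss_total = 48.0
df_effect = 2
ms_error = 1.5     # MSwithin (error mean square)

# omega-hat squared = (SSeffect - dfeffect * MSerror) / (SStotal + MSerror)
omega_sq = (ss_effect - df_effect * ms_error) / (ss_total + ms_error)
```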
one-way analysis of variance, independent groups design
One-Way ANOVA ("analysis of variance") compares the means of two or more independent groups in order to determine whether there is statistical evidence that the associated population means are significantly different. One-Way ANOVA is a parametric test, also known as between-subjects ANOVA.
Qcrit
The critical value of the Studentized range statistic Q, obtained from tables using k (the number of groups), the within-groups degrees of freedom, and α; if Qobt ≥ Qcrit, H0 is rejected.
within-cells sum of squares (SSwithin-cells)
The sum of squares within the cells; conceptually the same as SSwithin in one-way ANOVA. It is the sum, over all cells, of the squared deviations of each score from its cell mean (equivalently, Σ(ncell − 1)s²cell).
between-groups variance estimate (MSbetween)
The estimate of the population variance based on the variability between the sample means; MSbetween = SSbetween/dfbetween. It is sensitive to the effect of the independent variable.
column sum of squares (SScolumns)
The sum of squares for the column variable: the sum, over columns, of ncol(X̄col − X̄grand)², measuring the variability of the column means about the grand mean.
between-groups sum of squares (SSbetween)
The sum of the squared deviations of each group mean from the grand mean, weighted by group size: SSbetween = Σni(X̄i − X̄grand)². Dividing by the between-groups degrees of freedom (k − 1) gives MSbetween.
sampling distribution of the difference between sample means
The distribution of the differences between the means of pairs of samples drawn from two independent populations; it allows the difference between the population means to be evaluated from the difference between the sample means.
standard error of the difference between sample means
The standard deviation of the sampling distribution of the difference between sample means: σX̄1−X̄2 = √(σ1²/n1 + σ2²/n2). When computed from sample data it is called the estimated standard error of the difference.
column degrees of freedom (dfcolumns)
dfcolumns = c − 1, where c is the number of columns (levels of the column variable).
standard deviation of the sampling distribution of the difference between sample means
σX̄1−X̄2 = √(σ1²/n1 + σ2²/n2); another name for the standard error of the difference between sample means.
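A one-line numeric sketch of the unpooled standard error of the difference, using hypothetical summary statistics:

```python
# Hypothetical summary statistics for two independent samples
s1_sq, n1 = 4.0, 25   # sample 1 variance and size
s2_sq, n2 = 6.0, 30   # sample 2 variance and size

# Estimated standard error of the difference between sample means
# (unpooled form; with homogeneity of variance a pooled variance is used instead)
se_diff = (s1_sq / n1 + s2_sq / n2) ** 0.5
```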
row sum of squares (SSrows)
The sum of squares for the row variable: the sum, over rows, of nrow(X̄row − X̄grand)², measuring the variability of the row means about the grand mean.
f test
The test used to statistically evaluate the differences between the group means in ANOVA
interaction sum of squares (SSinteraction)
Obtained by taking the between-cells sum of squares (computed from the squared deviations of each cell mean from the grand mean) and removing the main-effect sums of squares: SSinteraction = SSbetween-cells − SSrows − SScolumns.
within-cells degrees of freedom (dfwithin-cells)
dfwithin-cells = N − rc: the total number of scores minus the number of cells.
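The degrees-of-freedom bookkeeping for a two-way design can be collected in one small helper (a sketch; the function name is made up):

```python
def two_way_df(r, c, N):
    """Degrees of freedom for an r x c factorial ANOVA with N total scores."""
    df_rows = r - 1
    df_columns = c - 1
    df_interaction = df_rows * df_columns  # product of the main-effect dfs
    df_within_cells = N - r * c            # total scores minus number of cells
    df_total = N - 1                       # equals the sum of the other four
    return df_rows, df_columns, df_interaction, df_within_cells, df_total

# e.g. a 2 x 4 design with 5 subjects per cell (N = 40) -> (1, 3, 3, 32, 39)
```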
analysis of variance
A statistical technique that partitions the total variability of the data into between-groups and within-groups components and uses the F test to determine whether group means differ beyond what would be expected from chance alone.
cohen's d
a measure of effect size that assesses the difference between two means in terms of standard deviation, not standard error
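A minimal sketch of Cohen's d from hypothetical summary statistics (note the denominator is the pooled standard deviation, not the standard error):

```python
# Hypothetical group means and pooled standard deviation
m1, m2 = 11.2, 14.0
sd_pooled = 3.5   # pooled SD, not standard error

# Cohen's d: difference between means in standard-deviation units
d = abs(m1 - m2) / sd_pooled   # about 0.8, "large" by Cohen's guidelines
```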
Scheffé test
a conservative post hoc test that adjusts significance levels to control the familywise Type I error rate across all possible comparisons among group means
sampling distribution of t
a probability distribution of the t values that would occur if all possible different samples of a fixed size N were drawn from the null-hypothesis population. It gives (1) all the possible different t values for samples of size N and (2) the probability of getting each value if sampling is random from the null-hypothesis population.
confidence interval
a range of values that probably contains the population value
t test for independent groups
a statistic that relates differences between treatment means to the amount of variability expected between any two samples of data from the same population; used to analyze the results of a two group experiment with independent groups of subjects
tukey HSD test
a widely used post hoc test that determines the differences between means in terms of standard error; the HSD is compared to a critical value; sometimes called the q test
single factor experiment, independent groups design
an experiment in which one factor (independent variable) is varied across two or more levels, with subjects randomly assigned so that each group receives a different level
independent groups design
an experimental design in which different groups of participants are exposed to different levels of the independent variable, such that each participant experiences only one level of the independent variable
Eta squared (η²)
a measure of effect size for ANOVA; the proportion of the total variability of the dependent variable accounted for by the independent variable: η² = SSbetween/SStotal
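Eta squared is just a ratio of sums of squares; a one-line sketch with hypothetical values:

```python
# Hypothetical one-way ANOVA sums of squares
ss_between = 18.0
ss_total = 48.0

# Proportion of total variability accounted for by the independent variable
eta_sq = ss_between / ss_total
```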
two-way analysis of variance
analysis of variance for a two-way factorial research design
confidence-interval approach
an alternative to the null-hypothesis approach in which a confidence interval for the population value is constructed; if the interval does not contain the value specified by H0, H0 is rejected
a priori comparisons
means comparisons in which specific differences between means, as predicted by the research hypothesis, are analyzed
planned comparisons
means comparisons in which specific differences between means, as predicted by the research hypothesis, are analyzed
degrees of freedom
N − 1 (for the single-sample t test)
interaction effect
occurs when the effect of one factor is not the same at all levels of the other factor
Factorial experiment
one in which the effects of two or more factors or independent variables are assessed in one experiment
homogeneity of variance
one of the conditions that should be in effect in order to perform parametric inferential tests such as a t test or ANOVA; refers to the fact that variability among all the conditions of a study ought to be similar
post hoc comparisons
statistical comparisons made between group means after finding a significant F ratio
row variance estimate (MSrows)
The estimate of the population variance based on the variability of the row means; MSrows = SSrows/dfrows. It is sensitive to the effect of the row variable.
within-groups sum of squares (SSwithin)
the sum, over all groups, of the squared deviations of each score from its own group mean
within-cells variance estimate (MSwithin-cells)
the estimate of the population variance based on the variability within each cell; MSwithin-cells = SSwithin-cells/dfwithin-cells. It is not sensitive to the effects of the independent variables.
main effect
the effect of factor A (averaged over the levels of factor B) and the effect of factor B (averaged over the levels of factor A)
degrees of freedom
the number of scores that are free to vary in calculating that statistic
total variability (SStotal)
the sum of the squares of the deviations of all the observations, yi, from their mean
confidence limits
the values that bound the confidence interval
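As a sketch of how confidence limits are computed, here is a large-sample 95% interval using the z approximation (with small N, tcrit would replace z; all numbers are hypothetical):

```python
from statistics import NormalDist

# Hypothetical large-sample summary statistics
mean_x, s, n = 50.0, 10.0, 100

z = NormalDist().inv_cdf(0.975)      # two-tailed 95% critical value, ~1.96
margin = z * s / n ** 0.5            # critical value times standard error
lower = mean_x - margin              # lower confidence limit
upper = mean_x + margin              # upper confidence limit
```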
critical value of t
value of t used to determine whether a null hypothesis is rejected or not
mean of the population of difference scores
μD, the mean of the population of difference scores. Under the null hypothesis in the correlated groups t test, μD = 0, and the mean of the sample difference scores (D̄) estimates it.