Exam 2 Review (EDP 371)

Type I error rate accumulated from all the individual tests in an experiment

Experiment-Wise Alpha Level

Which of the following best describes the Pearson correlation for these data? X = 2, 5, 3, 4 and Y = 5, 1, 4, 2 (paired in order: (2,5), (5,1), (3,4), (4,2))

It is negative.

A researcher is interested in whether students have a preference for being taught statistics by a woman or a man. From a sample of 100 respondents, it was found that 30% of respondents preferred a male teacher. Test the researcher's hypothesis using an α level of 0.05.

Observed: Male = 30, Female = 70 (Total = 100). Expected: Male = 50, Female = 50 (Total = 100). Components of χ²: Male = (30 − 50)²/50 = 8, Female = (70 − 50)²/50 = 8, so χ² = 16 versus a critical χ² of 3.84 (df = 1, α = .05); reject H0 and conclude that respondents show a preference.

Weighted average variance between two sample variances

Pooled Variance

Degrees of freedom for a repeated measures or matched samples design

n − 1, where n is the number of pairs (or difference scores)

Tests conducted after a significant ANOVA to determine where the difference lies

Post hoc test

When testing for homogeneity of variance (F-max test), what would a result of "fail to reject H0" mean regarding the variances of each group?

the variances are assumed equal; the homogeneity of variance assumption is met

What does s^2(sub p) estimate and what makes it a "pooled" estimate?

The pooled sample variance, s²p, estimates the common population variance σ². It is "pooled" because the two sample variances are combined (weighted by their degrees of freedom) into a single estimate of σ².
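Illustrative sketch (Python, with made-up sample sizes and variances, not course code) of how the pooled variance weights the two sample variances by their degrees of freedom:

# Hypothetical summary statistics for two independent samples.
n1, s2_1 = 8, 6.0     # sample 1: size and variance
n2, s2_2 = 8, 10.0    # sample 2: size and variance

df1, df2 = n1 - 1, n2 - 1

# Weighted average of the two sample variances (weights = degrees of freedom),
# giving a single estimate of the common population variance sigma^2.
s2_pooled = (df1 * s2_1 + df2 * s2_2) / (df1 + df2)
print(s2_pooled)      # 8.0: exactly halfway between 6 and 10 when the n's are equal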

T or F: The observed frequencies for a chi-square test are always whole numbers (no fractions or decimals).

True

T or F: The statistical technique for finding the best-fitting straight line for a set of data is called regression, and the resulting straight line is called the regression line.

True

Independent-measures research design

A research design that uses a separate group of participants for each treatment condition (or each population) is called an independent-measures research design or between subject research design.

Independent-measures t-statistic

A t statistic computed from the data of two separate samples, used to test whether the corresponding population means differ significantly

Designed to evaluate the mean differences from research studies producing two or more sample means

ANOVA

ANOVA stands for...

Analysis of Variance

Power

Correct decision (null is false, reject the null); probability = 1 − β

Mean Square Between divided by Mean Square Within

F-statistic

Factor

In ANOVA, the variable that designates the groups being compared is called a factor. An example is: the treatment group

Type II error

Incorrect decision (null is false, fail to reject the null); probability = β

Unrelated groups

also called unpaired groups or independent groups, are groups in which the cases (e.g., participants) in each group are different

Which of these correlations is more likely to be found to be statistically significant (if all were based on the same sample size)? a 0.4 b 0.1 c -0.7

c

You are interested in assessing whether there is a relationship between hours spent studying and GPA. What is your null hypothesis? a) µstudy - µGPA = 0 b) ρstudy = ρGPA c) ρ = 0

c

What is a Mean Square?

estimated variance

What is the MSB estimating?

variance between the groups' sample means: how much the groups' sample means vary from each other

An analysis of variance comparing three treatment conditions produces dfwithin = 21. If the samples are all the same size, how many individuals are in each sample?

8

When to use which of the group-comparison test statistics that we've covered

-independent samples t: two separate groups -one-sample t: one sample compared to a standard value -related samples t: repeated measures (the same sample measured before and after treatment) or matched samples (pairs matched on a relevant variable)

When and how is simple linear regression used?

Used when we are interested in more than the direction and magnitude of the correlation between two variables: we want to predict one variable from the other. Both variables should be interval or ratio (scale). Y is the dependent (outcome) variable being predicted and X is the independent (predictor) variable.

3 assumptions for the Independent Samples t-test:

1. Assumption of Independence 2. Assumption of Normality 3. Assumption of Homogeneity of Variance

Process to compute the Chi-Square test statistic:

1. Find the difference between fo and fe for each category. 2. Square the difference. 3. Divide the squared difference by fe. 4. Add the resulting values across all of the categories.
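A short Python sketch of these four steps, using the observed/expected frequencies from the teacher-preference example earlier in this set (the code itself is only an illustration):

observed = [30, 70]   # f_o for each category (male, female)
expected = [50, 50]   # f_e specified by the null hypothesis

# Steps 1-4: difference, square it, divide by f_e, then sum over categories.
chi_square = sum((fo - fe) ** 2 / fe for fo, fe in zip(observed, expected))
print(chi_square)     # 16.0, compared against the critical chi-square (3.84 for df = 1, alpha = .05)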

use of related-sample t statistics

1. State hypotheses about μD, the population mean difference: H0: μD = 0 (or ≤/≥ 0 for a one-tailed test); H1 is the complementary statement. 2. Find the difference score D for each pair; D̄ is the mean of the D's. 3. Select the critical t using the number of tails and df = n − 1. 4. Compute the test statistic: t = (D̄ − μD)/s(sub D̄), where s(sub D̄) is the estimated standard error of the mean difference. 5. Reject or fail to reject H0 and report t(df) = value, p < or > α.
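A hedged Python sketch of these steps; the before/after scores and variable names are hypothetical, not from the course:

import math

before = [10, 12, 9, 15, 11]    # hypothetical pre-treatment scores
after  = [12, 14, 10, 18, 12]   # hypothetical post-treatment scores

# Step 2: difference scores D and their mean (D-bar).
D = [a - b for a, b in zip(after, before)]
n = len(D)
d_bar = sum(D) / n

# Estimated standard error of D-bar from the sample variance of the D's.
s2_D = sum((d - d_bar) ** 2 for d in D) / (n - 1)
se_d_bar = math.sqrt(s2_D / n)

# Step 4: t statistic with df = n - 1 (mu_D = 0 under H0).
t = (d_bar - 0) / se_d_bar
print(t, n - 1)                 # compare against the critical t, then reject or fail to reject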

Be able to run a F-max test to check if the homogeneity of variance assumption is met

1. State hypotheses: H0: σ²1 = σ²2; H1: σ²1 ≠ σ²2. 2. Find the critical value using df = n − 1 (for one sample) and k = the number of groups. 3. Compute F-max = s²(largest)/s²(smallest). 4. Decision: IF THE SAMPLE F-MAX VALUE IS GREATER THAN THE CRITICAL VALUE, REJECT H0 and conclude the variances are NOT homogeneous (σ²1 ≠ σ²2); otherwise fail to reject H0 (the homogeneity assumption is met and the hypothesis test can proceed).
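A minimal Python sketch of the F-max computation, assuming hypothetical sample variances for three equal-sized groups:

# Hypothetical sample variances, n = 10 scores in each group.
variances = [12.0, 20.0, 30.0]
n = 10

# F-max = largest sample variance / smallest sample variance.
f_max = max(variances) / min(variances)

# Look up the critical value in an F-max table using k = number of groups and df = n - 1.
k, df = len(variances), n - 1
print(f_max, k, df)   # reject H0 (variances not homogeneous) only if f_max exceeds the table value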

A research report concludes that there are significant differences among treatments, with "F(2, 27) = 8.62, p < .01." If the same number of participants was used in all of the treatment conditions, then how many individuals were in each treatment?

10

Parametric test

A parametric statistical test is one that makes assumptions about population parameters or the shape of the population distribution, such as the assumption that scores come from a normal distribution.

A hypothesis test produces a t statistic of t = 2.20. If the researcher is using a two-tailed test with α = .05, how large does the sample have to be in order to reject the null hypothesis?

At least n=13

If your samples' F-ratio is statistically significant, what inference will you make?

At least one of the groups differs significantly from the others

If the sample mean difference is 3 points, which of the following sets of data would produce the largest value for Cohen's d?​ ​n = 20 for both samples and a pooled variance of 15 ​Cohen's d is the same for all three of the samples. ​n = 30 for both samples and a pooled variance of 15 ​n = 10 for both samples and a pooled variance of 15

Cohen's d is the same for all three of the samples.

T or F: ​Two separate samples, each with n = 10 scores, will produce an independent-measures t statistic with df = 19.

False

True or False: Repeated-measures designs are particularly well-suited to research questions concerning the difference between two distinct populations (for example, males versus females).

False

When should ANOVA be used?

For comparing two or more independent samples on an interval- or ratio-scaled variable

When is χ 2 goodness-of-fit used?

For what kind of variable(s) (scale)? A single nominal (or ordinal) variable; the data are the frequencies of cases in each category.

When do we use Pearson's correlation coefficient?

For what kind of variables (scale)? Both variables are interval or ratio. For what kind of relationship between variables? Linear. What kinds of relationships can produce a correlation? A third variable affecting both X and Y with no direct relationship between them, a chain of causal relationships (X -> M -> Y), or one variable causing the other (X -> Y or Y -> X). Correlations can be positive, negative, or near zero, and correlation does not imply causation.

Type I error

Incorrect decision (null is true, reject the null); probability = α

If an analysis of variance is used for the following data, what would be the effect of changing the value of SS2 to 100? ​ Sample Data M1 = 15 M2 = 25 SS1 = 90 SS2 = 70 ​Decrease SSwithin and increase the size of the F-ratio. ​Increase SSwithin and decrease the size of the F-ratio. ​Increase SSwithin and increase the size of the F-ratio. ​Decrease SSwithin and decrease the size of the F-ratio.

Increase SSwithin and decrease the size of the F-ratio.

In ANOVA, what does MSW provide you with an estimate of?

Individual-differences variance, σ²

How is MSB impacted by a treatment effect?

MSB is composed of an estimate of the same variance that MSW estimates plus a function of the treatment effect, so MSB is larger when there is a treatment effect.

Test if the distribution for the observed data differs or not from another stated distribution.

No-Difference Chi-Squared Test

Test if the observed data show a preference for at least one of the categories of the nominal variable.

No-Preference Chi-Squared Test

The distance between the Y value in the data and the Y value predicted from the regression equation is known as the residual. What is the value for the sum of the squared residuals? ​SSresidual = r2(SSX) ​SSresidual = (1 - r2)(SSX) ​SSresidual = r2(SSY) ​SSresidual = (1 - r2)(SSY)

SSresidual = (1 - r2)(SSY)

Difference between a statistic and a parameter due to random, unsystematic factors

Sampling Error

What happens to the critical value for a chi-square test if the size of the sample is increased?

The critical value depends on the number of categories, not the sample size.

Margin of Error

The margin of error is the range of values below and above the sample statistic in a confidence interval.

What represents k in an ANOVA

The number of groups

With α = .05 and df = 8, the critical values for a two-tailed t test are t = ±2.306. Assuming all other factors are held constant, if the df value were increased to df = 20, what would happen to the critical values for t?

They would decrease (move closer to zero).

T or F: The value r2 is called the coefficient of determination because it measures the proportion of variability in one variable that can be determined from the relationship with the other variable.

True

T or F: Tukey's HSD test allows one to compute a single value that determines the minimum difference between treatment means that is necessary for significance.

True

The smaller the sample size, the less likely you will find that at least one of your three groups differs significantly from the others.

True

What is meant by "% confidence"?

X% of our interval estimates will contain mu.

A researcher measures IQ and weight for a group of college students. What kind of correlation is likely to be obtained for these two variables?

a correlation near zero

grand mean is

the sum of all scores divided by N (the total number of scores across all groups)

SSB (usually)

SSB = Σ n(x̄ − GM)², summed over the groups, where n is each group's sample size, x̄ is each group's sample mean, and GM is the grand mean of all scores.

In general, what is the effect of an increase in the variance for the sample of difference scores?

an increase in the standard error and a decrease in the value of t

Pairwise comparisons

any process of comparing entities in pairs to judge which of each entity is preferred or whether or not the two entities are identical

What is the MSW estimating?

the average variance of scores within each group; an estimate of individual-differences variance, σ²

Calculation and interpretation of slope and intercept

b (slope) = SP/SSX; a (Y-intercept) = Ȳ − bX̄. The slope is the predicted change in Y for a one-unit increase in X; the intercept is the predicted Y when X = 0.
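A minimal Python sketch of these formulas on hypothetical (X, Y) pairs; the data are made up for illustration:

X = [1, 2, 3, 4, 5]
Y = [2, 3, 5, 4, 6]

n = len(X)
x_bar = sum(X) / n
y_bar = sum(Y) / n

SP  = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y))   # sum of products
SSx = sum((x - x_bar) ** 2 for x in X)                       # sum of squares for X

b = SP / SSx            # slope: predicted change in Y per 1-unit increase in X
a = y_bar - b * x_bar   # Y-intercept: predicted Y when X = 0
print(b, a)             # regression equation: Y' = bX + a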

What is meant by Xbar1 − Xbar2 is an unbiased estimate of µ1 − µ 2?

On average, the sample mean difference X̄1 − X̄2 equals the population mean difference μ1 − μ2; it does not systematically overestimate or underestimate it.

SSB for Scheffe's

When comparing two groups: SSB = (T1²/n + T2²/n) − (T1 + T2)²/(2n), where T1 and T2 are the two group totals and n is the per-group sample size.

ANOVA logic/theory

Compares the means of two or more groups by comparing two estimates of variance: the variability between the group means (MSB) and the variability of scores within the groups (MSW).

One sample of n = 8 scores has a variance of s2 = 6 and a second sample of n = 8 scores has s2 = 10.If the pooled variance is computed for these two samples, then the value obtained will be ____.

exactly half way between 6 and 10

What assumption do we make that allows us to claim that because σ²(X̄1 − X̄2) = σ²1/n1 + σ²2/n2, we know that σ²(X̄1 − X̄2) = σ²(1/n1 + 1/n2)?

Homogeneity of variances: we assume σ²1 = σ²2 = σ², so the common variance can be factored out.

How does % confidence impact the width of the confidence interval?

increase % confidence, increase width

How does sample size impact the width of the confidence interval?

increase n, decrease width

How does σ 2 impact the width of the confidence interval?

increase sigma^2, increase width

Why not use multiple independent samples t-tests?

Multiple t-tests inflate the Type I error rate for the set of comparisons, so the experiment-wise alpha level can exceed the per-test alpha level that was set; with an ANOVA followed by post-hoc tests, the experiment-wise alpha stays at the level that was set.

Assuming that there is a 5-point difference between the two sample means, which set of sample characteristics is most likely to produce a significant value for the independent-measures t statistic?​ ​small sample sizes and large sample variances ​small sample sizes and small sample variances ​large sample sizes and small sample variances ​large sample sizes and large sample variances

large sample sizes and small variances

The less the variability between groups' sample means, the ______ likely that there's a treatment effect.

less

A college professor reports that students who finish exams early tend to get better grades than students who hold on to exams until the last possible moment. The correlation between exam score and amount of time spent on the exam is an example of a ____.

negative correlation

Calculation and interpretation of the residual error variance and standard error of estimate and their relationship to Pearson's correlation

Residual error variance: s²(Y.X) = SSerror/df = Σ(Y − Y′)²/(n − 2); it measures how far the observed scores fall from the predicted scores. Standard error of estimate: s(Y.X) = √(SSerror/(n − 2)). The stronger the correlation (|r| closer to 1), the smaller both values are.
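A small Python sketch of both quantities, using hypothetical observed Y values and predicted Y' values (not course data):

import math

Y      = [4.0, 5.0, 7.0, 8.0, 6.0]   # observed scores
Y_pred = [4.5, 5.2, 6.4, 7.8, 6.1]   # Y' predicted by a regression line

n = len(Y)
SS_error = sum((y - yp) ** 2 for y, yp in zip(Y, Y_pred))   # sum of squared residuals

s2_yx = SS_error / (n - 2)   # residual error variance, df = n - 2
s_yx  = math.sqrt(s2_yx)     # standard error of estimate
print(s2_yx, s_yx)           # both get smaller as |r| gets closer to 1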

Which of the following is a basic assumption for a chi-square hypothesis test?​ ​The observations must be independent. ​The sample size is less than 30. ​The population distribution(s) must be normal. ​The scores must come from an interval or ratio scale.

the observations must be independent

What is meant by the value of Cohen's d? What variables affect it?

The practical significance (effect size): the difference between the means expressed in units of the pooled standard deviation, sp. It is affected by the size of the mean difference and by the pooled standard deviation.
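A one-line computation sketch in Python, using the sample means and pooled variance from a worked card later in this set (M1 = 35, M2 = 31, pooled variance = 25):

import math

M1, M2 = 35, 31      # sample means
s2_pooled = 25       # pooled variance

# Cohen's d: the mean difference expressed in pooled-standard-deviation units.
d = (M1 - M2) / math.sqrt(s2_pooled)
print(d)             # 0.8, a "large" effect by the usual benchmarks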

What is evaluated by the chi-square test for goodness of fit? the mean differences between two or more treatments ​the relationship between two variables ​the shape or proportions for a population distribution ​the significance of a regression estimate

the shape or proportions for a population distribution

What is meant by "individual differences variability"

the variability of the scores within a sample

A chi-square test for goodness of fit has df = 2. How many categories were used to classify the individuals in the sample?

three

A researcher obtains t = 2.35 for a repeated-measures study using a sample of n = 8 participants. Based on this t value, what is the correct decision for a two-tailed test?

​Fail to reject the null hypothesis with either α = .05 or α = .01.

Which of the following describes the function of a confidence interval? ​It uses a population mean to predict a sample mean. ​It uses the sample mean to determine a level of confidence. ​It uses a sample mean to estimate the corresponding population mean. ​It uses a level of confidence to estimate a sample mean.

​It uses a sample mean to estimate the corresponding population mean.

If other factors are held constant, what is the effect of increasing the sample size?

​It will decrease the estimated standard error and increase the likelihood of rejecting H0.

In an analysis of variance, which value is determined by the size of the sample variances for each treatment condition?​ ​dfbetween SSbetween​ ​SSwithin ​dfwithin

​SSwithin

Two samples, each with n = 6 subjects, produce a pooled variance of 20.Based on this information, what is the estimated standard error for the sample mean difference?

​The square root of (20/6 + 20/6)

Which of the following is the standard error of estimate for a linear regression equation?​ ​the square root of (SSregression/(n - 2)) ​the square root of (SSresidual/(n - 3)) ​the square root of (SSregression/(n - 3)) ​the square root of (SSresidual/(n - 2)

​the square root of (SSresidual/(n - 2))

The percentage of variance accounted for by the treatment effect is usually known as _____ in published reports of ANOVA results.​ µ2​ ​σ2 ​α2 ​η2

​η2

A set of X and Y scores has a regression equation with a slope of b = 4. If the mean for the X values is MX = 2 and the mean for the Y values is MY = 6, what is the Y-intercept value for the regression equation?

-2

How is MSB affected by individual differences variability?

MSB is affected by individual-differences variability: it reflects the same individual-differences variance that appears within the groups, plus any treatment effect. With no treatment effect, MSB estimates individual-differences variance alone.

benefits of repeated measures

-More powerful than an independent-samples t test: by controlling for individual differences, we can reduce the "error" in our test. -Useful when investigating a rare phenomenon (like a disease) where it is hard to find two large samples. -Good for investigating change across time.

On average, what value is expected for the t statistic when the null hypothesis is true? 0​ ​1.96 ​1 ​t > 1.96

0

For the linear equation Y = 2X - 3, which of the following points will not be on the line?

(0, 3)

A set of n = 25 pairs of scores (X and Y values) has a Pearson correlation of r = 0.80. How much of the variance for the Y scores is predicted by the relationship with X?

0.64 or 64%

An independent-measures study has one sample with n = 10 and a second sample with n = 15 to compare two experimental treatments.What is the df value for the t statistic for this study?

23

An analysis of variances produces dftotal = 29 and dfwithin = 27. For this analysis, how many treatment conditions are being compared?

3

Interval estimate

Interval estimation is the use of sample data to calculate an interval of possible (or probable) values of an unknown population parameter, in contrast to point estimation, which is a single number.

If there is a treatment effect, which Mean Square is expected to be larger, MSB or MSW?

MSB

In an analysis of variance, which value is determined by the size of the sample mean differences? ​dfwithin ​dfbetween ​SSwithin SSbetween​

SSbetween​

A post-hoc test that uses the F-ratio to test significance and is generally more conservative

Scheffe's

Conservative test

The more difficult it is to find a difference, the more conservative is the test

T or F: If the 90% confidence interval for µ is from 40 to 50, then the sample mean is M = 45.

True

What do we use Confidence Intervals for?

We use a statistic to estimate the corresponding parameter. Point estimate: the sample mean provides a point estimate of the population mean. Interval estimate: provides more information than a point estimate because it incorporates sampling error.

How is MSW affected by individual differences variability?

MSW is calculated directly from the variability of scores within each group, so it is an estimate of individual-differences variability.

​An independent-measures study produces sample means of M1 = 35 and M2 = 31 and a pooled variance of 25.For this study, Cohen's d = ____.

d = (35 − 31)/√25 = 4/5 = 0.80

the number of scores in a sample that are independent and free to vary. Because the sample mean places a restriction on the value of one score in the sample, there are n - 1 degrees of freedom for a sample with n scores

degrees of freedom

Degrees of freedom for Chi-Square Goodness-of-Fit test

df = C-1 where C is the number (#) of categories

The difference between scores for a participant at different time points. Typically, obtained by subtracting the first score (before treatment) from the second score (after treatment) for each person or matched sample.

difference scores

Order effect

differences in research participants' responses that result from the order (e.g., first, second, third) in which the experimental materials are presented to them. For example, participants may become more familiar with the testing materials (practice) or become fatigued over time.

The assumption of homoscedasticity means that the variance of the predicted Ys can be assumed equal for each possible X value.

false

The term observed frequencies refers to the frequencies ____. that are hypothesized for the population being examined ​found in the sample data ​found in the population being examined ​computed from the null hypothesis

found in the sample data

S(subY.X)

is the standard error of estimate

How is MSW impacted by a treatment effect?

A treatment effect does not change MSW; it still estimates only individual-differences (within-group) variance, so when a treatment effect exists MSW will tend to be smaller than MSB.

Purpose of effect size

measure practical significance

If other factors are held constant, which set of sample characteristics is most likely to produce a significant t statistic? ​n = 25 with s2 = 400 n = 25 with s2 = 100​ ​n = 100 with s2 = 100 ​n = 100 with s2 = 400

n = 100 with s2 = 100

A researcher reports t(12) = 2.86, p < .05 for a repeated-measures research study. How many individuals participated in the study?

n=13

Way to properly report results of a t-statistic test

t(df) = <test statistic>, p < or > α, d = <Cohen's d value>

What is meant by prediction error?

the difference between the observed value of Y and the value of Y predicted by the regression equation: the residual, Y − Y′

Why might we need to do pairwise comparisons and when?

To determine which specific groups differ from which other groups, by comparing all possible pairs of groups from the ANOVA. Pairwise comparisons are run AFTER the ANOVA has indicated that AT LEAST ONE group differs from the others.

A matched-subjects study and an independent-measures study both produced a t statistic with df = 16. How many individuals participated in each study?

​34 for matched-subjects and 18 for independent-measures

What is meant by µ1 − µ 2 and by Xbar1 − Xbar2 ?

μ1 − μ2 is the difference between the two population means; X̄1 − X̄2 is the difference between the two sample means, which serves as an estimate of μ1 − μ2.

repeated measures

when the same sample is used twice

How is the F-ratio impacted by a treatment effect?

will tend to be larger than 1

Each individual in one sample is matched with an individual in the other sample. The matching is done so that the two variables are equivalent (or nearly equivalent) with respect to a specific variable that the researcher would like to control.

Matched-subjects study

How do we interpret the value of Pearson's correlation?

Sign: positive or negative correlation. Magnitude: close to 1 is strong, close to 0 is weak. Limits: correlation does not imply causation.

benefits of match-design studies

-More powerful than an independent-samples t test

Effect size in ANOVA

Eta-squared

T or F: The within-treatments variance provides a measure of the variability inside each treatment condition.

True

Liberal test

The easier it is to find a difference, the more liberal is the test

T or F: A regression equation has a slope of b = 3, if MX = 3 and MY = 10. The Y-intercept for the equation is 1.

True

T or F: ​One characteristic of the chi-square tests is that they can be used when the data are measured on a nominal scale.

True

For an independent-measures experiment comparing two treatment conditions with a sample of n = 10 in each treatment, what are the correct df values for the F-ratio?

1, 18

How do we calculate confidence intervals?

1. Choose the statistic: use a t statistic when we don't know σ, a z statistic when we know σ. 2. Calculate the standard error where necessary: σ(sub X̄) = σ/√n or s(sub X̄) = s/√n. 3. Find the appropriate critical z or critical t value. 4. Build the interval: X̄ ± (critical value)(standard error).
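A minimal Python sketch of these steps for a single mean with σ unknown; the sample summary and the critical t below are illustrative values (the critical t would normally come from a t table for df = n − 1):

import math

n, x_bar, s = 25, 40.0, 10.0    # hypothetical sample size, mean, standard deviation
t_crit = 2.064                  # critical t for df = 24, 95% confidence (table value)

se = s / math.sqrt(n)           # estimated standard error of the mean
lower = x_bar - t_crit * se
upper = x_bar + t_crit * se
print(lower, upper)             # 95% CI for mu: about 35.87 to 44.13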

T or F: A research report states "t(15) = 2.31, p < .05." For this study, the sample had n = 16 scores.

True

The results of a hypothesis test are reported as follows: t(15) = 2.70, p < .05. Based on this report, how many individuals were in the sample?

16

An independent-measures study uses two samples, each with n = 10, to compare two treatment conditions. What is the df value for the t statistic for this study?

18

A set of X and Y scores has SSX = 20, SSY = 10, and SP = 40. What is the slope for the regression equation?

2.0

Primary advantage of a repeated measures design

Reduces or eliminates problems caused by individual differences

The dependent variable is measured two or more times for each individual in a single sample. The same group of subjects is used in all treatment conditions.

Repeated-measures design / within-subject design

A measure of effect size when only the sample standard deviation is known.

estimated Cohen's d

The null and alternative hypotheses for independent sample t-test:

H0 : µ1 = µ2 or H0: µ1 − µ2 = 0 and H1: µ1 ≠ µ2 or H1: µ1 − µ2 ≠ 0

Two populations being compared have the same variance under the assumption of the Additive Treatment Effect

Homogeneity of Variance

Confidence Level

The percentage of ALL confidence intervals that would contain the true population parameter.

For a two-tailed hypothesis test evaluating a Pearson correlation, what is stated by the null hypothesis?

The population correlation is zero.

What distinguishes Tukey's HSD from Scheffé's test and how to conduct each of the tests?

Tukey's HSD: less conservative than Scheffé's; requires equal sample sizes; HSD = q·√(MSW/n), where q comes from the Studentized range table (using k and dfW) and n is the sample size of one group. Compare HSD to the difference between each pair of sample means; any pair whose difference is larger than HSD is statistically different.
Scheffé's: more conservative; uses an F-ratio with the same Fcv as the overall ANOVA; F = MSB/MSW, where the new SSB comes from only the two groups being compared: SSB = (T1²/n + T2²/n) − (T1 + T2)²/(2n), with T1 and T2 the two group totals. Compare each pairwise F to Fcv; if F is greater than Fcv, that pair is statistically different.
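A hedged Python sketch of the Tukey HSD side only, with hypothetical ANOVA results; the q value is an approximate table entry for k = 3 and dfW = 27, and the group means are made up:

import math

MS_within = 8.0
n = 10              # per-group sample size (equal n required for HSD)
q = 3.51            # approximate Studentized range value for k = 3, dfW = 27, alpha = .05

HSD = q * math.sqrt(MS_within / n)

# Any pair of sample means whose absolute difference exceeds HSD is significantly different.
means = {"A": 10.0, "B": 13.5, "C": 11.0}
for g1 in means:
    for g2 in means:
        if g1 < g2:
            diff = abs(means[g1] - means[g2])
            print(g1, g2, round(diff, 2), diff > HSD)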

A repeated-measures design has the maximum advantage over an independent-measures design when ____. very few subjects are available and individual differences are large ​many subjects are available and individual differences are small ​very few subjects are available and individual differences are small ​many subjects are available and individual differences are large

very few subjects are available and individual differences are large

Amount of Cohen's d that represents a small effect size

~0.2

Amount of Cohen's d that represents a medium effect size

~0.5

Amount of Cohen's d that represents a large effect size

~0.8

For the linear equation Y = 2X + 4, if X increases by 1 point, how much will Y increase?

​2 points

Four hypothesis testing steps and relevant parts of the fourths step

1. Formulate hypotheses (H0 and H1) in English and using parameters. INDEPENDENT SAMPLES: H0: μ1 = μ2 (or μ1 − μ2 = 0); H1: μ1 ≠ μ2 (or μ1 − μ2 ≠ 0). RELATED SAMPLES: hypotheses about μD (μD = 0, or ≤/≥ 0 for a one-tailed test).
2. Select the critical test statistic (critical t).
3. Calculate the sample test statistic relevant to the design, knowing when to use each of the following: a) One-sample t: t = (X̄ − μ)/s(sub X̄) (standard error); one group compared to a standard value. b) Independent-samples t: t = ((X̄1 − X̄2) − (μ1 − μ2))/s(sub X̄1 − X̄2) (standard error of the mean difference); two separate groups. c) Related-samples t: t = (D̄ − μD)/s(sub D̄); one group compared to itself after a treatment (or matched pairs).
4. State conclusions (3 parts), including each of the following: a) the decision about H0 (reject or fail to reject); b) in English (including "significant" or "not significant"); c) in statistical terms (test statistic with df where necessary, p-value, and an estimate of Cohen's d where appropriate), e.g., t(30) = 4.23, p < 0.05, d = 1.58.
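A runnable Python sketch of step 3b (the independent-samples t) plus Cohen's d, on two hypothetical groups of raw scores; the names and data are illustrative only:

import math

g1 = [12, 15, 11, 14, 13, 16]   # hypothetical group 1 scores
g2 = [10, 9, 12, 8, 11, 10]     # hypothetical group 2 scores

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

n1, n2 = len(g1), len(g2)
df = n1 + n2 - 2

# Pooled variance and estimated standard error of the mean difference.
s2_p = (ss(g1) + ss(g2)) / df
se_diff = math.sqrt(s2_p / n1 + s2_p / n2)

# t statistic (H0: mu1 - mu2 = 0) and Cohen's d.
t = (mean(g1) - mean(g2)) / se_diff
d = (mean(g1) - mean(g2)) / math.sqrt(s2_p)
print(t, df, d)   # report as t(df) = ..., p < or > alpha, d = ...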

How is χ 2 used?

The four hypothesis-testing steps: 1. H0: no category is preferred over the others (no-preference) OR the distribution does not differ from the stated distribution (no-difference); H1: at least one category is preferred OR the distribution differs. 2. df = C − 1 (number of categories minus 1). 3. χ² = Σ (fo − fe)²/fe. 4. Reject or fail to reject; report as χ²(df, n = ...) = ..., p < or > α. Expected frequencies are calculated as the sample size times the percentage specified for each category. The no-preference scenario is used when the null specifies equal expected frequencies for every category; the no-difference scenario is used when the null specifies different expected frequencies for each category (e.g., from a known population distribution).

The assumptions of simple linear regression (and Pearson's correlation)

Linearity: the relationship between X and Y is linear. Homoscedasticity versus heteroscedasticity: homoscedasticity means each distribution of observed Y's has equal variance for every X value; heteroscedasticity means the variances differ across X values. Normality of conditional distributions: for each X value, the population distribution of observed Y's is normally distributed.

ANOVA how to conduct omnibus group comparison hypothesis testing with two or more groups:

Statement of H0 and H1: H0: μ1 = μ2 = ... = μk (e.g., μDiet = μTraining = μDiet&Training); H1: at least one population mean is different from the others.
What explains variability in sample means? Within-groups variability reflects individual differences (the variability of scores within each sample); between-groups variability reflects individual differences plus a possible treatment effect. When there is no treatment effect, the variability between is about the same as the variability within; when there is a treatment effect, the variability between exceeds the variability within.
Selection of the critical F-ratio (characteristics of the F distribution): the F-ratio compares the mean square between to the mean square within. The distribution is right-skewed (values pile up on the left of the graph), never negative, and peaks near 1 because the null assumes no treatment effect, so MSB ≈ MSW. There is a different F distribution for each pair of df, and because the MS values are only estimates, Type I errors are still possible.
Fill in the cells of the ANOVA source table to obtain the F-ratio (calculate SS, df, and MS for Between and Within groups, then F = MSB/MSW).
The homogeneity of variance assumption: the variances of all groups are the same. Other assumptions: normally distributed outcome variable, independent observations.
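A compact Python sketch of filling in the ANOVA source table for hypothetical raw scores in k = 3 groups (data invented for illustration):

groups = [
    [4, 5, 6, 5],
    [7, 8, 6, 7],
    [10, 9, 11, 10],
]

N = sum(len(g) for g in groups)
k = len(groups)
grand_mean = sum(sum(g) for g in groups) / N

# SS between groups: sum of n * (group mean - grand mean)^2.
SS_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# SS within groups: sum of squared deviations of scores from their own group mean.
SS_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between, df_within = k - 1, N - k
MS_between = SS_between / df_between
MS_within = SS_within / df_within

F = MS_between / MS_within
eta_squared = SS_between / (SS_between + SS_within)   # effect size (eta^2)
print(F, df_between, df_within, eta_squared)          # compare F to the critical F for (df_between, df_within)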

How to calculate Pearson's correlation

r = SP/√((SSX)(SSY)) = Σ(x − x̄)(y − ȳ) / √(Σ(x − x̄)² · Σ(y − ȳ)²)
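A short Python sketch of this formula, applied to the small X/Y data set from the card near the top of this review (only for illustration):

import math

X = [2, 5, 3, 4]
Y = [5, 1, 4, 2]

n = len(X)
x_bar, y_bar = sum(X) / n, sum(Y) / n

SP  = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y))
SSx = sum((x - x_bar) ** 2 for x in X)
SSy = sum((y - y_bar) ** 2 for y in Y)

r = SP / math.sqrt(SSx * SSy)
print(r)   # strongly negative, matching the "It is negative" answer for those data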

What is the value of SP for the following set of data? X = 1, 2, 9 and Y = 4, 4, 1 (paired in order: (1,4), (2,4), (9,1))

-15

What are the two fundamental assumptions: what do they mean, why do we make them?

-An additive treatment effect: the treatment adds the same constant to each control score to create the treatment distribution, which implies homogeneous variances (even though μ1 may not equal μ2, σ1 = σ2 and thus σ²1 = σ²2). -H0 is true: μ1 = μ2 (equivalently μ1 − μ2 = 0), i.e., there is no difference between the treatment and control means. These assumptions let us pool the two sample variances and define the sampling distribution of X̄1 − X̄2 under the null.

How to conduct the four steps of hypothesis testing (one- and two-tailed) for the correlation, testing it against a value of zero.

1. H0: ρ = 0; H1: ρ ≠ 0. 2. Critical value: df = n − 2, where n is the number of pairs of scores; testing whether there is any relationship gives a two-tailed test, while predicting that the relationship is positive or negative gives a one-tailed test. 3. r = SP/√((SSX)(SSY)). 4. Reject or fail to reject; report significance as r = ..., n = ..., p < or > 0.05.

On average, what value is expected for the F-ratio if the null hypothesis is true? N - k ​k - 1 ​0 ​1.00

1.00

A sample of n = 25 scores has a mean of M = 40 and a variance of s2 = 100. What is the estimated standard error for the sample mean?

2

A sample of n = 4 scores has SS = 48. What is the estimated standard error for the sample mean?

2

A researcher conducts a hypothesis test using a sample of n = 40 from an unknown population. What is the df value for the t statistic?

39

The following table shows the results of an analysis of variance comparing four treatment conditions with a sample of n = 5 participants in each treatment. Note that several values are missing in the table. What is the missing value for the F-ratio?
Source    SS   df   MS
Between   30   xx   xx   F = xx
Within    xx   xx   xx
Total     62   xx

5

The following table shows the results of an analysis of variance comparing three treatment conditions with a sample of n = 10 participants in each treatment. Note that several values are missing in the table. What is the missing value for SStotal?
Source    SS   df   MS
Between   20   xx   xx   F = xx
Within    xx   xx   2
Total     xx   xx

74

Nonparametric test

A non-parametric statistical test is a test whose model does NOT specify distributional conditions about the parameters of the population from which the sample was drawn

Hypothesis stating there is at least one mean difference in the population

Alternative Hypothesis of ANOVA

Disadvantages of a repeated measures design

As the same participant is used multiple times, disadvantages include: outside factors that change over time, participants' health or mood change, participants gain experience, order effect

1. Observations between each treatment must be independent 2. The population distribution of difference scores (D values) must be normal.

Assumptions of related-samples t test

df for all features

CONFIDENCE INTERVALS / CRITICAL t: df = n − 1
ONE-SAMPLE t: df = n − 1
INDEPENDENT SAMPLES (TWO GROUPS): df = n1 + n2 − 2
F-MAX: df = n − 1 (for just one of the samples)
RELATED SAMPLES: df = n − 1
ANOVA: dfW = N − k (e.g., 4 groups of 10 gives N = 40), dfB = k − 1
CORRELATION COEFFICIENT: df = n − 2
REGRESSION: df = n − 2

Uses sample data to test hypotheses about the shape or proportions of a population distribution given a nominal variable. The test determines how well the obtained sample proportions fit the population proportions specified by the null hypothesis.

Chi-square test for goodness of fit

An interval, or range of values centered around a sample statistic. The logic behind a confidence interval is that a sample statistic, such as a sample mean, should be relatively near to the corresponding population parameter.

Confidence interval

an estimate of the real standard error σ_M when the value of σ is unknown. It is computed from the sample variance or sample standard deviation and provides an estimate of the standard distance between a sample mean M and the population mean μ

Estimated standard error (sM)

The frequency value that is predicted from the proportions in the null hypothesis and the sample size (n). The expected frequencies define an ideal, hypothetical sample distribution that would be obtained if the sample proportions were in perfect agreement with the proportions specified by the null hypothesis.

Expected frequency

A statistical test to evaluate if the variances from two populations are the same

F-Max Test

T or F: A negative correlation means that decreases in the X variable tend to be accompanied by decreases in the Y variable.

False

T or F: Effect size for analysis of variance is measured by η2 which equals SSbetween divided by SSwithin.

False

T or F: For a fixed level of significance, the critical value for chi-square increases as the size of the sample increases.

False

T or F: For a repeated-measures design, df = n1 + n2 - 2 for the t statistic.

False

T or F: For an ANOVA comparing three treatment conditions, rejecting the null hypothesis is equivalent to concluding that all three treatment means are different.

False

T or F: If an analysis of variance produces SSbetween = 20 and SSwithin = 40, then η2 = 0.50.

False

T or F: The line produced by the equation Y = 4X - 5 crosses the vertical axis at Y = 5.

False

​T or F: If an independent-measures t statistic has df = 20, then there were a total of 18 individuals participating in the research study.

False

Counterbalancing

In order to deal with order effect, participants are randomly divided into two groups, with one group receiving treatment 1 followed by treatment 2, and the other group receiving treatment 2 followed by treatment 1

Point estimate

In simple terms, any statistic can be a point estimate. A statistic is an estimator of some parameter in a population. Example of point estimates are sample means, sample standard deviation (SD), and sample variance.

What is the logic underlying and process undertaken to conduct two group comparison hypothesis testing?

In real life it is unusual to know a population mean, so we usually have to estimate both the treatment-group mean μT and the control-group mean μC from sample means, and then ask whether the difference between the sample means is larger than would be expected from sampling error alone.

Statistic representing the variance between the sample means

Mean Square Between

Statistic representing the average variance of the scores within the groups

Mean Square Within

Why should we use ANOVA instead of doing multiple ttests?

Multiple t-tests will result in an inflated experiment-wise type I error rate

The number of individuals from the sample who are classified in a particular category. Each individual is counted in one and only one category.

Observed frequency

The average difference expected between two sample means just by chance; the standard deviation of the sampling distribution of differences between means

Sample Standard Error of the Difference

Level

The individual conditions or values that make up a factor are called the levels of the factor. An example is: diet, training, and diet & training

Meaning of Cohen's d

The number of standard deviations (SDs) represented by the difference in sample means

What is meant by the p-value?

The probability of getting your test statistic value or more extreme if the null hypothesis is true

A researcher obtains a negative value for a chi-square statistic. What can you conclude because the value is negative? ​There are large differences between the observed and expected frequencies. ​The researcher made a mistake; the value of the chi-square cannot be negative. ​The observed frequencies are consistently larger than the expected frequencies. ​The expected frequencies are consistently larger than the observed frequencies.

The researcher made a mistake; the value of the chi-square cannot be negative.

What value is estimated with a confidence interval using the t statistic?

The value for an unknown population mean

Which of the following accurately describes the observed frequencies for a chi-square test? They are always whole numbers. ​They are always the same value. ​They can contain both positive and negative values. ​They can contain fractions or decimal values.

They are always whole numbers.

Which of the following accurately describes the expected frequencies for a chi-square test?​ They can contain fractions or decimal values. ​They are positive whole numbers. ​They are always whole numbers. They can contain both positive and negative values.​

They can contain fractions or decimal values.

T or F: A large value for the chi-square statistic indicates a large discrepancy between the sample data and the hypothesis.

True

T or F: A repeated-measures study and an independent-measures study both produce t statistics with df = 20. The independent-measures study used more participants.

True

T or F: A sample of n = 4 scores with SS = 48 has a variance of 16 and an estimated standard error of 2.

True

T or F: Although the original data for a repeated-measures study consist of two scores for each participant, the calculation of the mean and variance are done with only one score for each participant.

True

T or F: An ANOVA is used to determine whether a significant difference exists between any of the treatments, and post tests are used to determine exactly which treatment means are significantly different.

True

T or F: An F-ratio near 1.00 is an indication that the null hypothesis is likely to be true.

True

T or F: For a chi-square test, the expected frequencies are calculated values that are intended to produce a sample that is representative of the null hypothesis.

True

T or F: For a hypothesis test using a t statistic, the boundaries for the critical region will change if the sample size is changed.

True

T or F: For a one tailed test evaluating a treatment that is supposed to decrease scores, a researcher obtains t(8) = -1.90. For α = .05, the correct decision is to reject the null hypothesis.

True

T or F: For a one-tailed test with α = .05 and a sample of n = 9, the critical value for the t statistic is t = 1.860.

True

T or F: For a repeated-measures study, a small variance for the difference scores indicates that the treatment effect is consistent across participants.

True

T or F: If a hypothesis test using a sample of n = 16 scores produces a t statistic of t = 2.15, then the correct decision is to reject the null hypothesis for a two-tailed test with a = .05.

True

T or F: If all of the participants in a repeated-measures study show roughly the same 10-point difference between treatments, then the data are likely to produce a significant value for the t statistic.

True

T or F: If other factors are held constant, as the sample size increases, the estimated standard error decreases.

True

T or F: If other factors are held constant, the bigger the sample is, the greater the likelihood of rejecting the null hypothesis.

True

T or F: If you measured hearing acuity and age for a group of people who are 50 to 90 years old, you should obtain a negative correlation between the two variables.

True

T or F: In general, a post hoc test enables you to go back through the data and compare the individual treatments two at a time, a process known as making pairwise comparisons.

True

T or F: It is possible for a regression equation to have none of the actual (observed) data points located on the regression line.

True

T or F: One advantage of a repeated-measures design is that it typically requires fewer participants than an independent-measures design.

True

T or F: SSbetween measures the size of the mean differences from one treatment to another.

True

T or F: The Pearson correlation measures the degree and the direction of the linear relationship between two variables.

True

T or F: The larger the differences among the sample means, the larger the numerator of the F-ratio will be.

True

T or F: The value of SSError measures the total squared distance between the actual Y values and the Y values predicted by the regression equation.

True

T or F: Two samples are selected from a population and a treatment is administered to the samples. If both samples have the same mean and the same variance, you are more likely to find a significant treatment effect with a sample of n = 100 than with a sample of n = 4.

True

T or F: ​One method for correcting the bias in the standard error is to "pool" the two sample variances using a procedure that allows the bigger sample to carry more weight in determining the final value of the variance.

True

A test that computes a minimum difference to test significance

Tukey's HSD Test

SP

SP = Σ(x − x̄)(y − ȳ), summed over every pair of scores

How does σ(sub X̄) (the standard error) impact the width of a confidence interval?

increase standard error, increase width

How to design matched-pair studies. In other words ensure that you match the pairs on a variable(s) that is related to the outcome of interest (the dependent variable).

Pair participants whose scores on the matching variable(s) are similar, and choose matching variable(s) that are related to the dependent variable.

A researcher reports an independent-measures t statistic with df = 30.If the two samples are the same size (n1 = n2), then how many individuals are in each sample?

n=16

Coefficient of determination: how to compute it and what it means.

the proportion of variability in Y that can be explained by Y's relationship with X; computed as r²

A researcher plans to conduct a research study comparing two treatment conditions with a total of 20 scores in each treatment. Which of the following designs would require only 20 participants? between-measures design ​independent-measures design ​repeated-measures design ​matched-subjects design

repeated-measures design

matched-pairs design

The researcher uses two samples that are not independent: each participant in one sample is deliberately chosen (matched) to be equivalent to a participant in the other sample on a relevant variable.

The complete set of t values computed for every possible random sample for a specific sample size (n) or a specific degrees of freedom (df). The t distribution approaches the shape of a normal distribution as sample size increases.

t distribution

The values in the sample must consist of independent observations, and the population sampled must be normal.

t test assumptions

used to test hypotheses about an unknown population mean, μ, when the value of σ is unknown. The formula for the t statistic has the same structure as the z-score formula, except that the t statistic uses the estimated standard error in the denominator

t-statistic

What does it mean if your F-ratio is negative?

that you have a calculation error.

If the true correlation between two variables, X and Y, is positive, then the true slope of the regression line predicting Y using X must also be positive.

true

The goodness of fit χ 2 test is a test to determine whether there is an association between two nominal variables

false (that statement describes the χ² test of independence; the goodness-of-fit test involves only a single nominal variable)

The stronger the correlation between a pair of variables, the smaller will be the standard error of estimate

true

When there's no treatment effect, MSB and MSW both provide estimates of individual differences variability, σ 2 .

true

Which of the following sets of correlations is correctly ordered from the highest to the lowest degree of relationship? +0.83, +0.10, −0.03, −0.91 / +0.83, +0.10, −0.91, −0.03 / −0.91, +0.83, −0.03, +0.10 / −0.91, +0.83, +0.10, −0.03

−0.91, +0.83, +0.10, −0.03

Which set of sample characteristics is most likely to produce a significant value for the independent-measures t statistic? A small mean difference and small sample variances ​A large mean difference and small sample variances ​A small mean difference and large sample variances ​A large mean difference and large sample variances

​A large mean difference and small sample variances

If an analysis of variance is used for the following data, what would be the effect of changing the value of M1 to 20? ​ Sample Data M1 = 15 M2 = 25 SS1 = 90 SS2 = 70 ​Increase SSbetween and increase the size of the F-ratio. ​Decrease SSbetween and increase the size of the F-ratio. ​Increase SSbetween and decrease the size of the F-ratio. ​Decrease SSbetween and decrease the size of the F-ratio.

​Decrease SSbetween and decrease the size of the F-ratio.

If an analysis of variance is used for the following data, what would be the effect of changing the value of SS1 to 50? ​ Sample Data M1 = 10 M2 = 15 SS1 = 90 SS2 = 70​ ​Decrease SSwithin and increase the size of the F-ratio. Increase SSwithin and increase the size of the F-ratio.​ ​Increase SSwithin and decrease the size of the F-ratio. ​Decrease SSwithin and decrease the size of the F-ratio.

​Decrease SSwithin and increase the size of the F-ratio.

​For the independent-measures t statistic, what is the effect of increasing the sample variances?

​Decrease the likelihood of rejecting H0 and decrease measures of effect size.

What is indicated by a positive value for a correlation?

​Increases in X tend to be accompanied by increases in Y.

When n is small (less than 30), how does the shape of the t distribution compare to the normal distribution?

​It is flatter and more spread out than the normal distribution.

A sample has a mean of M = 39.5 and a standard deviation of s = 4.3, and produces a t statistic of t = 2.14. For a two-tailed hypothesis test with α = .05, what is the correct statistical decision for this sample?

​It is impossible to make a decision about H0 without more information.

Which of the following accurately describes the chi-square test for goodness of fit?​ ​It is similar to an independent-measures t test because it uses separate samples to evaluate the difference between separate populations ​It is similar to a correlation because it uses one sample to evaluate the relationship between two variables. ​It is similar to a single-sample t test because it uses one sample to test a hypothesis about one population. ​It is similar to both a correlation and an independent-measures t test because it can be used to evaluate a relationship between variables or a difference between populations

​It is similar to a single-sample t test because it uses one sample to test a hypothesis about one population.

One sample has n = 10 scores and a variance of s2 = 20, and a second sample has n = 15 scores and a variance of s2 = 30.What can you conclude about the pooled variance for these two samples?

​It will be closer to 30 than to 20.

What is indicated by a large variance for a sample of difference scores? a consistent treatment effect and a high likelihood of a significant difference​ ​a consistent treatment effect and a low likelihood of a significant difference ​an inconsistent treatment effect and a low likelihood of a significant difference ​an inconsistent treatment effect and a high likelihood of a significant difference

​an inconsistent treatment effect and a low likelihood of a significant difference

For which of the following situations would a repeated-measures research design be appropriate? ​comparing verbal solving skills for science majors versus art majors at a college ​comparing self-esteem for students who participate in school athletics versus those who do not ​comparing pain tolerance with and without acupuncture needles ​comparing mathematical skills for girls versus boys at age 10

​comparing pain tolerance with and without acupuncture needles

In general, what factors are most likely to reject the null hypothesis for an ANOVA?​ ​large mean differences and small variances ​small mean differences and large variances ​large mean differences and large variances small mean differences and small variances​

​large mean differences and small variances

Which of the following samples will have the smallest value for the estimated standard error? ​n = 100 with s2 = 400 ​n = 25 with s2 = 100 ​n = 25 with s2 = 400 ​n = 100 with s2 = 100

​n = 100 with s2 = 100

What would the scatterplot show for data that produce a correlation of +0.88? ​points clustered close to a line that slopes down to the right ​points widely scattered around a line that slopes down to the right ​points widely scattered around a line that slopes up to the right ​points clustered close to a line that slopes up to the right

​points clustered close to a line that slopes up to the right

Which of the following describes a typical distribution of F-ratios? ​symmetrical with a mean of zero ​negatively skewed with all values greater than or equal to zero ​symmetrical with a mean equal to dfbetween ​positively skewed with all values greater than or equal to zero

​positively skewed with all values greater than or equal to zero

​Assuming that SSY is constant, which of the following correlations would have the largest SSerror? ​r = -0.10 ​There is no relationship between the correlation and SSresidual. ​r = -0.70 ​r = +0.40

​r = -0.10

How does the difference between fe and fo influence the outcome of a chi-square test? ​the larger the difference, the smaller the value of chi-square, and the greater the likelihood of rejecting the null hypothesis ​the larger the difference, the larger the value of chi-square, and the greater the likelihood of rejecting the null hypothesis ​the larger the difference, the larger the value of chi-square, and the lower the likelihood of rejecting the null hypothesis ​the larger the difference, the smaller the value of chi-square, and the lower the likelihood of rejecting the null hypothesis

​the larger the difference, the larger the value of chi-square, and the greater the likelihood of rejecting the null hypothesis

A sample of n = 4 scores is selected from a population with an unknown mean. The sample has a mean of M = 40 and a variance of s2 = 16. Which equation correctly describes the 90% confidence interval for μ?

​μ = 40 ± 2.353(2)

Which of the following is the correct null hypothesis for an independent samples t test?

​μ1 - μ2 = 0

