Exam 4 Study Guide (Chapters 12-15)


How many F ratios are obtained for a two-factor ANOVA?

Three: one for Factor A, one for Factor B, and one for the A x B interaction. Each F-ratio is independent of the others.

Final Exam: What does the chi-square test for independence measure? In what two research situations is it used?

It can be used/interpreted in two ways: (1) testing hypotheses about the relationship between two variables in a population, and (2) testing hypotheses about differences between proportions for two or more populations.

Final Exam: How are degrees of freedom computed for the chi-square test for independence?

df = (C - 1)(R - 1), where C = total number of columns and R = total number of rows.

Final Exam: How are degrees of freedom computed for a chi-square test for goodness-of-fit?

df = C - 1, where C = the number of categories.

Define cell.

In the matrix of a two-factor design, each cell represents one individual treatment condition (one combination of a level of factor A and a level of factor B).

What are three advantages of a factorial design?

1. Economy: the design provides more information for the same amount of work. 2. Experimental control: a potential extraneous variable can be included as a factor in the design, which removes an added source of variability. 3. Interactions: a factorial design is the only way we can investigate interactions among independent variables (the effect of an independent variable rarely occurs in isolation; many variables operate simultaneously in the "real world," and this design allows us to investigate more realistic situations).

If the null hypothesis is true, what is the expected value of F?

The F-ratio is expected to be close to 1.00.

Final Exam: How is expected frequency computed for each cell in the chi-square test for independence?

fe = (row total)(column total) / n, where n is the total number of participants in the study.
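
A minimal Python sketch (not from the textbook; SciPy and the observed counts below are hypothetical, used only for illustration) showing how each cell's expected frequency comes from its row and column totals, cross-checked against SciPy's chi-square test for independence:

```python
# Sketch: expected frequencies for a chi-square test for independence.
# The observed counts are made up for illustration.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[20, 30, 50],    # row 1
                     [30, 20, 50]])   # row 2

n = observed.sum()                              # total participants
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
fe = row_totals * col_totals / n                # fe = (row total)(column total) / n

# Cross-check: chi2_contingency returns the same expected-frequency table,
# along with the test statistic and df = (R - 1)(C - 1).
chi2, p, df, expected = chi2_contingency(observed, correction=False)
print(fe)
print(expected)   # should match fe
print(df)         # (2 - 1)(3 - 1) = 2
```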

Final Exam: What types of data are used in chi-square tests?

Frequencies (rather than numerical scores, as in t tests and ANOVA); chi-square can be used with data from a nominal or ordinal scale.

State the null hypothesis for a single-factor (one-way) ANOVA.

H0: μ1 = μ2 = μ3. This states that, for the general population, there are no mean differences among the treatments being compared.

State the null hypothesis for a main effect in a two-factor ANOVA.

H0: μA1 = μA2 (main effect of factor A); H0: μB1 = μB2 (main effect of factor B)

State the alternative hypothesis for a single-factor (one-way) ANOVA.

H1: at least one mean is different from the others

What symbol is used to represent a Pearson product-moment correlation? AKA Pearson Correlation

Identified by the letter r.

If the results of a two-factor ANOVA are presented in a graph, what would indicate the presence of an interaction between variables?

Nonparallel lines (lines that cross or converge toward each other) indicate the presence of an interaction between the variables; parallel lines indicate no interaction.

How do we decide whether to reject or fail to reject the null hypothesis for an interaction in a two-way ANOVA?

If Fobt is greater than Fcv, reject the null hypothesis.

When is a correlation used, and when is a t test used?

Correlation: used when we are only interested in the relationship between the two variables. t test: used when we have an independent variable (or quasi-independent variable) and a dependent variable; it is appropriate for experiments with only one independent variable and only two levels (conditions) of that independent variable.

Define factor.

An independent variable. Ex: a single-factor (one-way) ANOVA has one independent variable; a two-factor ANOVA has two independent variables (two factors).

How is MS computed for each component of the two-factor ANOVA summary table?

MS = SS/df, computed separately for each component: Factor A, Factor B, the A x B interaction, and the error term (within).

In what research situation is it most appropriate to perform post hoc multiple comparisons (such as the Scheffe test or the Bonferroni test)?

The Scheffe test is an example of a post hoc test. Post hoc tests are conducted after an ANOVA in which H0 is rejected and there are more than two (i.e., three or more) treatment conditions. (If you fail to reject the null hypothesis, you do not need to run any post hoc tests.)

What are the components of an ANOVA summary table?

Source (between, within, total), SS, df, MS (mean square), F

What information is conveyed via the absolute value of a correlation coefficient?

The degree (AKA magnitude/strength) of the relationship: 1.00 = a perfect relationship, 0 = no relationship. Both of these extremes are unlikely in the "real world."

In what research situation is it most appropriate to perform a one-way ANOVA?

Used to determine whether there are any statistically significant differences between the means of THREE OR MORE independent (unrelated) groups

What symbol represents the number of treatment conditions in a one-way ANOVA?

k

How are dfbetween computed for a one-way ANOVA?

dfbetween = k - 1, where k = number of treatment groups/conditions.

What abbreviation is used to represent the number of subjects within a particular treatment condition?

n

How is eta squared (η²), the measure of effect size, computed for a one-way ANOVA?

η² = SSbetween / SStotal

In what research situation is it appropriate to perform a within-subjects ANOVA? AKA Repeated-measures ANOVA or ANOVA for correlated samples

Equivalent of a one-way ANOVA, but for related (not independent) groups

What three omnibus tests are conducted in a two-factor ANOVA?

The main effect of Factor A, the main effect of Factor B, and the A x B interaction.

How is eta squared (η²), the measure of effect size, computed for each main effect and the interaction from a two-way ANOVA?

Factor A: η² = SSA / SStotal; Factor B: η² = SSB / SStotal; A x B: η² = SSAxB / SStotal

What information is conveyed via the sign of a correlation coefficient?

The direction of the relationship. Positive: as one variable increases, the other variable also increases. Negative: as one variable increases, the other variable decreases.

What does the omnibus test measure?

The overall ANOVA that is conducted first to establish that a difference exists between the means (it does not indicate exactly which treatments are different).

How do we determine whether a correlation is statistically significant?

If robt is greater than rcv (the critical value from the table for the appropriate df), reject the null hypothesis; the correlation is statistically significant.
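
A minimal Python sketch (not part of the course materials; SciPy and the x/y values are hypothetical). Instead of looking up rcv in a table, scipy.stats.pearsonr returns a two-tailed p-value that can be compared directly with alpha, which leads to the same decision:

```python
# Sketch: testing whether a Pearson correlation is statistically significant.
from scipy.stats import pearsonr

x = [2, 4, 5, 7, 8, 10]
y = [1, 3, 4, 6, 9, 11]

r, p = pearsonr(x, y)        # r statistic and two-tailed p-value
alpha = 0.05
print(f"r = {r:.3f}, p = {p:.4f}")
if p < alpha:
    print("Reject H0: the correlation is statistically significant.")
else:
    print("Fail to reject H0.")
```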

Define interaction.

Measures the "extra" mean differences that exist after the main effects for factor A and factor B have been considered Occurs whenever the mean differences between individual treatment conditions, or cells, are different from what would be predicted from the overall main effects of the factors Often two factors will "interact" so that specific combinations of the two factors produce results (mean differences) that are not explained by the overall effects of either factor, in other words, THE EFFECYS OF ONE FACTOR DEPEND ON THE LEVELS OF ANOTHER FACTOR ------ these "extra" mean differences are the interaction

How are dfwithin computed for a one-way ANOVA?

dfwithin = N - k, where N = total number of subjects and k = number of treatment groups/conditions.

Final Exam: State the null and alternative hypotheses for the chi-square test for independence.

The null hypothesis states either that there is no relationship between the two variables (the two variables are independent) or that the proportions (across categories) are the same for all of the populations (no difference). The alternative hypothesis states that there is a relationship between the two variables (they are not independent), or that the proportions differ across the populations.

Final Exam: State the null and alternative hypotheses for the chi-square test for goodness-of-fit.

The null hypothesis specifies the proportion of the population that should be in each category. The proportions from the null hypothesis are used to compute expected frequencies that describe how the sample would appear if it were in perfect agreement with the null hypothesis; we want to know whether or not our observed data match our expected data. H0: the population proportions match those specified (the sample should look the way the expected frequencies describe). H1: the population distribution has a different shape (different proportions) than what is specified by H0.

What equation is used to compute the slope in a regression equation (to predict Y)?

b = SP / SSX = (ΣXY - (ΣX)(ΣY)/n) / (ΣX² - (ΣX)²/n)
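
A minimal Python sketch (NumPy and the X/Y values below are assumptions for illustration) that computes the slope with the sums formula above and checks it against NumPy's least-squares fit:

```python
# Sketch: slope b from the computational formula b = SP / SSx.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 3.0, 5.0, 4.0, 6.0])
n = len(X)

SP  = np.sum(X * Y) - np.sum(X) * np.sum(Y) / n   # sum of products
SSx = np.sum(X**2) - np.sum(X)**2 / n             # sum of squares for X
b = SP / SSx

b_check, a_check = np.polyfit(X, Y, deg=1)        # slope, intercept
print(b, b_check)                                 # the two slopes should agree
```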

How are df computed for each component of the two-factor ANOVA summary table (dfbetween, dfA, dfB, dfAxB, dfwithin, and dftotal)?

dfbetween = ab - 1 (a = number of levels of factor A, b = number of levels of factor B); dfA = a - 1; dfB = b - 1; dfAxB = (a - 1)(b - 1); dfwithin = N - ab; dftotal = N - 1
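
A minimal Python sketch (the helper name and the example values a = 2, b = 3, N = 60 are hypothetical) that does the df bookkeeping and confirms the components add up to dftotal:

```python
# Sketch: df bookkeeping for a two-factor ANOVA with a levels of A,
# b levels of B, and N total subjects.
def two_factor_df(a: int, b: int, N: int) -> dict:
    df = {
        "between": a * b - 1,
        "A": a - 1,
        "B": b - 1,
        "AxB": (a - 1) * (b - 1),
        "within": N - a * b,
        "total": N - 1,
    }
    # Sanity checks: the pieces must sum to the totals.
    assert df["A"] + df["B"] + df["AxB"] == df["between"]
    assert df["between"] + df["within"] == df["total"]
    return df

print(two_factor_df(a=2, b=3, N=60))
# {'between': 5, 'A': 1, 'B': 2, 'AxB': 2, 'within': 54, 'total': 59}
```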

Define level.

Number of groups (conditions or treatment conditions) within a factor (number of groups within a specific independent variable)

State the alternative hypothesis for an interaction in a two-factor ANOVA.

The effect of factor A does depend on the levels of factor B.

State the null hypothesis for an interaction in a two-factor ANOVA.

The effect of factor A does not depend on the levels of factor B.

Final Exam: What equation is used to compute chi-square statistics?

χ² = Σ [(fo - fe)² / fe], where fo = observed frequency and fe = expected frequency.
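
A minimal Python sketch (SciPy and the observed counts are assumptions for illustration) that applies this formula to a goodness-of-fit example with equal expected proportions and checks the result against scipy.stats.chisquare:

```python
# Sketch: chi-square goodness-of-fit, chi² = Σ (fo - fe)² / fe.
import numpy as np
from scipy.stats import chisquare

fo = np.array([18, 30, 24, 28])          # observed frequencies (hypothetical)
fe = np.full(4, fo.sum() / 4)            # expected under H0 of equal proportions

chi2_manual = np.sum((fo - fe) ** 2 / fe)
chi2_scipy, p = chisquare(f_obs=fo, f_exp=fe)

print(chi2_manual, chi2_scipy)           # the two statistics should match
print("df =", len(fo) - 1)               # C - 1 = 3 categories minus one
```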

What equation is used to compute the Pearson correlation?

r = SP / √(SSX · SSY) = (ΣXY - (ΣX)(ΣY)/n) / √[(ΣX² - (ΣX)²/n)(ΣY² - (ΣY)²/n)]. This compares the amount of covariability (variation from the relationship between X and Y) to the amount that X and Y vary separately.
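
A minimal Python sketch (NumPy/SciPy and the data values are assumptions) computing r with the sums formula above and checking it against scipy.stats.pearsonr:

```python
# Sketch: Pearson r = SP / sqrt(SSx * SSy), computational form.
import numpy as np
from scipy.stats import pearsonr

X = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
Y = np.array([1.0, 3.0, 4.0, 8.0, 9.0])
n = len(X)

SP  = np.sum(X * Y) - np.sum(X) * np.sum(Y) / n
SSx = np.sum(X**2) - np.sum(X)**2 / n
SSy = np.sum(Y**2) - np.sum(Y)**2 / n
r_manual = SP / np.sqrt(SSx * SSy)

r_scipy, _ = pearsonr(X, Y)
print(r_manual, r_scipy)   # should agree
```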

What equation is used to compute the intercept for a regression equation (to predict Y)?

a = Ȳ - bX̄, or equivalently a = MY - (b)(MX).

State the alternative hypothesis for a main effect in a two-factor ANOVA.

H1: μA1 ≠ μA2; H1: μB1 ≠ μB2. If a factor has more than two levels, the alternative hypothesis should be stated as "at least one of the means differs from the others."

What abbreviation is used to represent the total number of subjects used in an ANOVA?

N

In what research situation is it appropriate to perform a factorial ANOVA?

A factorial ANOVA has two or more independent variables that split the sample into four or more groups. (A one-way ANOVA has one independent variable that splits the sample into two or more groups.)

What are the possible values for a correlation?

A value between -1 and +1 (-1 and +1 indicate a perfect correlation; all points would lie along a straight line in that case).

What does ANOVA stand for?

Analysis of Variance. ANOVA is necessary to protect researchers from an excessive risk of a Type I error in situations where a study compares more than two population means. It allows the researcher to evaluate all of the mean differences in a single hypothesis test using a single alpha level, which keeps the risk of a Type I error constant no matter how many different means are being compared.

Final Exam: What symbol is used to represent chi-square?

χ² (the Greek letter chi, squared)

How are main effects and interactions related (or unrelated) to one another?

An interaction between two factors occurs whenever the mean differences between individual treatment conditions, or cells, are different from what would be predicted from the overall main effects of the factors.

Final Exam: What is the basic question that chi-square tests are designed to answer?

Chi-square is computed to measure the discrepancy between the ideal sample (the expected frequencies, fe) and the actual sample data (the observed frequencies, fo). A large discrepancy produces a large value for chi-square.

When is it appropriate to compute post hoc test (multiple comparisons)?

If we reject the null hypothesis after an overall ANOVA, we move on to multiple comparisons (i.e., post tests or post hoc tests). With more than two treatment groups, you must follow the ANOVA (the omnibus test) with additional tests to determine exactly which treatments are different and which are not; these are called post tests or post hoc tests. They are conducted after an ANOVA in which H0 is rejected and there are more than two treatment conditions, and they compare the treatments two at a time to test the significance of the mean differences.
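
A minimal Python sketch of one such approach (a Bonferroni-style correction with pairwise t tests; SciPy, the group names, and the scores are hypothetical, and this is not the Scheffe procedure named above):

```python
# Sketch: Bonferroni-style post hoc comparisons after a significant one-way
# ANOVA with three hypothetical groups. Each pair is tested with an
# independent-samples t test at alpha / (number of comparisons).
from itertools import combinations
from scipy.stats import ttest_ind

groups = {
    "placebo": [4, 5, 6, 5, 4, 6],
    "low_dose": [6, 7, 8, 7, 6, 8],
    "high_dose": [9, 8, 10, 9, 10, 8],
}

pairs = list(combinations(groups, 2))
alpha_adjusted = 0.05 / len(pairs)       # Bonferroni-corrected alpha per test

for g1, g2 in pairs:
    t, p = ttest_ind(groups[g1], groups[g2])
    verdict = "significant" if p < alpha_adjusted else "not significant"
    print(f"{g1} vs {g2}: t = {t:.2f}, p = {p:.4f} ({verdict})")
```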

What is the purpose of a correlational analysis? In what research situation is it appropriate to perform a correlational analysis?

To measure and describe the relationship between two variables. A relationship exists when changes in one variable tend to be accompanied by consistent and predictable changes in the other variable.

How are degrees of freedom computed for the Pearson correlation?

df = n - 2, where n = number of participants (the number of points on the scatterplot).

When is it necessary to use ANOVA rather than a t statistic?

ANOVA can be used in situations where TWO OR MORE MEANS are being compared, whereas t tests are limited to situations with only two means, and ANOVA keeps the alpha level constant at .05 no matter how many means are compared. ANOVA is used when an independent variable has more than two levels, usually three or more (e.g., alcohol vs. placebo vs. control), or when studying the effect of more than one independent variable at a time on a dependent variable (i.e., a factorial design). Although you can use ANOVA for two groups, a t test is simpler for just two groups. t tests are appropriate for experiments with only one independent variable and only two levels (conditions) of that independent variable.

State the null and alternative hypothesis for using the Pearson correlation in hypothesis testing (assume a non-directional/two-tailed hypothesis test).

H0: ρ = 0 (no relationship); H1: ρ ≠ 0 (there is a relationship)

What are the sources of variance for ANOVA? What are the sources of between-group variance? What are the sources of within-group variance?

The sources of variance for ANOVA are between-group variance and within-group variance. Between-group variance = variance explained by some component of error + variance due to the actual treatment effect (the numerator of the F-ratio); in a two-factor design it is further broken down into variance from factor A, variance from factor B, and interaction variance. Within-group variance = variance explained by sampling error, random chance, and individual differences (the denominator of the F-ratio); this is error variance, the unpredicted differences due to error.

Define main effect.

The effect of one independent variable on the dependent variable, ignoring the effects of any other independent variable; in general, there is one main effect for each factor (independent variable). A main effect is the set of mean differences among the levels of one factor. Main effect for factor A: obtained by computing the overall mean for each row in the matrix. Main effect for factor B: obtained by computing the overall mean for each column in the matrix. When the research design is represented as a matrix with one factor determining the rows and the second factor determining the columns, the mean differences among the rows describe the main effect of one factor, and the mean differences among the columns describe the main effect of the second factor.

Define the term coefficient of determination.

r2 = coefficient of determination = "variance accounted for" (measure of effect size)

What are the characteristics of a factorial design? AKA two-factor ANOVA (when only two independent variables)

-More than one independent variable (or quasi-independent variable). -Two or more levels (conditions) for each independent variable. A factorial design studies the effect of more than one independent variable at a time on a dependent variable.

How do we obtain the critical value for an F ratio?

-Select an alpha level. -Find the df values for the numerator and denominator of the F-ratio (there are two df values, one for the numerator and one for the denominator): between-treatments df = k - 1, within-treatments df = N - k, where k = number of treatment conditions and N = total number of participants. -Consult the F-distribution table to find the critical value.
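
A minimal Python sketch (SciPy and the example values alpha = .05, k = 4, N = 40 are assumptions) that looks up the same critical value the F table would give:

```python
# Sketch: critical value of F from the F distribution instead of the table.
from scipy.stats import f

alpha = 0.05
k, N = 4, 40              # hypothetical: 4 treatments, 40 participants
df_between = k - 1        # numerator df
df_within = N - k         # denominator df

F_cv = f.ppf(1 - alpha, dfn=df_between, dfd=df_within)
print(F_cv)               # reject H0 if F_obt exceeds this value
```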

How do we decide whether to reject or fail to reject the null hypothesis for a main effect in a two-way ANOVA?

Compare each F to its critical value and decide whether to reject (or fail to reject) each null hypothesis: if Fobt is greater than Fcv, reject the null hypothesis.

What measure of effect size is used for ANOVA?

The percentage of variance accounted for by the treatment conditions, eta squared: η² = SSbetween / SStotal.

How is each F ratio in a two-factor ANOVA computed?

Divide each between-treatments MS by MSwithin, one F for each effect: FA = MSA / MSwithin, FB = MSB / MSwithin, FAxB = MSAxB / MSwithin.
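
A minimal Python sketch (statsmodels and the data frame below are assumptions, not part of the course, which computes these by hand) producing the three F ratios in one summary table:

```python
# Sketch: the three F ratios of a two-factor ANOVA via statsmodels.
# Hypothetical data: factor A has 2 levels, factor B has 2 levels,
# 3 scores per cell.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "A": ["a1"] * 6 + ["a2"] * 6,
    "B": (["b1"] * 3 + ["b2"] * 3) * 2,
    "score": [3, 4, 5, 6, 7, 8, 4, 5, 6, 10, 11, 12],
})

model = ols("score ~ C(A) * C(B)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)
print(table)   # rows: C(A), C(B), C(A):C(B), Residual, each with its F
```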

What does MS represent? (State the term and define it).

MS stands for mean square: the sample variance, i.e., the mean of the squared deviation scores.

How are MSbetween and MSwithin computed?

MSbetween = SSbetween/dfbetween MSwithin = SSwithin/dfwithin

How is an F ratio computed?

F = MSbetween / MSwithin. MSbetween measures the size of the mean differences between treatments; MSwithin measures the magnitude of differences expected when there is no treatment effect. Conceptually, F = (treatment effect + chance/error) / (chance/error): when there is no treatment effect the numerator and denominator both measure only chance/error and F is near 1.00, while a real treatment effect pushes F above 1.00.
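
A minimal Python sketch (NumPy/SciPy and the three groups of scores are assumptions) computing SS, MS, F, and η² by hand for a one-way ANOVA and checking the F against scipy.stats.f_oneway:

```python
# Sketch: one-way ANOVA by hand (SS, MS, F, eta-squared) for three
# hypothetical groups, checked against scipy.stats.f_oneway.
import numpy as np
from scipy.stats import f_oneway

groups = [np.array([2.0, 3.0, 4.0, 3.0]),
          np.array([5.0, 6.0, 5.0, 6.0]),
          np.array([8.0, 7.0, 9.0, 8.0])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, N = len(groups), len(all_scores)

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_total = ((all_scores - grand_mean) ** 2).sum()

ms_between = ss_between / (k - 1)            # dfbetween = k - 1
ms_within = ss_within / (N - k)              # dfwithin = N - k
F_manual = ms_between / ms_within
eta_squared = ss_between / ss_total          # effect size

F_scipy, p = f_oneway(*groups)
print(F_manual, F_scipy)                     # should agree
print("eta^2 =", round(eta_squared, 3))
```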

Final Exam: What are non-parametric tests? How do they differ from parametric tests?

Chi-square tests (χ²) are examples of non-parametric tests: they do not require assumptions about population parameters, nor do they test hypotheses about population parameters. Parametric tests (like t tests and ANOVA) do include assumptions about parameters and hypotheses about parameters.

What is the simplest possible factorial design? What is the factorial notation for this design?

The simplest is a 2 x 2 (A x B) design: two factors (independent variables) with two levels of each factor (two conditions for each independent variable); the two factors are identified as A and B. In general, a factorial design has MORE THAN ONE independent variable (or quasi-independent variable) and TWO OR MORE LEVELS (conditions) for each independent variable.

What is the purpose of a regression analysis? In what research situation is it appropriate to perform a regression analysis?

To predict the value of one variable given knowledge about another variable (The best predictor of future behavior is the average of past behavior)

Final Exam: What does the chi-square test for goodness-of-fit measure?

Uses frequency data from a sample to test hypotheses about the shape/proportions of a population. Each individual in the sample is classified into one category; the data, called observed frequencies, count how many individuals are in each category.

What values are possible for an F ratio?

F can never be negative, because we are dealing with variances: a sample can never have less than zero variability (MS can never be negative), so F-ratios range from 0 upward.

Define alpha inflation.

When you do more and more t tests, you inflate the alpha level: each test carries a 5% risk of a Type I error, so the overall error rate grows beyond .05. This is the main problem that designers of post hoc tests try to deal with. Alpha inflation refers to the fact that the more tests you conduct at alpha = .05, the more likely you are to claim a significant result when you should not (i.e., a Type I error). It is sometimes called familywise error or experimentwise error, and it occurs because multiple tests are conducted on the same data set; when too many tests are conducted, the actual alpha for the whole family of tests is higher than the nominal value used for each test.
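
A minimal Python sketch of the arithmetic behind alpha inflation (using the standard approximation for independent comparisons, 1 - (1 - alpha)^c, which is an assumption added here for illustration):

```python
# Sketch: how the familywise (experimentwise) error rate grows as more
# tests are run at alpha = .05, assuming independent comparisons.
alpha = 0.05
for c in (1, 3, 6, 10):
    familywise = 1 - (1 - alpha) ** c
    print(f"{c:2d} tests -> familywise error ~ {familywise:.3f}")
# 1 test -> .050, 3 tests -> .143, 6 tests -> .265, 10 tests -> .401
```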

How do we decide whether to reject the null hypothesis for a single-factor (one-way) ANOVA?

When the sample data produce a large F-ratio, we reject the null hypothesis and conclude that there are significant differences between treatments. (If the F-ratio is close to 1.00, the data are consistent with the null hypothesis and we fail to reject it.)

What equation is used to predict Y in a regression equation?

Ŷ = bX + a (the final form of the regression equation), where Ŷ = the predicted value of the criterion variable (what we are trying to predict), b = the slope, and a = the intercept.
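
A minimal Python sketch (NumPy and the X/Y data are assumptions) that builds Ŷ = bX + a from the slope and intercept formulas on the earlier cards and uses it to predict, checked against NumPy's least-squares fit:

```python
# Sketch: building and using the regression equation Y_hat = bX + a.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 4.0, 5.0, 4.0, 7.0])
n = len(X)

b = (np.sum(X * Y) - np.sum(X) * np.sum(Y) / n) / (np.sum(X**2) - np.sum(X)**2 / n)
a = Y.mean() - b * X.mean()              # a = M_Y - b * M_X

def predict(x):
    return b * x + a                     # Y_hat = bX + a

print(predict(4.0))                      # predicted Y for X = 4
slope, intercept = np.polyfit(X, Y, deg=1)
print(slope, intercept, b, a)            # should agree
```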

