Psy 320 Final


When to use a Goodness of fit (one-sample) Chi Square versus a Test of Independence (two-sample) Chi Square

-One-sample/Goodness of Fit: use when testing one factor (with multiple levels), the same way a one-way ANOVA has a single factor. You're finding out whether the data you collected are distributed across the levels of the factor the way the Ho predicts (usually equally). (We're not running this test on SPSS.)
-Two-sample/Test of Independence: use when testing two factors. With this kind of test you're finding out whether the two factors are independent of each other or not. If they're independent, there's no relationship between the two factors (they're unrelated to each other) and you retain the Ho; the result is not significant.
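Not something we do in class, but a quick sketch of the two-sample/test-of-independence version in Python's scipy (the 2x3 table of counts below is made up) can make the idea concrete:

```python
# Hypothetical observed frequencies: gender (rows) x political party (columns).
# This is a sketch with made-up numbers, not data from the course.
from scipy.stats import chi2_contingency

observed = [[12,  8, 10],
            [ 9, 11, 10]]

chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p)     # if p >= .05, retain the Ho: the two factors are independent
print(expected)    # the frequencies you'd expect if the factors were unrelated
```

The expected table is what the counts would look like if the Ho (independence) were exactly true, which is what the test compares the observed counts against.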

When plotted on a graph, if the lines representing the effects of two variables are parallel, do you have: an interaction effect, no interaction effect, possibly an interaction effect, or cannot be determined?

-No interaction effect: there's an interaction only when the two lines are not parallel (they converge, diverge, or cross). If they're parallel, the effect of one factor is the same at every level of the other, so there's no interaction.
-Other possibilities: the lines are parallel but far apart, so there's a main effect for Factor A (the factor the separate lines represent). OR the lines are parallel and tilted, so there's a main effect for Factor B (the factor on the x-axis). OR both.
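If it helps to see the picture, here's a rough sketch (mine, not from class) that plots made-up cell means with Python/matplotlib; Factor B is on the x-axis and each line is a level of Factor A, matching the answer above:

```python
# Made-up cell means just to show the graph: same slope = parallel lines = no interaction.
import matplotlib.pyplot as plt

levels_b = ["B1", "B2"]         # Factor B on the x-axis
a1_means = [10, 20]             # line for Factor A, level 1
a2_means = [15, 25]             # line for Factor A, level 2 (same slope, so parallel)

plt.plot(levels_b, a1_means, marker="o", label="A1")
plt.plot(levels_b, a2_means, marker="o", label="A2")
plt.ylabel("Mean of the dependent variable")
plt.legend()
plt.title("Parallel lines: no interaction")
plt.show()
```

Change a2_means to something like [15, 16] and the lines are no longer parallel, which is what an interaction looks like.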

If you have 30 respondents identifying their political preference (i.e., Democrat, Republican, Independent), what would the expected frequency be for each category? (the choices are 10, 20, 30, or 40)

10. Expected frequency is the frequency you would expect to see if the Ho is true. I don't need to know how to calculate it, just the concept: the Ho says the frequencies should be proportionally equal across categories. Since the political preference factor has 3 levels and there are 30 respondents, you'd expect 30 / 3 = 10 respondents in each cell. The other answers don't make as much sense.
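A tiny sketch of the same arithmetic in Python/scipy (the observed counts are invented; in class we don't run this one on SPSS at all):

```python
# 30 respondents, 3 parties, so the expected frequency under the Ho is 30 / 3 = 10 per category.
from scipy.stats import chisquare

observed = [14, 9, 7]            # made-up counts that sum to 30
result = chisquare(observed)     # f_exp defaults to equal frequencies, i.e. 10 each
print(result.statistic, result.pvalue)
```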

In order to meet the sample assumption associated with parametric statistics, how many subjects do you need: 20, 30, 50, or 100?

30. The rule of thumb comes from the central limit theorem: with roughly 30 or more randomly sampled subjects, the sampling distribution of the mean is approximately normal even if the population isn't, so the normal-distribution and random-sampling assumptions behind parametric tests are reasonably safe.
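My own rough illustration (not from the textbook) of why 30 is the usual cutoff: even when the population is clearly skewed, means of random samples of size 30 pile up in an approximately normal way.

```python
# Simulate sample means of n = 30 draws from a skewed (exponential) population.
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)   # clearly non-normal

sample_means = [rng.choice(population, size=30).mean() for _ in range(5_000)]
print(np.mean(sample_means), np.std(sample_means))      # centered near the population mean of 2
```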

If you find that income significantly varies on the basis of gender, you have:

A main effect: income is being affected by gender, so gender is the independent variable (factor) and income is the dependent variable. Finding that income varies by gender is a main effect for gender; by itself it says nothing about an interaction.

What do you put in as the third (dependent) variable on SPSS in variable view with: ANOVA? Chi Square?

ANOVA: "Score"- checking for two main effects, 1 interaction. Do a "univariate" test Chi Square: "Frequency"- must weight cases by frequency, then do "Crosstabs". frequency is also the difference between ANOVA and Chi Square. Frequency involves proportions in the data which show up as a single number in each cell. ANOVA involves multiple numbers in each cell. With Chi Square, you're just checking for whether or not the two factors are independent of each other. If they're dependent of each other, it means there's some kind of relationship/interaction between the two factors.

What's the Greek symbol for Chi? How about the symbol used for ANOVA?

Chi: the Greek letter χ (looks like an X); chi square is written χ² (X^2). ANOVA: F.

True or False? Parametric statistics are preferred when you are analyzing categorical variables.

False. Categorical variables are nominal (names) or ordinal (rankings), i.e., noncontinuous/limited variables. Pearson correlation needs continuous variables, and ANOVA needs a continuous dependent variable. Nonparametric tests like chi square don't have to meet as many assumptions, so they can handle nominal and ordinal variables.

What does it mean if there's an independent relationship found after conducting a Chi Square test for independence? How about if there's a dependent relationship?

Independent relationship: it means there's no relationship, so we retain the Ho. Dependent relationship: there is a relationship, so we reject the Ho; it looks like an interaction.

Other name for "factor"

Independent variable

When analysis of data reveals a difference between levels of a factor, what is this called?

Main effect: each factor has some levels. When you look at one factor (Factor A) and ignore the other (Factor B), you may find that the means of Factor A's levels differ. If that difference is statistically significant, Factor A has a main effect.

A factorial analysis of variance can be used to make a judgment about the magnitude of an observed difference. True or False?

Other wording: can you use a two-way ANOVA to judge whether an observed difference is statistically significant? True. It's one of the significance tests you can use to see whether the observed difference is due to chance, as the Ho says (all means are equal), or to something more interesting, as the alternative hypothesis says (at least one mean differs from the others). One of the factors may have a main effect, or there may be an interaction going on.

Results from an ANOVA are placed in what type of table?

Source table: Idk if we need to know this or not. But with an ANOVA test, you have to go back to the SOURCE for answers

True or False? The one sample chi-square is used to determine whether the distribution of a single categorical variable is significantly different from that which would be expected by chance.

True. A one-sample chi square (a nonparametric test) is also called a "goodness of fit" chi square test. It tests how well your sample's observed frequencies fit the frequencies expected under the Ho (e.g., the population distribution or an equal split). In the statement, "expected by chance" means the distribution the Ho predicts, and that's exactly what you're testing against.

A factorial ANOVA is which type of analysis: univariate, bivariate, multivariate, or exivariate

Univariate: remember that you click "univariate" when doing a factorial/two-way ANOVA on SPSS. It's univariate because we're testing with only one dependent variable

When would you use a factorial ANOVA vs a simple ANOVA to test the significance of the difference between the average of two or more groups?

Use a factorial ANOVA (also called a two-way ANOVA) when testing more than one factor/independent variable. With this kind of test you can check the main effect of each factor and the interaction between the factors (main effects apply to each independent variable/factor separately; the interaction applies to the factors together). Use a simple ANOVA (also called a one-way ANOVA) when testing a single factor/independent variable.
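For comparison outside SPSS, here's a sketch with made-up scores in Python: a simple (one-way) ANOVA with scipy and a factorial (two-way) ANOVA with statsmodels, which reports the two main effects and the interaction.

```python
import pandas as pd
from scipy.stats import f_oneway
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simple / one-way ANOVA: one factor with three levels (made-up scores).
group1, group2, group3 = [4, 5, 6], [7, 8, 9], [4, 6, 5]
print(f_oneway(group1, group2, group3))

# Factorial / two-way ANOVA: two factors, so we get two main effects and an interaction.
data = pd.DataFrame({
    "a":     ["a1", "a1", "a2", "a2", "a1", "a1", "a2", "a2"],
    "b":     ["b1", "b2", "b1", "b2", "b1", "b2", "b1", "b2"],
    "score": [4, 7, 5, 9, 5, 8, 6, 10],
})
model = ols("score ~ C(a) * C(b)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # prints the source table (SS, df, F, p)
```

The anova_lm output is the source table mentioned earlier, with one row per source of variance (Factor A, Factor B, the interaction, and the residual).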

When do you use a post hoc/multiple comparisons test (Bonferroni, Tukey)?

Use a post hoc test when you run an ANOVA comparing three or more sample means and get p < .05 (a t test only compares two means, so it doesn't need one). A significant result means the alternative hypothesis, that at least one of the means differs from the others, is supported. But which mean? All of them? Just one? The post hoc comparisons tell you which specific pairs of means differ.
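A sketch of a Tukey post hoc test in Python (needs a reasonably recent scipy; the group scores are made up), run after the overall ANOVA comes out significant:

```python
from scipy.stats import tukey_hsd

# Three made-up groups; run this only after the ANOVA gives p < .05.
group1, group2, group3 = [4, 5, 6], [7, 8, 9], [4, 6, 5]
result = tukey_hsd(group1, group2, group3)
print(result)   # one comparison (and one p value) per pair of group means
```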

What does a factorial (two-way) ANOVA tell you?

It tells you whether there's a main effect for either of the factors and whether there's an interaction effect between the factors.

With both kinds of Chi Square tests, the closer the observed and expected values are to one another, the ______ likely the dimensions (factors) are to be independent of one another.

More. When the observed and expected values are close, the Ho is more likely to be retained, and the Ho states that the factors are independent of each other. So when observed and expected values are close together, it's more likely the factors are independent. The reverse also holds: the more the observed and expected values differ, the less likely the factors are to be independent of one another (the less likely it is that the Ho is true).

Statistics that are not governed by the parameters of the population are referred to as ___________ statistics.

nonparametric

How do you write that something is statistically significant?

p< .05 The probability that there will be a Type 1 error (that we made an error in rejecting the Ho) is less than 5%. That means it's safe to reject the Ho; we most likely made the right choice. If any of the p values on SPSS are less than .05, they're statistically significant.

X^2 obtained = 0 when frequency obtained is (<, >, equal to, or not equal to) frequency expected.

When frequency obtained = frequency expected. The chi square formula is X^2 = Σ (fo - fe)^2 / fe, so the expected frequency is subtracted from the observed frequency in the numerator. If fo = fe in every cell, the numerator is 0, so X^2 (chi square) equals 0 no matter what the denominator is.

