PSY 350 - Research and Design II (Exam 3)

If you don't find significant differences between one variable and another, what should you not write?

"There was no significant difference between the IV and the DV." "There was no significance in the data." You should be specific. Also remember that we don't *accept* or *prove* either the null or alternative hypothesis - we reject or fail to reject the null.

An example of a contribution might be:

"This study established that statistics and methodology courses are in fact important for psychology majors' post-graduate success."

What is a *good* summary of findings for a *Discussion*?

"We found that grades in PSY 350 significantly predicted post-graduation salaries."

How do we determine degrees of freedom for a correlation? Explain the formula.

df = n - 2, where n is the number of pairs of scores. We subtract 2 because the line summarizing the relationship has two parameters (a slope and an intercept) estimated from the data.
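
The formula itself isn't written out on this card, but the standard version of the test (when the null hypothesis is ρ = 0) is:

```latex
df = n - 2, \qquad t = \frac{r\sqrt{n-2}}{\sqrt{1-r^{2}}}
```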

Give some examples of possible limitations.

*INTERNAL VALIDITY:* Being unable to control important individual differences. Realizing after the fact that you had a confounding variable, or a plausible alternative explanation. Having too little statistical power or too much noise in your data to find results. *EXTERNAL VALIDITY:* Having an unrepresentative sample. Having measures that are far removed from the real world. Having manipulations that are too weak or don't work.

What are the steps of a correlation hypotheses? Explain each step.

*Step 1: State your hypotheses.* The null hypothesis for a correlation is that there is *no correlation* at the population level. The simplest alternative hypothesis is that there *is* a correlation, or we could make a *directional* prediction. *Step 2: Choose your α level and find the critical region.* To test the significance of a correlation, we use a variant of the *one-sample t-test*. So we use the t distribution, and we calculate degrees of freedom just as we do for t-tests. We can use this to look up the critical t value for our sample size, or we can allow our software to do it and calculate the exact p-value. *Step 3: Calculate your test statistic.* We test the significance of a correlation by using the *one-sample t-test* framework, but compare the *sample correlation* to the *population correlation* instead of the sample mean and population mean. If we predict that ρ = 0 under the null hypothesis, then our test is simpler. (See lecture ___ for formulas.) *Step 4: Make a decision.* We can look up the critical t value in a t-table, or use the exact p-value from our software. If we made a directional prediction, we use a one-tailed test. If we made a nondirectional prediction, we use a two-tailed test. UNFINISHED (she goes so dang fast).
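
A minimal sketch of the same four-step logic in Python (not from the lecture slides; the data and variable names are hypothetical), using the t-based test with df = n - 2:

```python
# Hedged sketch: hypothetical data, standard t-based test for r (null: rho = 0).
import numpy as np
from scipy import stats

x = np.array([2.1, 3.4, 3.9, 4.8, 5.2, 6.0, 6.7, 7.1])   # hypothetical predictor
y = np.array([1.8, 2.9, 3.2, 4.9, 5.5, 5.1, 6.9, 7.4])   # hypothetical outcome

r, p_scipy = stats.pearsonr(x, y)          # software route: r and an exact p-value
n = len(x)
df = n - 2                                 # degrees of freedom for a correlation
t = r * np.sqrt(df) / np.sqrt(1 - r**2)    # Step 3 by hand: t-test variant for r
p_manual = 2 * stats.t.sf(abs(t), df)      # two-tailed p (nondirectional prediction)

print(round(r, 3), df, round(t, 3), round(p_manual, 4), round(p_scipy, 4))
# Step 4: reject the null if p < alpha (e.g., .05); p_manual and p_scipy agree.
```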

What are the *three* different hypotheses we can have in a two-factor design?

1. Does IV #1 have an effect on the DV? (Main effect for Factor A) 2. Does IV #2 have an effect on the DV? (Main effect for Factor B) 3. Do IV #1 and IV #2 interact to affect the DV? (Interaction between Factor A and Factor B)

Describe the ANCOVA process.

1. Start with a *regression* model predicting your DV from your covariate(s). 2. Then test your grouping variable(s) with an *ANOVA* on the variability that remains after the covariate(s) have been accounted for. (Week 12, p. 38)
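
A minimal sketch of this regression-plus-ANOVA idea, assuming a hypothetical data set with columns `dv`, `covariate`, and `group`; this is one common way to run an ANCOVA in Python, not necessarily the exact procedure from Week 12:

```python
# Hedged sketch: hypothetical data and column names; covariate entered alongside the factor.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "dv":        [3, 5, 4, 6, 7, 8, 6, 9, 10, 9, 11, 12],  # hypothetical outcome
    "covariate": [1, 2, 2, 3, 3, 4, 2, 3, 4, 4, 5, 6],     # hypothetical continuous covariate
    "group":     ["A"] * 6 + ["B"] * 6,                     # hypothetical grouping factor
})

model = smf.ols("dv ~ covariate + C(group)", data=data).fit()  # regression step first
print(sm.stats.anova_lm(model, typ=2))  # F tests for the covariate and the grouping factor
```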

What is the relationship between a correlation and an effect size?

A correlation is its own effect size. Correlations further from zero are stronger; correlations closer to zero are weaker.

What does a larger b (regression coefficient) tell us? What does a smaller b tell us?

A larger b means a change in the predictor corresponds to a larger change in the outcome; a smaller b means a change in the predictor corresponds to a smaller change in the outcome.

When does a limitation matter?

A limitation only really matters if it would *change the way we interpret our results*.

The *test statistic* for regression is a variant of:

ANOVA. Hint: Because we look at an F-statistic to know whether it's significant, it's a variation of an ANOVA.

What are examples of continuous variables?

Age; height; time; scores on individual difference measures (personality, intelligence, attitudes, values, etc.).

What *limitations* could change our results?

Alternative explanations; confounding variables; moving from a narrow sample to a much broader sample; moving from one narrow population to another narrow population; weak manipulations, or manipulations that backfire; poor measures (too much error, i.e., a lack of reliability, to be meaningful, or not aligned with our theoretical understanding of the variable). *The importance of any limitation depends on our specific study.*

When you report the results of a regression analysis, you need to talk about what?

Always the significance of the overall regression, and the significance of the individual coefficients if the overall regression is significant.

Give an example of a mediation.

An idea: The effect of learning in the entrepreneurship courses on intentions to start a small business is *mediated by* self-efficacy.

When should we use correlations?

Any time your *predictor variable is continuous*. Any time you have at least one continuous variable and *no* strong expectations about *which one predicts the other*. Correlations are *symmetrical*; it doesn't matter which variable comes first. Remember, correlation is necessary but not sufficient for causation, in part because we don't know which variable comes first. This is *not* true for t-tests or ANOVA, where we can't switch the grouping variables and the outcome variable. When we do have an idea about which comes first, we can use *regression*, which is not necessarily symmetrical.

What does a (our *intercept*) tell us in a regression analysis?

As our intercept, it tells us what value we'd expect for the outcome if the value of the predictor was zero.

What does b (our *regression coefficient*) tell us?

As the slope of our regression line, b tells us how strong the relationship between our predictor and our outcome is.

Don't expect extremely high correlations between different things, even if there is good theoretical reason to expect that they are related. Why?

Because these are complicated outcomes. No one variable is going to explain all of the variance in another. Expecting correlations in the .50+ range is probably too high - most well-established relationships in psychology are in the .20 - .30 range.

When should you be concerned about the *fit* of a regression model?

Before you interpret any coefficients.

A standardized regression coefficient is:

Beta

ANCOVA is for what kind of variables:?

Both continuous and grouping variables.

How can we minimize our Type I error rate when testing multiple hypotheses?

By testing all of our hypotheses in one analysis, we keep our Type I error rate under control, because we set an α level for the whole test, all together.

How do we calculate a *standardized regression coefficient*?

By transforming all of the raw scores to z-scores before we begin our analysis. We call this Beta.
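
A minimal sketch with made-up scores showing that z-scoring both variables first turns the slope into Beta (which, with a single predictor, matches the correlation):

```python
# Hedged sketch: hypothetical raw scores; z-score both variables, then the slope is Beta.
import numpy as np

x = np.array([2.0, 3.0, 5.0, 6.0, 8.0, 9.0])    # hypothetical predictor (raw scores)
y = np.array([1.0, 4.0, 4.0, 7.0, 8.0, 10.0])   # hypothetical outcome (raw scores)

zx = (x - x.mean()) / x.std(ddof=1)   # transform raw scores to z-scores
zy = (y - y.mean()) / y.std(ddof=1)

beta = np.polyfit(zx, zy, 1)[0]       # slope of the regression line through the z-scores
r = np.corrcoef(x, y)[0, 1]           # Pearson correlation of the raw scores

print(round(beta, 3), round(r, 3))    # with one predictor, Beta equals r
```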

What is Stage 1 of F tests in one analysis? (Week 9, p. 53)

Calculating between-treatments variability and within-treatments variability from our total variability.

What is Stage 2 of F tests in one analysis? (Week 9, p. 53)

Calculating factor A's variability, factor B's variability, and our interaction variability from our between-treatments variability.
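
Putting the two stages together in standard two-factor ANOVA notation (the labels are generic, not copied from the slides):

```latex
SS_{\text{total}} = SS_{\text{between}} + SS_{\text{within}}, \qquad
SS_{\text{between}} = SS_{A} + SS_{B} + SS_{A \times B}
```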

When we want to study a variable that doesn't neatly break participants into groups - a *continuous variable* - what do we use?

Correlations and correlation-based methods (e.g. regression).

Which of these would be an appropriate interaction? A. Social pressure will increase financial incentives. B. The effect of financial incentives will be greater than the effect of social pressure. C. People will make the most environmentally friendly choices when they have social pressure AND financial incentives. D. The effect of financial incentives will be stronger when social pressure is also present.

D. The effect of financial incentives will be stronger when social pressure is also present.

When talking about your strengths, what should you do, and what should you not do?

DON'T oversell. DO tell the reader what you did well. Persuade them that your study was worth doing. DO talk about what could have been better, but you need *balance*. Surely there was something you did right.

Dr. Gibbons wants to know whether you learned anything this semester. If she had thought to conduct a pre-test, she could have answered this question with a:

Dependent-samples t-test

Study Overview on page 52 of Lecture 12.

Dew it.

If the lines cross, what kind of interaction do you have?

Disordinal

In ANCOVA, we get F statistics for what?

Each main effect, each covariate, and the interaction (if we have 2 factors).

How do we set up a two-way ANOVA table?

Expand your ANOVA table to include the SS, df, MS, F, and p for *each of your three tests*. Divide into "between" and "within" first, then break down the "between" variance. If you like, you can add an asterisk to help the reader quickly see which p-values are less than your α level. (Just one asterisk is fine; remember that smaller p-values don't necessarily mean bigger or more important effects.)
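
A layout sketch of what such a table might look like (the dashes are placeholders, not real values):

```
Source            SS     df     MS     F      p
Between
  Factor A        --     --     --     --     --
  Factor B        --     --     --     --     --
  A x B           --     --     --     --     --
Within            --     --     --
Total             --     --
```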

What is a *post hoc analysis*?

Extra analyses that rule out alternative explanations. They're not a part of your hypotheses, and they are not the same as *post hoc tests* that follow an ANOVA.

When we write about ANCOVA, we should include the results of all of our F Tests, including:

F statistic; degrees of freedom (regression, residual OR between, within); p-value for the overall F.

What do we report when we write about a regression?

F statistic; degrees of freedom (regression, residual); p-value for the overall F; effect size (r^2); SE of the estimate; and the coefficients in the regression equation (b for every predictor, and often a, Beta, and the p-value for each predictor).
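
A minimal sketch, with hypothetical data and made-up variable names, of where each of these reported statistics can be pulled from in Python's statsmodels:

```python
# Hedged sketch: hypothetical data; shows where the reported regression numbers live.
import numpy as np
import statsmodels.api as sm

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)      # hypothetical predictor
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8, 8.2, 8.9])   # hypothetical outcome

model = sm.OLS(y, sm.add_constant(x)).fit()

print(model.fvalue, model.df_model, model.df_resid)  # F statistic and its df (regression, residual)
print(model.f_pvalue)                                # p-value for the overall F
print(model.rsquared)                                # effect size (r^2)
print(np.sqrt(model.mse_resid))                      # SE of the estimate
print(model.params)                                  # a (intercept) and b (slope)
print(model.pvalues)                                 # p-value for each coefficient
```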

True or False: It is *never* appropriate to report new analyses in the Discussion section of your paper.

False

True or False: Moderating effects are often large.

False. Moderating effects are often small. They need large samples to find *statistically* significant effects.

If I find that lima bean consumption predicts well-being, with a b of -.50, I can conclude that:

For every 1-bean increase in lima bean consumption, well-being declines by .50

If the predictor is not present at all (i.e., has a value of zero) in a regression analysis, what outcome are we likely to get?

For some predictors, a value of 0 is meaningful; for some (e.g. gender, personality) it's not.

If we have two factors and each factor has two levels, how many possible combinations of factors do we have?

Four; this means we will have four conditions in our study. We often call this a "2 x 2 factorial design", because we have two levels of factor A and two levels of factor B.

What is the independent variable, dependent variable, and mediator in this example? Having friendships with people in an outgroup reduces prejudice toward that outgroup, because those friendships reduce anxiety and anxiety predicts prejudice.

Friendships = IV; Anxiety = Mediator; Prejudice = DV

What does slope in a linear model tell us?

How much of an increase (or decrease if it's negative) we can expect in Y for every one-unit increase (or decrease) in X.

What is HARKing?

Hypothesizing After the Results are Known. "I knew this all along!"

What do you report if your effect size isn't significant? If it is?

If an effect isn't significant, you just say so. If an effect is significant, you need to report its *size* and *direction*.

What does r^2 for our regression model represent?

If we only have one predictor, this is literally just the correlation squared. Just like eta^2, this is interpreted as the percentage of variance in the outcome that is explained by the predictor.
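
A quick arithmetic illustration with a hypothetical value (not from the course materials):

```latex
r = .30 \;\Rightarrow\; r^{2} = (.30)^{2} = .09 \quad \text{(about 9\% of the variance in the outcome explained)}
```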

The difference between correlational and experimental research is that:

In correlational research, you don't manipulate any variables.

If your overall effect is not significant, do you need to report the effect size?

In the real world, no, but for the purposes of this class, yes.

Having a confounding variable is a limitation because it decreases:

Internal validity

What do you focus on when you have a significant interaction?

Interpreting the interaction. An interaction changes the effect of your variables, so you view the main effects with caution because you might come to the wrong conclusions if you interpret the main effects when an interaction is present. (Week 9, p. 63)

How do you determine the strength of a correlation? / if a correlation is small or large?

It depends on what you're studying. There's not a minimum guideline or size that applies across all areas of psychology. You need context to interpret the size of a correlation.

Why is HARKing bad science?

It ignores the possibility of Type I error. It tends to capitalize on chance (and be difficult to replicate). It tends to make theories unnecessarily complicated. It misleads the reader about the theoretical basis for your work.

What does using a straight line do?

It keeps things simple, both computationally and in interpretation, and implies that the relationship is consistent.

What is the independent variable, dependent variable, and mediator in this example? The effect of learning in the entrepreneurship courses on intentions to start a small business is *mediated by* self-efficacy.

Learning = IV; Self-efficacy = Mediator; Startup intentions = DV

What are limitations?

Limitations are *boundary conditions* as opposed to flaws. They also go back to internal and external validity.

To interpret the *direction* of your effect size, what do you look at? (ANCOVA)

Look at simpler statistics: for your factors, the *means* (and post hoc tests if necessary); for your covariate, the *correlation* or *regression* with the DV.

T-tests and ANOVA are used to compare what of different groups?

Means. We're talking about grouping variables when we use these techniques. Grouping variables normally have a manageable number of levels.

When does moderation occur?

Moderation occurs when one variable changes the *effect* of another variable on an outcome. Moderation is the *same* thing as an interaction.

Should you code a truly *nominal* variable (e.g. race, religion, major, industry)?

No.

Tony Frank wants to argue that CSU students graduate with less student loan debt than the national average. He could make his case with a:

One-sample t-test

A study is designed to test whether there is a difference in mean daily calcium intake in adults with normal bone density, adults with osteopenia (a low bone density which may lead to osteoporosis) and adults with osteoporosis. Adults 60 years of age with normal bone density, osteopenia and osteoporosis are selected at random from hospital records and invited to participate in the study. Each participant's daily calcium intake is measured based on reported food intake and supplements. What kind of analysis will likely be used for this study?

One-way ANOVA

Say you have a group of individuals randomly split into smaller groups and completing different tasks, e.g., you're studying the effects of tea on weight loss and form three groups: green tea, black tea, and no tea. What type of analysis are you most likely to use?

One-way ANOVA

Say you have a study where individuals are split into groups based on an attribute they possess. For example, you might be studying leg strength of people according to weight. You could split participants into weight categories (obese, overweight and normal) and measure their leg strength on a weight machine. What type of analysis are you most likely to use?

One-way ANOVA

Katie wants to test whether CSU students have better GPAs than CU students or UCD students. Katie should use:

One-way ANOVA. Hint: know how many groups you'd be testing.

When do we plot an interaction?

Only when it's significant.

If the lines do not cross, what kind of interaction do you have?

Ordinal

What does a positive b (regression coefficient) tell us? A negative b?

Positive b = a change in the predictor corresponds to an increase in the outcome. Negative b = a change in the predictor corresponds to a decrease in the outcome.

Why do post hoc analyses?

Post hoc analyses help you address the "what ifs." If the answer is no, that alternative explanation can't account for your results. If the answer is yes, that alternative explanation is plausible and should be explored further.

How do we set up a two-way table of means?

Put the levels of Factor A in rows and the levels of Factor B in columns. Use an extra column to the left and an extra row on top to include the factor names. Report the means for each cell (combination of your two factors) *and* row/column means.
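
A layout sketch with placeholder labels (M11 = the mean of the cell at Factor A level 1 and Factor B level 1, and so on; no real values are implied):

```
                     Factor B: Level 1   Factor B: Level 2   Row mean
Factor A: Level 1          M11                 M12             M1.
Factor A: Level 2          M21                 M22             M2.
Column mean                M.1                 M.2           Grand mean
```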

What two analyses does an analysis of covariance (ANCOVA) combine?

Regression and ANOVA.

What outcome are regressions better for (when compared to correlations)?

Regressions are better for predicting outcomes. When we only have one predictor and one outcome, the *statistical significance* of a regression model is exactly the same as the corresponding correlation, but we can use regression to look at *multiple predictors at once* (something we can't do with a correlation).

What are correlations and regressions good for?

Showing relationships among variables.

The Dean of Student Affairs wants to know whether students who participate in more extracurricular activities while in college give more to the University after they graduate. They could test this with a:

Simple linear regression

In science, why do you admit our limitations?

So that we don't over-interpret our results. So that we (and others) can learn from them and do better next time.

When you find significant effects, how can you be clear about what kind of effects they are? Give examples.

State *differences between levels* of a grouping variable. State *relationships between variables* if the variables are continuous.

What are the four steps of testing the significance of a regression analysis?

Step 1: State your hypothesis. Step 2: Choose your α level and find the critical region. Step 3: Calculate your t-statistic. Step 4: Make your decision.

What does beta in a standardized regression coefficient mean?

The correlation between our two variables (x and y) on the familiar scale from -1 to +1.

What three vairables does an interaction involve?

The dependent/outcome variable, and the *two* predictors/independent variables that interact to affect it.

If you read only one paragraph of a journal article, it should be:

The first paragraph of the Discussion

What do you need to explain when you have a significant interaction?

The form of the interaction. You need to interpret the interaction and explain how the effect of one variable changes at different levels of the other. You also need to discuss whether the form of the interaction that you got was what you expected to get or not.

Which axis do IV/predictor variables always go on?

The horizontal axis

If the lines are separated, which main effect is that consistent with?

The main effect of IV #2 (the variable represented by the separate lines).

What do we look at to see if our regression model fits?

The omnibus F-statistic

Which limitations should you focus on in your Discussion?

The ones that present the most plausible alternative explanations for your results.

What does a confidence interval tell you?

The plausible range of values for a statistic.

In ANCOVA, which step (theoretically) comes first: the regression step or the ANOVA step?

The regression step

What do you always report in a two-way (or more) ANOVA?

The same information that you would report for a one-way ANOVA: means, F statistic, degrees of freedom, p-value for the overall F, and effect size.

Define *regression*.

The statistical technique for finding the best-fitting straight line to summarize the relationship between two variables.

Which axis do DV/outcome variables always go on?

The vertical axis

What's the difference between regression lines and correlations?

They may look similar and are, in fact, closely related, but correlations are *symmetric* (the correlation of A with B is the same as the correlation of B with A) while regressions are *not* (reversing the order of our variables will give us a different result). In regression, we need to designate a predictor (x) and an outcome (y).

Why do we use linear models?

They visually summarize the relationship and allow us to make predictions about one variable based on the other.

True or False: A moderating variable is the same thing as an interaction.

True

True or False: Whether you get a significant correlation or not depends on your sample.

True

True or False: A larger sample gives you a narrower (more precise) confidence interval.

True.

True or False: If we conduct three separate t-tests (or ANOVAs) to test hypotheses, our Type I error rates would add up depending on how many hypotheses we are testing.

True.

True or False: Researchers often report the correlations among all of their variables, including grouping variables.

True. This is an easy way for a reader to see major trends and patterns at a glance. It's also helpful for meta-analysis later on.

True or false: In a two-way ANOVA, we are interested in the effects of both factors on the same dependent variable.

True. The factors don't affect one another, but we expect that they could both affect the dependent variable separately or in an interactive way.

In a two-way ANOVA, how many factors do we have?

Two, but note that this is not the same thing as saying we have two levels of one factor. *Two factors* = two distinct grouping variables.

Joe Parker wants to know if his new video encouraging alumni to buy football tickets is effective, and if it might be more effective if it is paired with a free New Belgium beer. He could test this with a:

Two-way ANOVA

Say you want to find out if there is an interaction between income and gender for anxiety level at job interviews. What type of analysis are you most likely to use? What is your *outcome* variable? What are your *grouping* variables?

Two-way ANOVA The *anxiety level* is the outcome variable (or the variable that can be measured). *Gender* and *income* are the two grouping/categorical variables.

What might be a *strength* of a study?

We used well-established, validated measures of all of our variables.

What do we mean by "*best fitting*"?

We want a line that is, on average, closest to all of the data points. In other words, we want to *minimize the sum of squared distances* between each point and the line. We'll call this the *least squares solution*.
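
In symbols, using the Y = bX + a notation from these cards, the least squares solution chooses a and b to

```latex
\min_{a,\,b}\; \sum_{i=1}^{n} \bigl(Y_i - (bX_i + a)\bigr)^{2}
```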

What are we trying to accomplish with a regression analysis?

We're trying to find the straight line (linear equation/linear model) that *best fits* the data, and then evaluate *how well* that line fits the data.

What questions do regression analyses answer in relation to slope?

What linear equation best describes the data? Once we've found the equation, how well does it describe the data?

What do you tell your reader in your Discussion section?

What you found; what you think it means; why you think it matters; what they should keep in mind when interpreting the results; how your findings might be of practical value; and what the next steps are for future research.

What do you include in the first few paragraphs of your Discussion section?

What you want your reader to take away from your study.

When should you conduct post hoc analyses?

When they might help you rule out a plausible alternative explanation

When should we use a regression?

When we can clearly identify a predictor and an outcome. When we have (or we expect to have) more than one predictor.

When do we use a *simple linear regression*?

When we have one predictor and one outcome.

When are correlational techniques appropriate?

When you have continuous variables

What is correlational research?

Whenever we study variables *as they naturally occur*, without manipulating or changing them, we are conducting this kind of research. Correlational research often involves more statistics than the simple correlation. But the more complex analyses are based on the correlation, so we'll start there. As a reminder: don't use "correlation" or "correlated" unless you are talking about the statistic. "Correlational" is a little bit broader.

Can moderating variables be either grouping variables or continuous variables?

Yes

Can we have ANOVA with three or four factors and more than one independent variable?

Yes, but in the class we'll be sticking to the simplest case: two factors, independent measures, and equal sample sizes.

Can we split continuous variables into a few groups? Explain.

Yes, but we lose a lot of information when we do so. We have less statistical power, and our results are harder to find and understand. Correlation-based techniques let us use all of the information in these measures.

Is the significance test for a correlation sensitive to sample size?

Yes, very. Adding or subtracting just a couple of people can mean the difference between rejecting and not rejecting the null.

Can you correlate *ordinal* grouping variables with more than 2 categories by coding the variables?

Yes.

Is a regression a linear model? If yes, what does this mean?

Yes. This means that we represent the relationship between two variables with a straight line.

Should your Discussion section be able to stand alone?

Yes. It may be the only section someone reads.

If you find a nonsignificant correlation, stop.

You don't have enough information to be confident that there really is a correlation, so don't interpret any further. Your coefficient could just be sampling error; you don't have enough data to answer your question.

What are the risks of Post Hoc analyses?

You may not have exactly the data you want, and there's a risk of capitalizing on chance.

Why is the scale of your vertical axis important?

Your vertical axis can become misleading depending on the range of values you display. Best practice is to set your axis to match the real possible values of your scale, unless they are just ridiculously large.

An ANCOVA tests the effects of grouping variables ________ accounting for continuous variables.

after

You can correlate a continuous variable with a ______________ variable (2 categories) by coding the second variable as 0 or 1.

dichotomous

A mediating variable:

explains or facilitates the effect of another variable.

If you manipulated whether drivers listened to popular or classical music while completing a driving course, you can talk about *music preference*:

in the Discussion, but not in the Results

In an ANCOVA, the grouping variables are usually __________________ (independent), and the continuous variables are usually _______________.

manipulated; measured

Your Discussion section should be _____________ than your Results.

more abstract and speculative

The test statistic to determine whether a correlation is statistically significant is a slight variation on the:

one-sample t-test

Correlation-based techniques (correlation and regression) are appropriate when your ___________ is ___________:

predictor; a continuous variable

The effect size for a correlation is:

r. A correlation is its own effect size.

If a post hoc analysis finds a significant result, you should:

report it in your Discussion and explain that it was a post hoc analysis

A regression coefficient is the:

slope of the regression line

You want to study whether a distracted driving intervention changes the frequency of self-reported texting and driving. You assign some participants to the intervention group and some to the control group. Which type of analysis are you most likely to use?

t-test

If you are working with *grouping* variables, which family of analyses do you want?

t-tests/ANOVA. These are for describing differences among groups.

Correlation-based techniques (correlation and regression) are most appropriate for:

testing relationships among variables

R^2 is:

the total effect across several predictors.

If *moderation* is about ________ one variable affects another, *mediation* is about ________.

when; how

What does each part of Y = bX + a represent?

Y = value on the vertical (y) axis; X = value on the horizontal (x) axis; a = the intercept of the line; b = the slope of the line.
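
A quick worked example with made-up numbers:

```latex
a = 2,\; b = 0.5 \;\Rightarrow\; Y = 0.5X + 2; \qquad X = 10 \;\Rightarrow\; Y = 0.5(10) + 2 = 7
```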

